
Comparison of Stochastic Global Optimization Methods to Estimate Neural Network Weights


Abstract

Training a neural network is a difficult optimization problem because of its numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms use computational resources more efficiently, and therefore numerous random restarts with a local algorithm may be more effective than a global algorithm. This study uses Monte Carlo simulations to determine the efficiency of a local search algorithm relative to nine stochastic global algorithms when training a neural network on function approximation problems. The computational requirements of the global algorithms are several times higher than those of the local algorithm, and there is little gain from using the global algorithms to train neural networks. Since the global algorithms only marginally outperform the local algorithm in reaching a lower local minimum while requiring more computational resources, the results indicate that, for the specific algorithms and function approximation problems studied, there is little evidence that a global algorithm should be used over a more traditional local optimization routine for training neural networks. Further, neural networks should not be estimated from a single set of starting values, whether a global or a local optimization method is used.
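To make the comparison concrete, the sketch below illustrates the multistart local-search baseline described in the abstract: a small feedforward network is fit to a one-dimensional function approximation problem from many random starting weight vectors, and the best local minimum found is kept. This is a minimal illustration, not the paper's actual experimental setup; the network size, target function, number of restarts, and the use of SciPy's BFGS routine are all assumptions made for the example.

```python
# Minimal sketch (not the authors' code): multistart local search for
# neural network training. Network size, data, restart count, and the
# BFGS optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative function approximation target: y = sin(x) on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X).ravel()

H = 5  # number of hidden units (assumption)

def unpack(w):
    """Split the flat weight vector into the layers of a 1-H-1 network."""
    W1 = w[:H].reshape(1, H)           # input-to-hidden weights
    b1 = w[H:2 * H]                    # hidden biases
    W2 = w[2 * H:3 * H].reshape(H, 1)  # hidden-to-output weights
    b2 = w[3 * H]                      # output bias
    return W1, b1, W2, b2

def sse(w):
    """Sum of squared errors of the tanh network; the objective to minimize."""
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.sum((pred.ravel() - y) ** 2)

n_params = 3 * H + 1
best = None
for restart in range(30):  # number of random restarts is an assumption
    w0 = rng.normal(scale=0.5, size=n_params)  # random starting weights
    res = minimize(sse, w0, method="BFGS")     # local quasi-Newton search
    if best is None or res.fun < best.fun:
        best = res                             # keep the lowest local minimum

print(f"Best local minimum SSE over restarts: {best.fun:.4f}")
```

In the study's framing, a global method such as simulated annealing or a genetic algorithm would replace the restart loop; the paper's finding is that the multistart local baseline attains comparable minima at a fraction of the computational cost.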



Author information

Correspondence to B. Wade Brorsen.


About this article

Cite this article

Hamm, L., Brorsen, B.W. & Hagan, M.T. Comparison of Stochastic Global Optimization Methods to Estimate Neural Network Weights. Neural Process Lett 26, 145–158 (2007). https://doi.org/10.1007/s11063-007-9048-7

