
Improvement of the Backpropagation Algorithm Using (1+1) Evolutionary Strategies


Part of the book series: Studies in Computational Intelligence (SCI, volume 312)

Abstract

Currently, the standard in supervised Artificial Neural Network (ANN) research is to train with the backpropagation (BP) algorithm or one of its improved variants. In this chapter, we present an improvement to the most widely used learning algorithm, BP, based on the (1+1) Evolutionary Strategy (ES), one of the best-known artificial evolution paradigms. The goal is to provide a method that adaptively changes the main learning parameters of the BP algorithm in an unconstrained manner. The BP/ES algorithm we propose is simple to implement and can be combined with various improved versions of BP. In our experimental tests we observe a substantial improvement in ANN performance, in some cases a reduction of more than 50% in error for time series prediction on a standard benchmark. We therefore believe that our proposal effectively combines the learning ability of BP with the global search of ES, providing a useful tool that improves the quality of learning for BP-based methods.
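The full chapter is not reproduced on this page, but the abstract already outlines the core mechanism. The following is a minimal sketch under one plausible reading of it: plain one-hidden-layer backpropagation on a toy regression task, with a (1+1)-ES that log-normally mutates the learning rate each epoch and keeps the mutation only if the training error improves. The toy problem, the network size, and the constants (initial learning rate, mutation strength sigma) are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) with a 1-10-1 MLP.
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

# Network parameters (assumed architecture, for illustration only).
W1 = rng.normal(0, 0.5, (1, 10)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

def mse(Yhat):
    return float(np.mean((Yhat - Y) ** 2))

def bp_epoch(lr):
    """One epoch of plain gradient-descent backpropagation with learning rate lr."""
    global W1, b1, W2, b2
    H, Yhat = forward(X)
    dY = 2.0 * (Yhat - Y) / len(X)            # dL/dYhat for the MSE loss
    dW2 = H.T @ dY; db2 = dY.sum(0)
    dH = (dY @ W2.T) * (1.0 - H ** 2)         # derivative of tanh
    dW1 = X.T @ dH; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return mse(forward(X)[1])

# (1+1)-ES over the learning rate: the parent is the current lr, the offspring
# is a log-normally perturbed copy; keep whichever yields the lower training
# error after one BP epoch (the perturbation is unconstrained but stays positive).
lr, sigma = 0.05, 0.3                          # assumed initial values
parent_err = mse(forward(X)[1])
for epoch in range(200):
    child_lr = lr * np.exp(sigma * rng.normal())
    snapshot = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
    child_err = bp_epoch(child_lr)
    if child_err <= parent_err:                # offspring wins: keep weights and new lr
        lr, parent_err = child_lr, child_err
    else:                                      # parent wins: restore weights, step with old lr
        W1, b1, W2, b2 = snapshot
        parent_err = bp_epoch(lr)

print(f"final lr={lr:.4f}, training MSE={parent_err:.5f}")
```

In the same spirit, other BP hyperparameters (for example a momentum term) could be appended to the mutated vector; the (1+1) selection step stays identical.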





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Galaviz, J.P., Melin, P., Trujillo, L. (2010). Improvement of the Backpropagation Algorithm Using (1+1) Evolutionary Strategies. In: Melin, P., Kacprzyk, J., Pedrycz, W. (eds) Soft Computing for Recognition Based on Biometrics. Studies in Computational Intelligence, vol 312. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15111-8_18

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-15111-8_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15110-1

  • Online ISBN: 978-3-642-15111-8

  • eBook Packages: Engineering, Engineering (R0)
