
Adjusting Weights in Artificial Neural Networks using Evolutionary Algorithms

Chapter in: Estimation of Distribution Algorithms

Part of the book series: Genetic Algorithms and Evolutionary Computation (GENA, volume 2)

Abstract

Training artificial neural networks is a complex task of great practical importance. Besides classical ad hoc algorithms such as backpropagation, this task can be approached with Evolutionary Computation, a highly configurable and effective optimization paradigm. This chapter provides a brief overview of these techniques and shows how they can be readily applied to this problem. Three popular variants of Evolutionary Algorithms (Genetic Algorithms, Evolution Strategies, and Estimation of Distribution Algorithms) are described and compared. The comparison is based on a benchmark comprising several standard classification problems of interest for neural networks. The experimental results confirm the general suitability of Evolutionary Computation for this problem: Evolution Strategies appear particularly proficient in this optimization domain, and Estimation of Distribution Algorithms are also a competitive approach.
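To illustrate the idea the abstract describes, here is a minimal sketch of evolutionary weight training: a (1+λ) Evolution Strategy with a fixed mutation strength optimizing the 9 weights of a tiny 2-2-1 feedforward network on XOR. This is not the chapter's experimental setup; the task, network size, mutation strength, and population size are illustrative assumptions.

```python
import numpy as np

# Toy task (assumption, not the chapter's benchmark): XOR, a classic
# test problem for small feedforward networks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # 2-2-1 network; w packs W1 (2x2), b1 (2), w2 (2), b2 (1): 9 genes.
    W1, b1, w2, b2 = w[:4].reshape(2, 2), w[4:6], w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def fitness(w):
    # Mean squared error over the training set (lower is better).
    return float(np.mean((forward(w, X) - y) ** 2))

rng = np.random.default_rng(1)
best = rng.normal(0.0, 1.0, size=9)   # parent: one real-coded genotype
sigma, offspring = 0.5, 30            # fixed mutation strength and lambda
history = [fitness(best)]
for _ in range(150):
    # (1+lambda) selection: the parent survives unless a mutant beats it,
    # so the best-so-far fitness never worsens (elitism).
    mutants = best + rng.normal(0.0, sigma, size=(offspring, 9))
    challenger = min(mutants, key=fitness)
    if fitness(challenger) < history[-1]:
        best = challenger
    history.append(fitness(best))
```

A full Evolution Strategy would also self-adapt sigma per individual, and an Estimation of Distribution Algorithm would replace the mutation step with sampling from a probabilistic model fitted to the selected individuals; the elitist loop above is only the simplest member of the family.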






Copyright information

© 2002 Springer Science+Business Media New York

About this chapter

Cite this chapter

Cotta, C., Alba, E., Sagarna, R., Larrañaga, P. (2002). Adjusting Weights in Artificial Neural Networks using Evolutionary Algorithms. In: Larrañaga, P., Lozano, J.A. (eds) Estimation of Distribution Algorithms. Genetic Algorithms and Evolutionary Computation, vol 2. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-1539-5_18


  • DOI: https://doi.org/10.1007/978-1-4615-1539-5_18

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4613-5604-2

  • Online ISBN: 978-1-4615-1539-5

  • eBook Packages: Springer Book Archive
