How long does it take to evolve a neural net?

  • Conference paper
Artificial Evolution (AE 1995)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1063)

Abstract

This paper deals with technical issues relevant to artificial neural net (ANN) training by genetic algorithms. Neural nets have applications ranging from perception to control; in the context of control, achieving great precision is more critical than in pattern recognition or classification tasks. In previous work, the authors have found that when employing genetic search to train a net, both precision and training speed can be greatly enhanced by an input renormalization technique. In this paper we investigate the automatic tuning of such renormalization coefficients, as well as the tuning of the slopes of the transfer functions of the individual neurons in the net. Waiting time analysis is presented as an alternative to the classical "mean performance" interpretation of GA experiments. It is felt that this provides a more realistic evaluation of the real-world usefulness of a GA.
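
To make the two ingredients of the abstract concrete, the following is a minimal Python sketch, not the authors' code: the tiny network, the toy target function, the GA operators, the parameter values, and all identifiers (forward, error, ga_run) are illustrative assumptions. The chromosome carries, alongside the connection weights, one transfer-function slope per neuron and one renormalization coefficient per input, so the genetic search tunes them automatically; the script then reports both the classical mean best performance over a set of runs and a waiting-time statistic, the generation at which a run first reaches a target precision.

# Minimal sketch, not the authors' implementation: all sizes, operators,
# parameters and the toy target below are illustrative assumptions.
import math
import random

N_IN, N_HID = 2, 4                    # toy net: 2 inputs, 4 hidden units, 1 output
N_W = N_IN * N_HID + N_HID            # connection weights (input->hidden, hidden->output)
N_GENES = N_W + (N_HID + 1) + N_IN    # + one slope per neuron + one scale per input

def forward(genome, x):
    """Feed-forward pass with per-neuron slopes and input renormalization."""
    w = genome[:N_W]
    slopes = genome[N_W:N_W + N_HID + 1]
    scales = genome[N_W + N_HID + 1:]
    xs = [xi * si for xi, si in zip(x, scales)]            # renormalized inputs
    hid = []
    for h in range(N_HID):
        s = sum(w[h * N_IN + i] * xs[i] for i in range(N_IN))
        hid.append(math.tanh(slopes[h] * s))               # evolved slope per hidden unit
    out = sum(w[N_IN * N_HID + h] * hid[h] for h in range(N_HID))
    return math.tanh(slopes[N_HID] * out)                  # evolved slope on the output unit

# Toy target demanding some precision: fit a smooth 2-input function.
SAMPLES = [((a, b), math.sin(a) * math.cos(b))
           for a in (-1.0, -0.5, 0.0, 0.5, 1.0) for b in (-1.0, 0.0, 1.0)]

def error(genome):
    return sum((forward(genome, x) - y) ** 2 for x, y in SAMPLES) / len(SAMPLES)

def ga_run(max_gens=100, pop_size=30, target=1e-2, rng=random):
    """One GA run; returns (generation of first success or None, best error seen)."""
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(N_GENES)] for _ in range(pop_size)]
    best = min(error(g) for g in pop)
    for gen in range(max_gens):
        pop.sort(key=error)                                # rank the population by error
        best = min(best, error(pop[0]))
        if best <= target:
            return gen, best                               # waiting time = first hitting time
        parents = pop[:pop_size // 2]                      # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_GENES)
            child = a[:cut] + b[cut:]                      # one-point crossover
            i = rng.randrange(N_GENES)
            child[i] += rng.gauss(0.0, 0.1)                # Gaussian mutation of one gene
            children.append(child)
        pop = parents + children
    return None, best                                      # run never reached the target

if __name__ == "__main__":
    random.seed(0)
    results = [ga_run() for _ in range(10)]
    hits = [gen for gen, _ in results if gen is not None]
    mean_best = sum(b for _, b in results) / len(results)
    print(f"mean best error over {len(results)} runs : {mean_best:.4f}")  # classical view
    print(f"runs reaching the target precision : {len(hits)}/{len(results)}")
    if hits:
        print(f"mean waiting time (generations)     : {sum(hits) / len(hits):.1f}")

On this reading, the waiting-time figure answers the practitioner's question of how long a successful evolution takes, which a mean taken over all runs, successful or not, can obscure.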

Editor information

Jean-Marc Alliot, Evelyne Lutton, Edmund Ronald, Marc Schoenauer, Dominique Snyers

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schoenauer, M., Ronald, E. (1996). How long does it take to evolve a neural net?. In: Alliot, JM., Lutton, E., Ronald, E., Schoenauer, M., Snyers, D. (eds) Artificial Evolution. AE 1995. Lecture Notes in Computer Science, vol 1063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61108-8_41

  • DOI: https://doi.org/10.1007/3-540-61108-8_41

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61108-0

  • Online ISBN: 978-3-540-49948-0
