
Evolving Neural Networks Using the “Baldwin Effect”

Conference paper

Abstract

This paper describes how, by simple means, a genetic search for optimal neural network architectures can be improved, both in convergence speed and in the quality of the final result. The improvement can be explained theoretically by the Baldwin effect, which is realized here not only through the learning process of the network itself, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, each of which helps and improves upon simple genetic search.
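The abstract gives no implementation details, but the basic idea can be illustrated with a minimal sketch of Baldwinian evolution: each genotype's fitness is measured after a phase of individual learning (standing in here for weight training and local architecture adaptation), while the genome itself is never modified, so learning guides selection without Lamarckian write-back. Everything in the sketch below (the bit-string encoding, the toy target, the greedy "learning" routine) is an illustrative assumption, not the authors' actual method.

```python
# Hypothetical sketch of Baldwinian evolution: selection uses the fitness
# reached *after* individual learning, but the learned result is never
# written back to the genome. Bit strings stand in for architecture
# encodings; the target string stands in for a well-performing architecture.

import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # stand-in for an "optimal" architecture
POP_SIZE, GENERATIONS, LEARNING_STEPS = 30, 60, 10
MUTATION_RATE = 0.02

def raw_fitness(bits):
    """Innate fitness: how closely the encoded architecture matches the target."""
    return sum(b == t for b, t in zip(bits, TARGET))

def learn(bits, steps):
    """Individual learning / local structure optimization: greedy local search.
    Returns the fitness reached after learning; the genome is NOT modified."""
    learned = list(bits)
    best = raw_fitness(learned)
    for _ in range(steps):
        i = random.randrange(len(learned))
        learned[i] ^= 1                      # try a local change
        f = raw_fitness(learned)
        if f >= best:
            best = f                         # keep an improvement
        else:
            learned[i] ^= 1                  # undo a harmful change
    return best

def mutate(bits):
    return [b ^ (random.random() < MUTATION_RATE) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Baldwin effect: rank genotypes by the fitness reached after learning.
    ranked = sorted(population, key=lambda g: learn(g, LEARNING_STEPS), reverse=True)
    parents = ranked[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=raw_fitness)
print("best innate fitness:", raw_fitness(best), "of", len(TARGET))
```

With learning switched off (LEARNING_STEPS = 0) the same loop degenerates to a plain genetic algorithm, which is the baseline the paper compares against.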




Copyright information

© 1995 Springer-Verlag/Wien

About this paper

Cite this paper

Boers, E.J.W., Borst, M.V., Sprinkhuizen-Kuyper, I.G. (1995). Evolving Neural Networks Using the “Baldwin Effect”. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-7535-4_87


  • DOI: https://doi.org/10.1007/978-3-7091-7535-4_87

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-82692-8

  • Online ISBN: 978-3-7091-7535-4

