
Empirical analysis of the factors that affect the Baldwin effect

  • Conference paper

Parallel Problem Solving from Nature — PPSN V (PPSN 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1498)

Abstract

The inclusion of learning in genetic algorithms based on the Baldwin effect is one of the popular approaches to improving the convergence of genetic algorithms. However, the expected improvement is not always obtained, mainly because the factors that affect the Baldwin effect are not well understood. This paper provides evidence that the Baldwin effect is significantly affected by how difficult it is for the genetic operations to produce genotypic changes that match the phenotypic changes brought about by learning. The results suggest that combining a genetic algorithm inattentively with whatever learning method is available is not a proper way to construct a hybrid algorithm; instead, the correlation between the genetic operations and the learning method has to be considered carefully.
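
The distinction between Baldwinian and Lamarckian learning that underlies this discussion can be made concrete with a small sketch. The code below is not from the paper; the OneMax objective, the bit-flip hill-climbing "learning" step, and all parameter values are illustrative assumptions. It shows only where the two schemes differ inside a genetic algorithm's fitness evaluation: under the Baldwin effect, learning improves the evaluated fitness but the learned changes are never written back to the genotype, whereas the Lamarckian variant inherits them.

```python
import random

GENES = 20  # illustrative genotype length

def fitness(genotype):
    # Toy objective (OneMax): count of 1-bits.
    return sum(genotype)

def learn(phenotype, steps=5):
    # Toy "lifetime learning": a few greedy bit-flip hill-climbing steps.
    best = list(phenotype)
    for _ in range(steps):
        candidate = list(best)
        i = random.randrange(len(candidate))
        candidate[i] ^= 1
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

def evaluate_baldwinian(genotype):
    # Baldwin effect: the learned phenotype determines the fitness,
    # but the genotype passed on to the genetic operations is unchanged.
    return fitness(learn(genotype)), list(genotype)

def evaluate_lamarckian(genotype):
    # Lamarckian variant: the learned phenotype replaces the genotype.
    learned = learn(genotype)
    return fitness(learned), learned

if __name__ == "__main__":
    g = [random.randint(0, 1) for _ in range(GENES)]
    print("Baldwinian fitness:", evaluate_baldwinian(g)[0])
    print("Lamarckian fitness:", evaluate_lamarckian(g)[0])
```

In the Baldwinian case evolution benefits only indirectly: genotypes from which learning can easily reach good phenotypes receive higher fitness, so the benefit depends on whether the genetic operations can later produce the genotypic changes that learning keeps rediscovering, which is the correlation the abstract refers to.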

Editor information

Agoston E. Eiben, Thomas Bäck, Marc Schoenauer, Hans-Paul Schwefel

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ku, K.W.C., Mak, M.W. (1998). Empirical analysis of the factors that affect the Baldwin effect. In: Eiben, A.E., Bäck, T., Schoenauer, M., Schwefel, HP. (eds) Parallel Problem Solving from Nature — PPSN V. PPSN 1998. Lecture Notes in Computer Science, vol 1498. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0056890

  • DOI: https://doi.org/10.1007/BFb0056890

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65078-2

  • Online ISBN: 978-3-540-49672-4

  • eBook Packages: Springer Book Archive
