
A Lamarckian Approach for Neural Network Training

  • Published in: Neural Processing Letters

Abstract

In nature, living beings improve their adaptation to the surrounding environment by means of two main orthogonal processes: evolution and lifetime learning. Within the Artificial Intelligence arena, both mechanisms have inspired the development of non-orthodox problem-solving tools, namely Genetic and Evolutionary Algorithms (GEAs) and Artificial Neural Networks (ANNs). In the past, several gradient-based methods have been developed for ANN training, with considerable success. However, in some situations these may become trapped in local minima of the error surface. Under this scenario, combining evolution and learning techniques may yield better results, desirably reaching global optima. Comparative tests carried out on classification and regression tasks attest to this claim.
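The interplay the abstract describes can be sketched as follows. This is only a toy illustration, not the authors' implementation: the one-dimensional "error surface", the numeric gradient, and all hyperparameters are assumptions chosen for brevity. The Lamarckian ingredient is that each individual undergoes lifetime learning (a few gradient-descent steps) and the learned parameters are written back into its genome before selection, so evolution operates on acquired traits.

```python
import random

def error(w):
    # Toy multimodal "error surface" (an assumption, not the paper's):
    # global minimum at w = 3, shallow local minimum near w = -1.5
    # that can trap pure gradient descent.
    return (w - 3) ** 2 * (w + 2) ** 2 / 20 + 0.1 * (w - 3) ** 2

def learn(w, steps=20, lr=0.05):
    # "Lifetime learning": a few gradient-descent steps on the error,
    # using a central-difference numeric gradient for simplicity.
    h = 1e-5
    for _ in range(steps):
        g = (error(w + h) - error(w - h)) / (2 * h)
        w -= lr * g
    return w

def lamarckian_ga(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        # Lamarckian step: the learned value replaces the genome,
        # so acquired improvements are inherited by offspring.
        pop = [learn(w) for w in pop]
        # Select the fittest half, then refill with mutated copies.
        pop.sort(key=error)
        elite = pop[: pop_size // 2]
        pop = elite + [w + rng.gauss(0, 0.5) for w in elite]
    return min(pop, key=error)

best = lamarckian_ga()  # should approach the global minimum near w = 3
```

A Baldwinian variant would differ in one line: fitness would still be measured after learning, but the original (unlearned) genome would be kept in the population.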






Cite this article

Cortez, P., Rocha, M. & Neves, J. A Lamarckian Approach for Neural Network Training. Neural Processing Letters 15, 105–116 (2002). https://doi.org/10.1023/A:1015259001150
