
Genesis of Organic Computing Systems: Coupling Evolution and Learning

  • Chapter

Part of the book series: Understanding Complex Systems (UCS)

Summary

Organic computing calls for efficient adaptive systems in which flexibility is not traded off against stability and robustness. Such systems have to be specialized in the sense that they are biased towards solving instances from certain problem classes, namely those problems they may face in their environment. Nervous systems are prime examples: their specialization stems from evolution and development. In organic computing, simulated evolutionary structure optimization can create artificial neural networks tailored to particular environments. This chapter reviews trends and recent results in combining evolutionary and neural computation, with emphasis on the influence of evolution and development on the structure of neural systems. It is demonstrated how neural structures can be evolved that efficiently learn solutions to problems from a particular problem class. Simple examples of systems that “learn to learn” are presented, as well as technical solutions for the design of turbomachinery components.
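To make the coupling of evolution and learning concrete, the following is a minimal sketch of the general idea: an outer evolutionary loop mutates a network structure, and fitness is measured by how quickly networks of that structure learn tasks drawn from a fixed problem class. Everything here is an illustrative assumption, not the chapter's actual algorithms or benchmarks: the sine-regression task class, the helper names (sample_task, train_and_evaluate), and the (1+1)-style selection are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a regression task from a hypothetical problem class:
    y = sin(a*x + b) with task-specific parameters a, b."""
    a, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-3, 3, size=(32, 1))
    return x, np.sin(a * x + b)

def train_and_evaluate(n_hidden, steps=50, lr=0.05):
    """Fitness of a structure = average error after a short learning
    budget on tasks from the class (lower is better)."""
    errors = []
    for _ in range(5):  # tasks sampled from the problem class
        x, y = sample_task()
        W1 = rng.normal(0, 1, (1, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, 1)); b2 = np.zeros(1)
        for _ in range(steps):  # inner loop: gradient-based learning
            h = np.tanh(x @ W1 + b1)
            pred = h @ W2 + b2
            d = 2 * (pred - y) / len(x)          # dLoss/dpred (MSE)
            dW2 = h.T @ d; db2 = d.sum(0)
            dh = (d @ W2.T) * (1 - h ** 2)       # backprop through tanh
            dW1 = x.T @ dh; db1 = dh.sum(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
        errors.append(float(np.mean((pred - y) ** 2)))
    return float(np.mean(errors))

# Outer loop: (1+1)-style structure evolution. Mutate the hidden-layer
# size and keep the variant whose networks learn the class faster.
n_hidden, fit = 2, train_and_evaluate(2)
for gen in range(30):
    child = max(1, n_hidden + int(rng.integers(-2, 3)))
    child_fit = train_and_evaluate(child)
    if child_fit <= fit:
        n_hidden, fit = child, child_fit
print(f"evolved hidden units: {n_hidden}, post-learning error: {fit:.4f}")
```

The design point this sketch illustrates is that evolution never sees the tasks directly; it only sees how well a structure supports learning across the class, which is the sense in which the evolved systems “learn to learn.”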




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Igel, C., Sendhoff, B. (2009). Genesis of Organic Computing Systems: Coupling Evolution and Learning. In: Organic Computing. Understanding Complex Systems. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-77657-4_7
