Abstract
Within the fields of Genetic Algorithms (GA) and Artificial Intelligence (AI), a variety of computational substrates capable of solving a wide range of problems have been described. Research has specialized in different computational substrates that each excel in different problem domains. For example, Artificial Neural Networks (ANN) (Russell et al., Artificial intelligence: a modern approach, vol 2. Prentice Hall, Upper Saddle River, 2003) have proven effective at classification, Genetic Programs (by which we mean mathematical tree-based genetic programming, abbreviated GP) (Koza, Stat Comput 4:87–112, 1994) are often used to find complex equations that fit data, NeuroEvolution of Augmenting Topologies (NEAT) (Stanley and Miikkulainen, Evolut Comput 10:99–127, 2002) performs well on robotic control problems (Cully et al., Nature 521:503, 2015), and Markov Brains (MB) (Edlund et al., PLoS Comput Biol 7:e1002236, 2011; Marstaller et al., Neural Comput 25:2079–2107, 2013; Hintze et al., Markov brains: a technical introduction. arXiv:1709.05601, 2017) are used to test hypotheses about evolutionary behavior (Olson et al., J R Soc Interf 10:20130305, 2013), among many other examples. Given the wide range of problems and the vast number of computational substrates, practitioners of GA and AI face the difficulty that every new problem requires an assessment to find an appropriate computational substrate, as well as specific parameter tuning, to achieve optimal results.
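To make this assessment burden concrete, the following minimal Python sketch (not from the chapter; the function names, substrate hyperparameters, and placeholder fitness are all hypothetical) illustrates the outer loop a practitioner faces: each candidate substrate must be run with its own hyperparameters before a choice can be made.

```python
import random

def evolve(substrate, params, task, generations=100):
    """Toy stand-in for a full evolutionary run; returns a fitness in [0, 1]."""
    # A real implementation would evolve a population of `substrate` encodings
    # on `task` for `generations` and return the best fitness found.
    return random.random()

# Each substrate brings its own hyperparameters that must be tuned per task.
candidates = {
    "ANN":         {"hidden_nodes": 16, "mutation_rate": 0.05},
    "GP":          {"max_tree_depth": 8, "crossover_rate": 0.9},
    "NEAT":        {"add_node_prob": 0.03, "add_link_prob": 0.05},
    "MarkovBrain": {"gate_types": ["deterministic", "probabilistic"]},
}

def assess(task):
    """Run every candidate substrate on the task and report the best one."""
    scores = {name: evolve(name, params, task)
              for name, params in candidates.items()}
    return max(scores, key=scores.get), scores

best, scores = assess("pole_balancing")
print(f"Best substrate for this task: {best}")
```

In practice each call to the stand-in `evolve` is itself an expensive evolutionary run, which is why this per-problem substrate assessment is costly and motivates letting evolution make the choice instead.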
References
Adami, C., Brown, C.T.: Evolutionary learning in the 2D artificial life system Avida. In: Artificial Life IV, vol. 1194, pp. 377–381. MIT Press, Cambridge, MA (1994)
Adami, C., Schossau, J., Hintze, A.: Evolutionary game theory using agent-based methods. Physics of Life Reviews 19, 1–26 (2016)
Albantakis, L., Hintze, A., Koch, C., Adami, C., Tononi, G.: Evolution of integrated causal structures in animats exposed to environments of increasing complexity. PLoS Computational Biology 10, e1003966 (2014)
Barto, A.G., Sutton, R.S., Anderson, C.W.: Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics 13, 834–846 (1983)
Beer, R.D., et al.: Toward the evolution of dynamical neural networks for minimally cognitive behavior. From Animals to Animats 4, 421–429 (1996)
Bohm, C., CG, N., Hintze, A.: MABE (modular agent based evolver): A framework for digital evolution research. In: Proceedings of the European Conference on Artificial Life (2017)
Cully, A., Clune, J., Tarapore, D., Mouret, J.B.: Robots that can adapt like animals. Nature 521, 503 (2015)
Edlund, J.A., Chaumont, N., Hintze, A., Koch, C., Tononi, G., Adami, C.: Integrated information increases with fitness in the evolution of animats. PLoS Computational Biology 7, e1002236 (2011)
Elman, J.L.: Finding structure in time. Cognitive Science 14, 179–211 (1990)
Goldman, B.W., Punch, W.F.: Parameter-less population pyramid. In: GECCO ‘14: Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, pp. 785–792. ACM, Vancouver, BC, Canada (2014).
Grabowski, L.M., Bryson, D.M., Dyer, F.C., Ofria, C., Pennock, R.T.: Early evolution of memory usage in digital organisms. In: ALIFE, pp. 224–231. Citeseer (2010)
Hintze, A., et al.: Markov Brains: A Technical Introduction. arXiv preprint arXiv:1709.05601 (2017)
Hintze, A., Mirmomeni, M.: Evolution of autonomous hierarchy formation and maintenance. Artificial Life 14, 366–367 (2014)
Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts. Neural Computation 3, 79–87 (1991)
James, D., Tucker, P.: A comparative analysis of simplification and complexification in the evolution of neural network topologies. In: Proc. of Genetic and Evolutionary Computation Conference (2004)
Jordan, M.I.: Serial order: A parallel distributed processing approach. In: Advances in Psychology, vol. 121, pp. 471–495. Elsevier (1997)
Kaelbling, L.P., Littman, M.L., Cassandra, A.R.: Planning and acting in partially observable stochastic domains. Artificial Intelligence 101, 99–134 (1998)
Koza, J.R.: Genetic programming as a means for programming computers by natural selection. Statistics and Computing 4, 87–112 (1994)
Kvam, P., Cesario, J., Schossau, J., Eisthen, H., Hintze, A.: Computational evolution of decision-making strategies. arXiv preprint arXiv:1509.05646 (2015)
Lehman, J., Stanley, K.O.: Exploiting open-endedness to solve problems through the search for novelty. In: ALIFE, pp. 329–336 (2008)
Marstaller, L., Hintze, A., Adami, C.: The evolution of representation in simple cognitive networks. Neural Computation 25, 2079–2107 (2013)
Merrild, J., Rasmussen, M.A., Risi, S.: HyperNTM: Evolving scalable Neural Turing Machines through HyperNEAT. arXiv preprint arXiv:1710.04748 (2017)
Miller, J.F.: Cartesian genetic programming. In: Cartesian Genetic Programming, pp. 17–34. Springer (2011)
Mouret, J.B., Clune, J.: Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909 (2015)
Olson, R.S., Hintze, A., Dyer, F.C., Knoester, D.B., Adami, C.: Predator confusion is sufficient to evolve swarming behaviour. Journal of The Royal Society Interface 10, 20130305 (2013)
OpenAI.com: OpenAI Gym Toolkit (2018). URL https://gym.openai.com/envs/. [Online; accessed 1-Jan-2018]
Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y.L., Tan, J., Le, Q., Kurakin, A.: Large-scale evolution of image classifiers. arXiv preprint arXiv:1703.01041 (2017)
Russell, S.J., Norvig, P., Canny, J.F., Malik, J.M., Edwards, D.D.: Artificial Intelligence: A Modern Approach, vol. 2. Prentice Hall Upper Saddle River (2003)
Schaffer, C.: A conservation law for generalization performance. In: Proceedings of the 11th International Conference on Machine Learning, pp. 259–265 (1994)
Schossau, J., Adami, C., Hintze, A.: Information-theoretic neuro-correlates boost evolution of cognitive systems. Entropy 18, 6 (2015)
Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., Dean, J.: Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538 (2017)
Sheneman, L., Hintze, A.: Evolving autonomous learning in cognitive networks. Scientific Reports 7, 16712 (2017)
Smith, A.W.: Neat-python (2015). URL http://neat-python.readthedocs.io/en/latest/index.html. [Online; accessed 10-31-2017]
Stanley, K.O., D’Ambrosio, D.B., Gauci, J.: A hypercube-based encoding for evolving large-scale neural networks. Artificial Life 15, 185–212 (2009)
Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10, 99–127 (2002)
Thornton, C., Hutter, F., Hoos, H.H., Leyton-Brown, K.: Auto-weka: Combined selection and hyperparameter optimization of classification algorithms. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ‘13, pp. 847–855. ACM, New York, NY, USA (2013).
Trujillo, L., Muñoz, L., Naredo, E., Martínez, Y.: NEAT, there's no bloat. In: European Conference on Genetic Programming, pp. 174–185. Springer (2014)
Wikipedia: Inverted pendulum — Wikipedia, the free encyclopedia (2018). URL https://en.wikipedia.org/wiki/Inverted_pendulum. [Online; accessed 1-Jan-2018]
Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Computation 8, 1341–1390 (1996)
Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1, 67–82 (1997)
Wolpert, D.H., Macready, W.G.: Coevolutionary free lunches. IEEE Transactions on Evolutionary Computation 9, 721–735 (2005)
Wolpert, D.H., Macready, W.G., et al.: No free lunch theorems for search. Technical Report SFI-TR-95-02-010, Santa Fe Institute (1995)
Acknowledgements
This work was in part funded by the NSF BEACON Center for the Study of Evolution in Action, DBI-0939454. We thank Ken Stanley, Joel Lehman, and Randal Olson for insightful discussions on HyperNEAT and Markov Brain crossovers.
Copyright information
© 2019 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Hintze, A., Schossau, J., Bohm, C. (2019). The Evolutionary Buffet Method. In: Banzhaf, W., Spector, L., Sheneman, L. (eds) Genetic Programming Theory and Practice XVI. Genetic and Evolutionary Computation. Springer, Cham. https://doi.org/10.1007/978-3-030-04735-1_2
DOI: https://doi.org/10.1007/978-3-030-04735-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-04734-4
Online ISBN: 978-3-030-04735-1
eBook Packages: Computer Science (R0)