On the automated, evolutionary design of neural networks: past, present, and future

Review Article · Published in Neural Computing and Applications

Abstract

Neuroevolution is the field of computer science that applies evolutionary computation to evolve some aspects of neural networks. After the AI Winter came to an end, neural networks reemerged to solve a great variety of problems. However, their usage requires designing their topology, a decision with a potentially high impact on performance. Whereas many works have tried to suggest rules of thumb for designing topologies, the truth is that there are no analytic procedures for determining the optimal topology for a given problem, and trial and error is often used instead. Neuroevolution arose almost 3 decades ago, with some works focusing on the evolutionary design of the topology and most works describing techniques for learning connection weights. Since then, evolutionary computation has proved to be a convenient approach for determining the topology and weights of neural networks, and neuroevolution has been applied in a great variety of fields. However, for more than 2 decades neuroevolution focused mainly on simple artificial neural network models, far from today's deep learning standards. This is insufficient for determining good architectures for the modern networks extensively used nowadays, which involve multiple hidden layers, recurrent cells, etc. More importantly, deep and convolutional neural networks have become a de facto standard in representation learning for solving many different problems, and neuroevolution has addressed this kind of network only in very recent years, with many works presented from 2017 onward. In this paper, we review the field of neuroevolution during the last 3 decades, putting the focus on very recent works on the evolution of deep and convolutional neural networks, a new but growing field of study. To the best of our knowledge, this is the most comprehensive survey of the literature in this field, and we describe the features of each work as well as its performance on well-known databases when available. This work aims to provide a complete reference to all works on the neuroevolution of convolutional neural networks to date. Finally, we provide some future directions for the advancement of this research area.
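The surveyed approaches share a common core: encode some aspect of a network (its topology, hyperparameters, or weights) as a genome, then search that space with an evolutionary algorithm instead of by trial and error. Below is a minimal, illustrative sketch of that loop in Python; it is not any specific method from the reviewed literature. The direct encoding (a flat list of hidden-layer sizes), the candidate layer sizes, and the train_and_evaluate helper are hypothetical placeholders chosen for this example.

```python
import random

LAYER_SIZES = [16, 32, 64, 128]  # assumed candidate widths for this sketch

def random_genome():
    # Direct encoding of a topology: a list of 1-4 hidden-layer sizes.
    return [random.choice(LAYER_SIZES) for _ in range(random.randint(1, 4))]

def mutate(genome):
    # Structural mutation: add, remove, or resize one hidden layer.
    g = list(genome)
    r = random.random()
    if r < 0.3 and len(g) < 4:
        g.insert(random.randrange(len(g) + 1), random.choice(LAYER_SIZES))
    elif r < 0.6 and len(g) > 1:
        g.pop(random.randrange(len(g)))
    else:
        g[random.randrange(len(g))] = random.choice(LAYER_SIZES)
    return g

def crossover(a, b):
    # One-point crossover over the two layer lists, capped at 4 layers.
    cut_a = random.randrange(1, len(a) + 1)
    cut_b = random.randrange(0, len(b) + 1)
    return (a[:cut_a] + b[cut_b:])[:4]

def train_and_evaluate(genome):
    # Hypothetical placeholder: a real system would decode the genome into
    # a network, train it, and return validation accuracy. This is by far
    # the most expensive step of neuroevolution in practice.
    return random.random()

def evolve(pop_size=20, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate fitness and rank the population (truncation selection).
        ranked = sorted(population, key=train_and_evaluate, reverse=True)
        elite = ranked[: pop_size // 4]
        # Refill the population with mutated offspring of elite parents.
        offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                     for _ in range(pop_size - len(elite))]
        population = elite + offspring
    return max(population, key=train_and_evaluate)

if __name__ == "__main__":
    print("Best topology found (hidden-layer sizes):", evolve())
```

In a real neuroevolution system the placeholder fitness would be replaced by actual training and validation of the decoded network, which is why evaluating even a single generation can be computationally expensive, and the encodings used in the literature are typically far richer than a flat list of layer sizes.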

Notes

  1. However, the term “neuroevolution” would be coined years later.

Acknowledgements

This research was partially supported by the Spanish Ministry of Education, Culture and Sports under an FPU fellowship (grant number FPU13/03917).

Author information

Corresponding author

Correspondence to Alejandro Baldominos.

About this article

Cite this article

Baldominos, A., Saez, Y. & Isasi, P. On the automated, evolutionary design of neural networks: past, present, and future. Neural Comput & Applic 32, 519–545 (2020). https://doi.org/10.1007/s00521-019-04160-6
