
The Importance of Topology Evolution in NeuroEvolution: A Case Study Using Cartesian Genetic Programming of Artificial Neural Networks

Conference paper

Abstract

NeuroEvolution (NE) is the application of evolutionary algorithms to Artificial Neural Networks (ANNs). This paper reports on an investigation into the relative importance of weight evolution and topology evolution when training ANNs using NE. The investigation used the NE technique Cartesian Genetic Programming of Artificial Neural Networks (CGPANN). The results presented show that the choice of topology has a dramatic impact on the effectiveness of NE when only weights are evolved, an issue not faced when manipulating both weights and topology. This paper also presents the surprising result that topology evolution alone is far more effective when training ANNs than weight evolution alone. This is a significant result, as many methods which train ANNs manipulate only weights.
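The comparison at the heart of the abstract, evolving weights alone, topology alone, or both, can be sketched as a minimal mutation-only evolutionary loop. This is an illustrative toy, not the paper's CGPANN implementation: the genome layout, mutation rate, selection scheme, and the stand-in fitness function are all invented for the example.

```python
import random

random.seed(0)

N_NODES = 8   # nodes in a fixed-length, CGP-style feed-forward genome
ARITY = 2     # connections per node

def random_genome():
    # Each gene: (connection sources, connection weights); a node may only
    # connect to earlier nodes, preserving a feed-forward structure.
    return [([random.randrange(i + 1) for _ in range(ARITY)],
             [random.uniform(-1, 1) for _ in range(ARITY)])
            for i in range(N_NODES)]

def mutate(genome, evolve_weights=True, evolve_topology=True, rate=0.1):
    # Flags select which aspects of the network mutation may alter.
    child = [(list(s), list(w)) for s, w in genome]
    for i, (sources, weights) in enumerate(child):
        for j in range(ARITY):
            if evolve_topology and random.random() < rate:
                sources[j] = random.randrange(i + 1)   # rewire connection
            if evolve_weights and random.random() < rate:
                weights[j] = random.uniform(-1, 1)     # resample weight
    return child

def fitness(genome):
    # Stand-in objective for illustration only: reward diverse wiring
    # and penalize large weights.
    distinct = sum(len(set(s)) for s, _ in genome)
    return distinct - 0.1 * sum(abs(x) for _, w in genome for x in w)

def evolve(evolve_weights, evolve_topology, generations=50):
    # (1+4)-ES style loop with elitism: the parent survives unless beaten.
    parent = random_genome()
    for _ in range(generations):
        children = [mutate(parent, evolve_weights, evolve_topology)
                    for _ in range(4)]
        parent = max(children + [parent], key=fitness)
    return fitness(parent)
```

Running `evolve(True, False, ...)` against `evolve(False, True, ...)` on a real task-specific fitness function would reproduce the kind of weight-only versus topology-only comparison the paper investigates.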


Notes

  1. Both CGP and ANNs can also be structured in a recurrent form.

  2. Fully connected between layers, i.e. a node in hidden layer two receives an input from every node in hidden layer one.

  3. If the arity is set high enough, however, all topologies are possible, as each node can lower its own effective arity by utilizing only the first of multiple connections between two nodes.
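The mechanism in note 3 can be sketched as follows, assuming a decode step that keeps only the first connection to each distinct source node (a plausible reading of the note, not the paper's actual code; the function name is invented):

```python
def effective_inputs(sources):
    """Keep only the first connection to each distinct source node.

    A node with nominal arity len(sources) then behaves as if its
    arity were the number of distinct sources, so a high fixed arity
    does not rule out sparser topologies.
    """
    seen, kept = set(), []
    for s in sources:
        if s not in seen:
            seen.add(s)
            kept.append(s)
    return kept

# Nominal arity 3, but two genes address node 4: effective arity is 2.
assert effective_inputs([4, 4, 7]) == [4, 7]
```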


Author information

Correspondence to Andrew James Turner.

Copyright information

© 2013 Springer International Publishing Switzerland

About this paper

Cite this paper

Turner, A.J., Miller, J.F. (2013). The Importance of Topology Evolution in NeuroEvolution: A Case Study Using Cartesian Genetic Programming of Artificial Neural Networks. In: Bramer, M., Petridis, M. (eds) Research and Development in Intelligent Systems XXX. SGAI 2013. Springer, Cham. https://doi.org/10.1007/978-3-319-02621-3_15

  • Print ISBN: 978-3-319-02620-6

  • Online ISBN: 978-3-319-02621-3
