ANN-EMOA: Evolving Neural Networks Efficiently

Conference paper in Applications of Evolutionary Computation (EvoApplications 2022)

Abstract

Multi-objective neuroevolution is a research field of growing importance within reinforcement learning. This paper introduces ANN-EMOA, a novel multi-objective neuroevolutionary algorithm that is inspired by nNEAT and aims at high efficiency, usability, and comprehensibility. To that end, it applies a simple encoding and efficient variation operators. Because diversity plays a key role in evolutionary computation, we apply the Riesz s-energy to foster diversity explicitly. This paper also develops a new, efficient approach to determining the individual Riesz s-energy contribution of each solution within a set. To assess the performance of the new ANN-EMOA, it is compared to nNEAT and NEAT-MODS, two multi-objective variants of NEAT, on the multi-objective Double Pole Balancing problem. While other domains and more complex test cases still need to be investigated, these promising first results show that ANN-EMOA not only converges faster and to higher quality levels than its competitors, but also maintains more compact network genomes and performs convincingly even with comparably small populations.


Notes

  1. An objective vector a dominates another vector b if a is equal or better than b in all objectives and better than b in at least one objective [3, p. 196].
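The dominance relation of Note 1 can be sketched as a minimal check. This is an illustrative implementation, not the paper's code; it assumes minimization (flip the comparisons for maximization):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization assumed):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

dominates((1, 2), (2, 2))  # True: better in the first objective, equal in the second
dominates((1, 2), (2, 1))  # False: the two vectors are incomparable
```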

  2. One of the standard measures in multi-objective optimization, see [17].

  3. However, to save decoding effort, one could also store this redundant information in the genes.

  4. Higher values of s lead to a more pronounced penalization of smaller distances.

  5. We employ the squared Euclidean distance, as it avoids the computationally expensive square-root operation, which has no influence on the individual \(E_s\)-contribution.

  6. Every variation operator is applied with a certain probability, also controlled by EARPC.
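The idea behind Notes 4 and 5 can be illustrated with a minimal sketch of per-solution Riesz s-energy contributions. This is an assumption-laden illustration, not the paper's algorithm: it takes the contribution of each point to be the sum of inverse powered pairwise distances, and works on squared distances to skip the square root (which amounts to doubling the exponent, further penalizing small distances as Note 4 describes):

```python
import numpy as np

def riesz_contributions(points, s=2.0):
    """Per-point energy contributions for a set of objective vectors.

    Illustrative sketch: uses squared Euclidean distances to avoid the
    sqrt (cf. Note 5); smaller distances are penalized more strongly
    for larger s (cf. Note 4).
    """
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]   # pairwise difference vectors
    d2 = (diff ** 2).sum(axis=-1)              # squared pairwise distances
    np.fill_diagonal(d2, np.inf)               # ignore self-pairs
    return (1.0 / d2 ** s).sum(axis=1)         # contribution of each point

# The most "crowded" point has the largest contribution and would be the
# natural candidate for removal when pruning a set to foster diversity.
pts = [[0.0, 1.0], [0.05, 0.95], [0.5, 0.5], [1.0, 0.0]]
crowded = int(np.argmax(riesz_contributions(pts)))
```

Here the two nearly coincident points dominate the energy, so pruning one of them spreads the remaining set out.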

References

  1. Haykin, S.: Neural Networks and Learning Machines, 3rd edn., p. 906 (2008). ISBN: 9780131471399

  2. Elsken, T., Metzen, J.H., Hutter, F.: Neural architecture search: a survey. J. Mach. Learn. Res. 20 (2019). ISSN: 15337928. arXiv: 1808.05377

  3. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing. NCS, Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-44874-8

  4. Katoch, S., Chauhan, S.S., Kumar, V.: A review on genetic algorithm: past, present, and future. Multimed. Tools Appl. 80(5), 8091–8126 (2020). https://doi.org/10.1007/s11042-020-10139-6

  5. Stanley, K.O., et al.: Designing neural networks through neuroevolution. Nature Mach. Intell. 1(1), 24–35 (2019). https://doi.org/10.1038/s42256-018-0006-z. ISSN: 25225839

  6. Künzel, S., Meyer-Nieberg, S.: Coping with opponents: multi-objective evolutionary neural networks for fighting games. Neural Comput. Appl. 32(17), 13885–13916 (2020). https://doi.org/10.1007/s00521-020-04794-x

  7. Falcón-Cardona, J.G., Coello, C.A.C., Emmerich, M.: CRI-EMOA: a Pareto-front shape invariant evolutionary multi-objective algorithm. In: Lecture Notes in Computer Science, vol. 11411, pp. 307–318 (2019). https://doi.org/10.1007/978-3-030-12598-1_25. ISBN: 9783030125974

  8. Aleti, A., Moser, I.: Entropy-based adaptive range parameter control for evolutionary algorithms. In: GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference (2013). https://doi.org/10.1145/2463372.2463560

  9. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evol. Comput. 10(2), 99–127 (2002). https://doi.org/10.1162/106365602320169811. ISSN: 10636560

  10. Stanley, K.O.: Efficient evolution of neural networks through complexification. Ph.D. thesis. The University of Texas at Austin, p. 227 (2004). http://nn.cs.utexas.edu/keyword?stanley:phd04

  11. Dürr, P., Mattiussi, C., Floreano, D.: Neuroevolution with analog genetic encoding. In: Runarsson, T.P., Beyer, H.-G., Burke, E., Merelo-Guervós, J.J., Whitley, L.D., Yao, X. (eds.) PPSN 2006. LNCS, vol. 4193, pp. 671–680. Springer, Heidelberg (2006). https://doi.org/10.1007/11844297_68

  12. Mattiussi, C.: Evolutionary synthesis of analog networks. Ph.D. thesis (2005). https://doi.org/10.5075/epfl-thesis-3199

  13. Stanley, K.O., D’Ambrosio, D.B., Gauci, J.: A hypercube-based encoding for evolving large-scale neural networks. Artif. Life 15(2), 185–212 (2009). https://doi.org/10.1162/artl.2009.15.2.15202. ISSN: 10645462

  14. Van Steenkiste, S., et al.: A wavelet-based encoding for neuroevolution. In: GECCO 2016 - Proceedings of the 2016 Genetic and Evolutionary Computation Conference, pp. 517–524 (2016). https://doi.org/10.1145/2908812.2908905. ISBN: 9781450342063

  15. Bellemare, M.G., et al.: The arcade learning environment: an evaluation platform for general agents. J. Artif. Intell. Res. 47, 253–279 (2013). https://doi.org/10.1613/jair.3912. ISSN: 10769757

  16. Deb, K., et al.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002). https://doi.org/10.1109/4235.996017. ISSN: 1089778X

  17. Emmerich, M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume measure as selection criterion. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 62–76. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-31880-4_5

  18. Van Willigen, W., Haasdijk, E., Kester, L.: A multi-objective approach to evolving platooning strategies in intelligent transportation systems. In: GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference, pp. 1397–1404 (2013). https://doi.org/10.1145/2463372.2463534. ISBN: 978-1-4503-1963-8

  19. Abramovich, O., Moshaiov, A.: Multi-objective topology and weight evolution of neuro-controllers. In: 2016 IEEE Congress on Evolutionary Computation, CEC 2016, pp. 670–677 (2016). https://doi.org/10.1109/CEC.2016.7743857. ISBN: 9781509006229

  20. Künzel, S.: Evolving artificial neural networks for multi-objective tasks. Ph.D. thesis. Bundeswehr University Munich (2021). https://doi.org/10.13140/RG.2.2.19743.07843, https://athene-forschung.unibw.de/138617

  21. Dargan, S., Kumar, M., Ayyagari, M.R., Kumar, G.: A survey of deep learning and its applications: a new paradigm to machine learning. Archives Comput. Methods Eng. 27(4), 1071–1092 (2019). https://doi.org/10.1007/s11831-019-09344-w

  22. Künzel, S., Meyer-Nieberg, S.: Evolving artificial neural networks for multi-objective tasks. In: Sim, K., Kaufmann, P. (eds.) EvoApplications 2018. LNCS, vol. 10784, pp. 671–686. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77538-8_45

  23. Beume, N., Naujoks, B., Emmerich, M.: SMS-EMOA: multiobjective selection based on dominated hypervolume. Europ. J. Oper. Res. 181(3), 1653–1669 (2007). https://doi.org/10.1016/j.ejor.2006.08.008. ISSN: 0377-2217

  24. Hansen, M.P., Jaszkiewicz, A.: Evaluating the quality of approximations to the non-dominated set. In: IMM Technical Report IMM-REP-1998-7 (1998)

  25. Trujillo, L., et al.: Neat genetic programming: controlling bloat naturally. Inf. Sci. 333, 21–43 (2016). https://doi.org/10.1016/j.ins.2015.11.010. ISSN: 00200255

  26. Kukkonen, S., Deb, K.: Improved pruning of non-dominated solutions based on crowding distance for bi-objective optimization problems. In: 2006 IEEE Congress on Evolutionary Computation, CEC 2006, pp. 1179–1186 (2006). https://doi.org/10.1109/cec.2006.1688443. ISBN: 0780394879

  27. Falcon-Cardona, J.G., Ishibuchi, H., Coello Coello, C.A.: Exploiting the trade-off between convergence and diversity indicators. In: 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020, pp. 141–148 (2020). https://doi.org/10.1109/SSCI47803.2020.9308469. ISBN: 9781728125473

  28. Doerr, B., Neumann, F. (eds.): Theory of Evolutionary Computation. Natural Computing Series. Springer, Cham (2020). ISBN: 978-3-030-29413-7. https://doi.org/10.1007/978-3-030-29414-4

  29. Siegmund, F., Ng, A.H.C., Deb, K.: Standard error dynamic resampling for preference-based evolutionary multi-objective optimization. Technical report, COIN Laboratory, Michigan State University (2016)

  30. Skillings, J.H., Mack, G.A.: On the use of a friedman-type statistic in balanced and unbalanced block designs. Technometrics 23(2), 171–177 (1981). https://doi.org/10.1080/00401706.1981.10486261. ISSN: 15372723

  31. Shaffer, J.P.: Modified sequentially rejective multiple test procedures. J. Am. Stat. Assoc. 81(395) (1986). https://doi.org/10.1080/01621459.1986.10478341. ISSN: 0162-1459

  32. Miikkulainen, R., et al.: Evolving deep neural networks. In: Artificial Intelligence in the Age of Neural Networks and Brain Computing (2018). https://doi.org/10.1016/B978-0-12-815480-9.00015-3

  33. Such, F.P., et al.: Deep neuroevolution: genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. arXiv preprint arXiv:1712.06567 (2018)

  34. Liu, Y., Zhou, A., Zhang, H.: Termination detection strategies in evolutionary algorithms: a survey. In: GECCO 2018 - Proceedings of the 2018 Genetic and Evolutionary Computation Conference, pp. 1063–1070 (2018). https://doi.org/10.1145/3205455.3205466. ISBN: 9781450356183

Author information

Corresponding author

Correspondence to Steven Künzel.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 158 KB)

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Künzel, S., Meyer-Nieberg, S. (2022). ANN-EMOA: Evolving Neural Networks Efficiently. In: Jiménez Laredo, J.L., Hidalgo, J.I., Babaagba, K.O. (eds) Applications of Evolutionary Computation. EvoApplications 2022. Lecture Notes in Computer Science, vol 13224. Springer, Cham. https://doi.org/10.1007/978-3-031-02462-7_26

  • DOI: https://doi.org/10.1007/978-3-031-02462-7_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-02461-0

  • Online ISBN: 978-3-031-02462-7

  • eBook Packages: Computer Science, Computer Science (R0)
