Evolving Artificial Neural Networks for Multi-objective Tasks

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10784)

Abstract

Neuroevolution represents a growing research field in Artificial and Computational Intelligence. The adjustment of the network weights and the topology is usually based on a single performance criterion. Approaches that allow several – potentially conflicting – criteria to be considered are only rarely taken into account.

This paper develops a novel combination of the NeuroEvolution of Augmenting Topologies (NEAT) algorithm with modern indicator-based evolutionary multi-objective algorithms, which enables the evolution of artificial neural networks for multi-objective tasks, including tasks with a large number of objectives. Several combinations of evolutionary multi-objective algorithms and NEAT are introduced and discussed. The focus lies on variants with modern indicator-based selection, since these are considered efficient methods for higher-dimensional tasks. This paper presents the first combination of these algorithms with NEAT. The experimental analysis shows that the novel algorithms are very promising for multi-objective Neuroevolution.
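
To make the selection principle concrete, the following sketch illustrates one indicator-based survivor-selection step in the style of SMS-EMOA [13]: the individual contributing the least dominated hypervolume is discarded. This is an illustrative example only, shown for two objectives (minimization) with a hypothetical reference point and front; the function names and values are assumptions, not the authors' implementation.

```python
def hypervolume_2d(front, ref):
    """Dominated hypervolume of a 2-D front (minimization) w.r.t. a reference point."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):              # sweep along the first objective
        if f1 < ref[0] and f2 < prev_f2:      # skip points that add no new area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def least_contributor(front, ref):
    """Index of the point whose removal costs the least hypervolume (SMS-EMOA-style)."""
    total = hypervolume_2d(front, ref)
    contributions = [total - hypervolume_2d(front[:i] + front[i + 1:], ref)
                     for i in range(len(front))]
    return contributions.index(min(contributions))

if __name__ == "__main__":
    front = [(1.0, 4.0), (2.0, 2.5), (2.2, 2.4), (4.0, 1.0)]   # hypothetical objective vectors
    ref = (5.0, 5.0)                                           # hypothetical reference point
    print(least_contributor(front, ref))      # prints 2: the near-duplicate point (2.2, 2.4)
```

Conceptually, such an indicator takes the place of the single performance criterion used by standard NEAT when deciding which networks survive; the R2 indicator [10] can be substituted for the hypervolume in the same role.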

Notes

  1. An algorithm that addresses multi-objective optimization problems. See [4] for a detailed description of NSGA-II.

  2. See [7, Table 5.2, p. 244] for an overview of the dominance relations.

  3. Stochastic Universal Sampling (SUS) behaves similarly to Roulette Wheel Selection (RWS), in which each individual is assigned a slot on a one-armed roulette wheel whose size depends on the individual’s fitness relative to the fitness of all individuals; the wheel is spun once and one individual is selected. SUS instead uses an equally spaced \(\lambda\)-armed roulette wheel to select \(\lambda\) different individuals with a single spin rather than spinning the wheel \(\lambda\) times. The better an individual’s fitness, the better its chance of being selected as a parent [8, p. 84], and every individual (even the worst of the population) has a nonzero chance of being selected [8, p. 81f.]. The advantage of SUS over RWS is that a set of \(\lambda\) unique individuals can be selected at once; using RWS to select \(\lambda > 1\) individuals would require \(\lambda\) executions and a mechanism to avoid the same individual being selected twice. A minimal sketch of SUS is given after these notes.

  4. R library for comparing algorithms: https://cran.r-project.org/web/packages/scmamp/index.html.

  5. The only difference between the original mNEAT and the mNEAT with archive is that the latter saves the best solutions found so far in an archive. Because the archive is kept only as a “second population” and is finally returned as the Pareto front, the two variants behave identically in the Average Number of Evaluations experiment.
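
The following sketch illustrates SUS as described in note 3. It is a minimal, fitness-proportionate variant for positive fitness values (maximization) under stated assumptions; the function name and the example population are hypothetical and not taken from the paper's implementation.

```python
import random

def stochastic_universal_sampling(fitnesses, lam):
    """Select lam parent indices with a single spin of an equally spaced lam-armed wheel.

    Assumes all fitness values are positive (fitness-proportionate selection).
    """
    total = sum(fitnesses)
    spacing = total / lam                      # distance between the lam arms
    start = random.uniform(0.0, spacing)       # one random spin positions all arms
    pointers = [start + i * spacing for i in range(lam)]

    selected, cumulative, i = [], fitnesses[0], 0
    for p in pointers:                         # walk the wheel once, left to right
        while cumulative < p:
            i += 1
            cumulative += fitnesses[i]
        selected.append(i)
    return selected

if __name__ == "__main__":
    fitnesses = [1.0, 3.0, 0.5, 2.5, 1.0]      # hypothetical population fitness values
    print(stochastic_universal_sampling(fitnesses, lam=3))
```

Because all \(\lambda\) pointers are placed in one spin, a full parent set is obtained in a single pass over the population, without the repeated spins and duplicate checks that RWS would require.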

References

  1. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, vol. 1. MIT Press, Cambridge (1998)

  2. Stanley, K.O.: Efficient evolution of neural networks through complexification. Ph.D. thesis, Department of Computer Sciences, The University of Texas at Austin (2004)

  3. Schrum, J., Miikkulainen, R.: Constructing complex NPC behavior via multi-objective neuroevolution. AIIDE 8, 108–113 (2008)

  4. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)

  5. Schrum, J., Miikkulainen, R.: Discovering multimodal behavior in Ms. Pac-Man through evolution of modular neural networks. IEEE Trans. Comput. Intell. AI Games 8(1), 67–81 (2016)

  6. van Willigen, W., Haasdijk, E., Kester, L.: Fast, comfortable or economical: evolving platooning strategies with many objectives. In: 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), pp. 1448–1455 (2013)

  7. Coello, C.A.C., Lamont, G.B., van Veldhuizen, D.A., et al.: Evolutionary Algorithms for Solving Multi-objective Problems, vol. 5. Springer, US (2007). https://doi.org/10.1007/978-0-387-36797-2

  8. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing. NCS. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-44874-8

  9. Hansen, M.P., Jaszkiewicz, A.: Evaluating the Quality of Approximations to the Non-dominated Set. Department of Mathematical Modelling, Technical University of Denmark, IMM (1998)

  10. Brockhoff, D., Wagner, T., Trautmann, H.: On the properties of the R2 indicator. In: Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, pp. 465–472 (2012)

  11. Beume, N., Rudolph, G.: Faster S-metric calculation by considering dominated hypervolume as Klee’s measure problem. In: Kovalerchuk, B. (ed.) Proceedings of the Second IASTED International Conference on Computational Intelligence, 20–22 November 2006, pp. 233–238. IASTED/ACTA Press, San Francisco (2006)

  12. Emmerich, M., Beume, N., Naujoks, B.: An EMO algorithm using the hypervolume measure as selection criterion. In: Coello Coello, C.A., Hernández Aguirre, A., Zitzler, E. (eds.) EMO 2005. LNCS, vol. 3410, pp. 62–76. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-31880-4_5

  13. Beume, N., Naujoks, B., Emmerich, M.: SMS-EMOA: multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 181(3), 1653–1669 (2007)

  14. Trautmann, H., Wagner, T., Brockhoff, D.: R2-EMOA: focused multiobjective search using R2-indicator-based selection. In: Nicosia, G., Pardalos, P. (eds.) LION 2013. LNCS, vol. 7997, pp. 70–74. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-44973-4_8

  15. Gruau, F.: Cellular encoding as a graph grammar. In: IEE Colloquium on Grammatical Inference: Theory, Applications and Alternatives, pp. 17/1–17/10 (1993)

  16. Moriarty, D.E., Miikkulainen, R.: Efficient reinforcement learning through symbiotic evolution. Mach. Learn. 22(1–3), 11–32 (1996)

  17. Gomez, F.J., Miikkulainen, R.: Solving non-Markovian control tasks with neuroevolution. In: IJCAI, vol. 99, pp. 1356–1361 (1999)

  18. Diaz-Manriquez, A., Toscano-Pulido, G., Coello, C.A.C., Landa-Becerra, R.: A ranking method based on the R2 indicator for many-objective optimization. In: IEEE Congress on Evolutionary Computation (CEC), 2013, pp. 1523–1530. IEEE, Piscataway (2013)

  19. Gamow, G., Cleveland, J.M., Freeman, I.M.: Physics: foundations and frontiers. Am. J. Phys. 29(1), 60 (1961)

  20. Calvo, B., Santafé, G.: Statistical Assessment of the Differences (2018). https://cran.r-project.org/web/packages/scmamp/vignettes/Statistical_assessment_of_the_differences.html

  21. Friedman, M.: The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 32(200), 675–701 (1937)

  22. García, S., Fernández, A., Luengo, J., Herrera, F.: Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power. Inf. Sci. 180(10), 2044–2064 (2010)

  23. Shaffer, J.P.: Modified sequentially rejective multiple test procedures. J. Am. Stat. Assoc. 81(395), 826–831 (1986)

  24. Fonseca, C.M., Knowles, J.D., Thiele, L., Zitzler, E.: A tutorial on the performance assessment of stochastic multiobjective optimizers. In: Third International Conference on Evolutionary Multi-Criterion Optimization (EMO 2005), vol. 216, p. 240 (2005)

  25. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C.M., Da Fonseca, V.G.: Performance assessment of multiobjective optimizers: an analysis and review. IEEE Trans. Evol. Comput. 7(2), 117–132 (2003)

  26. Ishibuchi, H., Masuda, H., Nojima, Y.: A study on performance evaluation ability of a modified inverted generational distance indicator. In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 695–702 (2015)

  27. Lu, F., Yamamoto, K., Nomura, L.H., Mizuno, S., Lee, Y., Thawonmas, R.: Fighting game artificial intelligence competition platform. In: IEEE 2nd Global Conference on Consumer Electronics (GCCE), 2013, pp. 320–323 (2013)

  28. Kleijnen, J.P.C.: Design and Analysis of Simulation Experiments, vol. 20. Springer, US (2008). https://doi.org/10.1007/978-0-387-71813-2

Author information

Corresponding author

Correspondence to Steven Künzel.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Künzel, S., Meyer-Nieberg, S. (2018). Evolving Artificial Neural Networks for Multi-objective Tasks. In: Sim, K., Kaufmann, P. (eds.) Applications of Evolutionary Computation. EvoApplications 2018. Lecture Notes in Computer Science, vol. 10784. Springer, Cham. https://doi.org/10.1007/978-3-319-77538-8_45

  • DOI: https://doi.org/10.1007/978-3-319-77538-8_45

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-77537-1

  • Online ISBN: 978-3-319-77538-8

  • eBook Packages: Computer Science, Computer Science (R0)
