Deep Optimisation: Multi-scale Evolution by Inducing and Searching in Deep Representations

  • Conference paper
Applications of Evolutionary Computation (EvoApplications 2021)

Abstract

The ability of evolutionary processes to innovate and scale up over long periods of time, observed in nature, remains a central mystery in evolutionary biology, and a challenge for algorithm designers to emulate and explain in evolutionary computation (EC). The Major Transitions in Evolution is a compelling theory that explains evolvability through a multi-scale process whereby individuality (and hence selection and variation) is continually revised by the formation of associations between formerly independent entities, a process still not fully explored in EC. Deep Optimisation (DO) is a new type of model-building optimization algorithm (MBOA) that exploits deep learning methods to enable multi-scale optimization. DO uses an autoencoder model to induce a multi-level representation of solutions, capturing the relationships between the lower-level units that contribute to the quality of a solution. Variation and selection are then performed within the induced representations, causing model-informed changes to multiple solution variables simultaneously. Here, we first show that DO has impressive performance compared with other leading MBOAs (and other rival methods) on multiple knapsack problems, a standard combinatorial optimization problem of general interest. Going deeper, we then carry out a detailed investigation to understand the differences between DO and other MBOAs, identifying key problem characteristics where other MBOAs are afflicted by exponential running times, and DO is not. This study serves to concretize our understanding of the Major Transitions theory, and why it leads to evolvability, and also provides a strong motivation for further investigation of deep learning methods in optimization.



Acknowledgements

We acknowledge financial support from the EPSRC Centre for Doctoral Training in Next Generation Computational Modelling grant EP/L015382/1.

Author information


Corresponding author

Correspondence to Jamie Caldwell.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Caldwell, J., Knowles, J., Thies, C., Kubacki, F., Watson, R. (2021). Deep Optimisation: Multi-scale Evolution by Inducing and Searching in Deep Representations. In: Castillo, P.A., Jiménez Laredo, J.L. (eds) Applications of Evolutionary Computation. EvoApplications 2021. Lecture Notes in Computer Science, vol. 12694. Springer, Cham. https://doi.org/10.1007/978-3-030-72699-7_32

  • DOI: https://doi.org/10.1007/978-3-030-72699-7_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72698-0

  • Online ISBN: 978-3-030-72699-7

  • eBook Packages: Computer Science, Computer Science (R0)
