
Why Is Auto-Encoding Difficult for Genetic Programming?

  • Conference paper
  • Genetic Programming (EuroGP 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11451)

Abstract

Unsupervised learning is an important component in many recent successes in machine learning. The autoencoder neural network is one of the most prominent approaches to unsupervised learning. Here, we use the genetic programming paradigm to create autoencoders and find that the task is difficult for genetic programming, even on small datasets which are easy for neural networks. We investigate which aspects of the autoencoding task are difficult for genetic programming.
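The setup the abstract describes can be sketched as a toy experiment: represent an autoencoder as a pair of expression trees (an encoder compressing the input to a single latent value, and per-feature decoder trees reconstructing the input from it), and score individuals by reconstruction error. The sketch below is an illustrative assumption, not the paper's implementation (the authors' code is linked in the notes); for brevity it uses random restarts in place of a full GP search, and fixes the latent dimension at 1.

```python
import random

def safe_div(a, b):
    # Protected division, a common GP convention for closure of the function set.
    return a / b if abs(b) > 1e-9 else 1.0

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": safe_div}

def random_tree(n_vars, depth=2):
    # Grow a random expression tree over variables x0 .. x(n_vars-1).
    if depth == 0 or random.random() < 0.3:
        return ("x", random.randrange(n_vars))
    op = random.choice(list(OPS))
    return (op, random_tree(n_vars, depth - 1), random_tree(n_vars, depth - 1))

def evaluate(tree, xs):
    # Recursively evaluate a tree on an input vector xs.
    if tree[0] == "x":
        return xs[tree[1]]
    return OPS[tree[0]](evaluate(tree[1], xs), evaluate(tree[2], xs))

def fitness(encoder, decoders, data):
    # Mean squared reconstruction error over the dataset (lower is better).
    err = 0.0
    for xs in data:
        z = evaluate(encoder, xs)                      # encode to one latent value
        recon = [evaluate(d, [z]) for d in decoders]   # decode each feature from z
        err += sum((a - b) ** 2 for a, b in zip(xs, recon))
    return err / len(data)

def random_search(data, iters=200, seed=0):
    # Random restarts stand in for GP evolution here; the paper's experiments
    # use proper evolutionary search and hill-climbing variants.
    random.seed(seed)
    n = len(data[0])
    best = (random_tree(n), [random_tree(1) for _ in range(n)])
    best_f = fitness(best[0], best[1], data)
    for _ in range(iters):
        cand = (random_tree(n), [random_tree(1) for _ in range(n)])
        f = fitness(cand[0], cand[1], data)
        if f < best_f:
            best, best_f = cand, f
    return best_f
```

Even this toy hints at the difficulty: the encoder and decoders must co-adapt, so an improvement to one tree is worthless unless the other trees happen to exploit it.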


Notes

  1. https://www.facebook.com/yann.lecun/posts/10153426023477143

  2. https://github.com/jmmcd/GPAE

  3. http://togelius.blogspot.com/2018/05/empiricism-and-limits-of-gradient.html


Acknowledgements

This work was carried out while JMcD was at University College Dublin. Thanks to members of the University College Dublin Natural Computing Research and Applications group, in particular Takfarinas Saber and Stefano Mauceri, for useful discussions. Thanks to Van Loi Cao for data-processing code and for discussion. Thanks also to the anonymous reviewers.

Author information

Correspondence to James McDermott.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

McDermott, J. (2019). Why Is Auto-Encoding Difficult for Genetic Programming? In: Sekanina, L., Hu, T., Lourenço, N., Richter, H., García-Sánchez, P. (eds.) Genetic Programming. EuroGP 2019. Lecture Notes in Computer Science, vol. 11451. Springer, Cham. https://doi.org/10.1007/978-3-030-16670-0_9

  • DOI: https://doi.org/10.1007/978-3-030-16670-0_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-16669-4

  • Online ISBN: 978-3-030-16670-0

  • eBook Packages: Computer Science, Computer Science (R0)
