
Co-evolution of Fitness Predictors and Deep Neural Networks

  • Conference paper
Parallel Processing and Applied Mathematics (PPAM 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10777)

Abstract

Deep neural networks have proved to be a very useful and powerful tool with many applications. To achieve good learning results, however, the network architecture has to be carefully designed, which requires considerable experience and knowledge. Using an evolutionary process to develop new network topologies can facilitate this design. The limiting factor is the cost of evaluating a single specimen (a single network architecture), since each evaluation involves training on a large dataset. In this paper we propose a new approach that uses subsets of the original training set to approximate fitness. We describe a co-evolutionary algorithm and discuss its key elements. Finally, we draw conclusions from experiments and outline plans for future work.
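Since only the abstract is available in this preview, the following is a minimal, illustrative Python sketch of the general fitness-predictor co-evolution scheme the abstract describes, not the authors' actual algorithm. The assumed setup: a fitness predictor is a small subset of training-set indices, candidate architectures ("specimens") are ranked by their score on that subset instead of on the full dataset, and predictors are themselves evolved so that their cheap estimates track the exact full-dataset fitness of a few "trainer" specimens. All names, constants, and the toy scoring function (which stands in for expensive network training) are assumptions made for illustration.

```python
import random

random.seed(0)

# Toy dimensions (illustrative assumptions, not values from the paper).
TRAIN_SET_SIZE = 1000   # full training set size
PREDICTOR_SIZE = 20     # training samples per fitness predictor
N_SPECIMENS = 12        # architecture population size
N_PREDICTORS = 8        # predictor population size
N_TRAINERS = 4          # specimens scored exactly each generation

def per_sample_score(specimen, i):
    """Toy stand-in for how well architecture `specimen` handles training
    sample `i`; in the real algorithm this would require training the net."""
    return 1.0 / (1.0 + abs(specimen - (i % 7)))

def fitness(specimen, samples):
    """Mean score over a collection of training-sample indices."""
    return sum(per_sample_score(specimen, i) for i in samples) / len(samples)

def predictor_quality(predictor, full_scores):
    """A predictor is good if its cheap subset-based estimates match the
    exact (full-set) fitness of the trainer specimens."""
    err = sum(abs(fitness(t, predictor) - exact)
              for t, exact in full_scores.items())
    return -err / len(full_scores)

def mutate_predictor(predictor):
    # Swap one sample index in the subset for a random one.
    child = list(predictor)
    child[random.randrange(len(child))] = random.randrange(TRAIN_SET_SIZE)
    return child

def mutate_specimen(specimen):
    # Stand-in for an architecture mutation (e.g. add/remove a layer).
    return specimen + random.choice([-1, 1])

# Initial populations: specimens are opaque "architectures" (here: ints),
# predictors are random subsets of training-sample indices.
specimens = [random.randrange(10) for _ in range(N_SPECIMENS)]
predictors = [random.sample(range(TRAIN_SET_SIZE), PREDICTOR_SIZE)
              for _ in range(N_PREDICTORS)]

for generation in range(30):
    # Expensive step: score a few "trainer" specimens on the full set and
    # select predictors whose subset-based estimates track those scores.
    trainers = random.sample(specimens, N_TRAINERS)
    full_scores = {t: fitness(t, range(TRAIN_SET_SIZE)) for t in trainers}
    predictors.sort(key=lambda p: predictor_quality(p, full_scores),
                    reverse=True)
    predictors = predictors[:N_PREDICTORS // 2]
    predictors += [mutate_predictor(p) for p in predictors]

    # Cheap step: evolve architectures using only the best predictor subset.
    best_predictor = predictors[0]
    specimens.sort(key=lambda s: fitness(s, best_predictor), reverse=True)
    specimens = specimens[:N_SPECIMENS // 2]
    specimens += [mutate_specimen(s) for s in specimens]

best = max(specimens, key=lambda s: fitness(s, predictors[0]))
print("best specimen:", best)
print("predicted fitness:", round(fitness(best, predictors[0]), 3))
print("exact fitness:   ", round(fitness(best, range(TRAIN_SET_SIZE)), 3))
```

The design point this sketch captures is the cost split: exact fitness on the full training set is computed only for a handful of trainer specimens per generation, while the much larger architecture population is ranked using a small predictor subset.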



Acknowledgement

This research was supported by AGH grant no. 11.11.230.337 and by the PL Grid project, which provided the computational resources to carry out the experiments.

Author information


Corresponding author

Correspondence to Włodzimierz Funika.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Funika, W., Koperek, P. (2018). Co-evolution of Fitness Predictors and Deep Neural Networks. In: Wyrzykowski, R., Dongarra, J., Deelman, E., Karczewski, K. (eds) Parallel Processing and Applied Mathematics. PPAM 2017. Lecture Notes in Computer Science, vol 10777. Springer, Cham. https://doi.org/10.1007/978-3-319-78024-5_48


  • DOI: https://doi.org/10.1007/978-3-319-78024-5_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78023-8

  • Online ISBN: 978-3-319-78024-5

  • eBook Packages: Computer Science, Computer Science (R0)
