
Continual and One-Shot Learning Through Neural Networks with Dynamic External Memory

  • Conference paper

In: Applications of Evolutionary Computation (EvoApplications 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10199)


Abstract

Training neural networks to quickly learn new skills without forgetting previously learned skills is an important open challenge in machine learning. A common problem for adaptive networks that can learn during their lifetime is that the weights encoding a particular task are often overridden when a new task is learned. This paper takes a step toward overcoming this limitation by building on the recently proposed Evolving Neural Turing Machine (ENTM) approach. In the ENTM, neural networks are augmented with an external memory component that they can write to and read from, which allows them to store associations quickly and over long periods of time. The results in this paper demonstrate that the ENTM is able to perform one-shot learning in reinforcement learning tasks without catastrophic forgetting of previously stored associations. Additionally, we introduce a new ENTM default jump mechanism that makes it easier to find unused memory locations and therefore facilitates the evolution of continual learning networks. Our results suggest that augmenting evolving networks with an external memory component is not only a viable mechanism for adaptive behaviors in neuroevolution but also allows these networks to perform continual and one-shot learning at the same time.
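The abstract describes the ENTM memory mechanics only at a high level. The sketch below is one plausible reading of an external memory bank with a single read/write head, interpolation-based writes, content-based jumps, and the newly introduced default jump to an unused location. The class and method names, the zero-slot test for "unused" memory, and the specific write rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class ExternalMemory:
    """Illustrative sketch of an ENTM-style external memory (not the paper's code)."""

    def __init__(self, vector_size, initial_slots=1):
        self.vector_size = vector_size
        # The memory bank grows dynamically; start with a single zeroed slot.
        self.memory = np.zeros((initial_slots, vector_size))
        self.head = 0  # single combined read/write head position

    def write(self, write_vector, interpolation):
        """Blend a new vector into the current slot; interpolation in [0, 1]."""
        self.memory[self.head] = (
            interpolation * np.asarray(write_vector)
            + (1.0 - interpolation) * self.memory[self.head]
        )

    def read(self):
        """Return the content stored at the current head position."""
        return self.memory[self.head].copy()

    def content_jump(self, target_vector, threshold=0.5):
        """Move the head to the slot most similar to target_vector (content-based jump)."""
        distances = np.linalg.norm(self.memory - np.asarray(target_vector), axis=1)
        best = int(np.argmin(distances))
        if distances[best] <= threshold:
            self.head = best

    def default_jump(self):
        """Jump to the first all-zero (unused) slot, growing the memory if needed.

        This mirrors the idea behind the default jump described in the paper:
        giving the network a cheap way to reach untouched memory so that new
        associations do not overwrite previously stored ones.
        """
        for i, slot in enumerate(self.memory):
            if not slot.any():
                self.head = i
                return
        self.memory = np.vstack([self.memory, np.zeros(self.vector_size)])
        self.head = len(self.memory) - 1

    def shift(self, direction):
        """Move the head one slot left (-1) or right (+1), growing at the end."""
        self.head = max(0, self.head + direction)
        if self.head >= len(self.memory):
            self.memory = np.vstack([self.memory, np.zeros(self.vector_size)])
```

In the ENTM itself, these read, write, and jump operations are driven each time step by dedicated outputs of the evolved network; the sketch above only models the memory side of that interaction.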


Notes

  1. https://goo.gl/P4unLh.


Acknowledgment

Computation/simulation for the work described in this paper was supported by the DeIC National HPC Centre, SDU.

Author information

Corresponding author

Correspondence to Sebastian Risi.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Lüders, B., Schläger, M., Korach, A., Risi, S. (2017). Continual and One-Shot Learning Through Neural Networks with Dynamic External Memory. In: Squillero, G., Sim, K. (eds) Applications of Evolutionary Computation. EvoApplications 2017. Lecture Notes in Computer Science, vol 10199. Springer, Cham. https://doi.org/10.1007/978-3-319-55849-3_57


  • DOI: https://doi.org/10.1007/978-3-319-55849-3_57

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-55848-6

  • Online ISBN: 978-3-319-55849-3

