Check Regularization: Combining Modularity and Elasticity for Memory Consolidation

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2018 (ICANN 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11140)

Abstract

Catastrophic forgetting, in which old tasks are largely forgotten as new tasks are learned, is a crucial problem for neural networks on autonomous robots. The problem arises because backpropagation overwrites all network parameters, and it can therefore be mitigated by not overwriting the parameters that are important for the old tasks. To this end, regularization methods, represented by elastic weight consolidation, place globally stable equilibrium points at the parameters that are optimal for the old tasks. Unfortunately, they try to hold all parameters in place, even when the regularization is weak. This paper therefore proposes a regularization method, named Check regularization, that consolidates only the parameters important for the old tasks and initializes the other parameters in preparation for future tasks. Simulations in which two tasks are learned sequentially show that the proposed method outperforms the previous method when the interference between the tasks is severe.
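
A minimal sketch may help make the abstract's contrast concrete. The code below is not taken from the paper (whose exact loss is defined in the full text, not in this abstract); it is an illustrative comparison, in plain NumPy, of an EWC-style penalty, which pulls every parameter toward its old-task optimum, against a hypothetical Check-style penalty that consolidates only the parameters whose importance exceeds a threshold and pulls the rest back toward an initial value so they stay plastic for future tasks. The function names, the threshold tau, and the importance weights omega are assumptions for illustration only.

```python
import numpy as np

def ewc_penalty(theta, theta_star, omega, lam=1.0):
    """EWC-style quadratic penalty: every parameter is pulled toward
    its old-task optimum theta_star, weighted by its importance omega."""
    return 0.5 * lam * np.sum(omega * (theta - theta_star) ** 2)

def check_penalty(theta, theta_star, omega, theta_init, tau=0.1, lam=1.0):
    """Hypothetical Check-style penalty (an illustrative assumption, not
    the paper's exact loss): parameters important for the old task
    (omega > tau) are consolidated toward theta_star, while the rest
    are pulled back toward theta_init, leaving them free to learn
    future tasks."""
    important = omega > tau
    consolidate = np.sum(omega[important] * (theta[important] - theta_star[important]) ** 2)
    reset = np.sum((theta[~important] - theta_init[~important]) ** 2)
    return 0.5 * lam * (consolidate + reset)

# Toy example: four parameters, two of which matter for the old task.
theta      = np.array([0.9, -0.2, 0.5, 1.1])   # current parameters
theta_star = np.array([1.0,  0.0, 0.4, 1.0])   # old-task optimum
theta_init = np.zeros(4)                       # initialization target
omega      = np.array([2.0, 0.01, 1.5, 0.02])  # importance (e.g. Fisher information)

print("EWC-style penalty:  ", ewc_penalty(theta, theta_star, omega))
print("Check-style penalty:", check_penalty(theta, theta_star, omega, theta_init))
```

Under an optimizer, the EWC-style term resists changing even the unimportant parameters, whereas the Check-style term actively returns them to theta_init, which matches the abstract's claim of combining consolidation with initialization.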

Acknowledgement

This research has been supported by the Kayamori Foundation of Information Science Advancement.

Author information

Correspondence to Taisuke Kobayashi.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Kobayashi, T. (2018). Check Regularization: Combining Modularity and Elasticity for Memory Consolidation. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds) Artificial Neural Networks and Machine Learning – ICANN 2018. ICANN 2018. Lecture Notes in Computer Science, vol 11140. Springer, Cham. https://doi.org/10.1007/978-3-030-01421-6_31

  • DOI: https://doi.org/10.1007/978-3-030-01421-6_31

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-01420-9

  • Online ISBN: 978-3-030-01421-6

  • eBook Packages: Computer Science, Computer Science (R0)
