
Continual Learning Exploiting Structure of Fractal Reservoir Computing

  • Conference paper
In: Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions (ICANN 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11731)

Abstract

Neural networks suffer from a critical problem called catastrophic forgetting, in which memories of tasks already learned are easily overwritten by memories of a newly learned task. This problem interferes with the continual learning required for autonomous robots, which must learn many tasks incrementally from daily activities. In reservoir computing, where only the readout weights are updated according to the firing of reservoir neurons, mitigating catastrophic forgetting requires clarifying which neurons should fire for each task. We therefore propose a way to design reservoir computing such that the neurons that fire are clearly distinguished from the others according to the task being performed. As a key design feature, we employ a fractal network, which has modularity and scalability, as the reservoir layer. In particular, its modularity is fully exploited by the design of the input layer. Simulations of control tasks using reinforcement learning show that our design mitigates catastrophic forgetting even when the random actions taken by reinforcement learning prompt parameters to be overwritten. Furthermore, learning multiple tasks with a single network suggests that knowledge of other tasks can facilitate learning a new task, unlike the case of using completely separate networks.
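To make the general idea concrete, below is a minimal sketch in Python/NumPy, written for this summary rather than taken from the paper: a modular echo-state reservoir stands in for the fractal network, the input layer routes each task to its own module, and only the readout weights are trained. The module layout, leaky-tanh update, and ridge-regression readout (used here in place of the paper's reinforcement-learning readout) are illustrative assumptions.

```python
# Hypothetical sketch: a modular reservoir with task-specific input wiring.
# Only the readout is trained; reservoir and input weights stay fixed.
import numpy as np

rng = np.random.default_rng(0)
n_modules, module_size, n_inputs = 4, 50, 3
n_neurons = n_modules * module_size

# Recurrent weights: dense inside each module, very sparse between modules,
# loosely mimicking the modularity of a fractal reservoir.
W = np.zeros((n_neurons, n_neurons))
for m in range(n_modules):
    s = slice(m * module_size, (m + 1) * module_size)
    W[s, s] = rng.normal(0, 1, (module_size, module_size))
W += (rng.random((n_neurons, n_neurons)) < 0.01) * rng.normal(0, 1, (n_neurons, n_neurons))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to ~0.9

def task_input_weights(task_id):
    """Input weights that project a given task's input onto one module only."""
    W_in = np.zeros((n_neurons, n_inputs))
    s = slice(task_id * module_size, (task_id + 1) * module_size)
    W_in[s, :] = rng.normal(0, 1, (module_size, n_inputs))
    return W_in

def run_reservoir(W_in, inputs):
    """Collect reservoir states for an input sequence (leaky tanh units)."""
    x = np.zeros(n_neurons)
    states = []
    for u in inputs:
        x = 0.7 * x + 0.3 * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Train only the readout for task 0 on a toy target (ridge regression).
inputs = rng.normal(0, 1, (200, n_inputs))
targets = np.sin(np.cumsum(inputs[:, 0]))
X = run_reservoir(task_input_weights(0), inputs)
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_neurons), X.T @ targets)
print("training MSE:", np.mean((X @ W_out - targets) ** 2))
```

Because the fixed input wiring confines each task's activity largely to one module, the readout weights associated with the other modules see near-zero states and are barely disturbed when a new task is trained, which is the intuition behind the forgetting mitigation described above.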

Supported by JSPS KAKENHI, Grant-in-Aid for Young Scientists (B), Grant Number 17K12759.




Author information

Correspondence to Taisuke Kobayashi.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Kobayashi, T., Sugino, T. (2019). Continual Learning Exploiting Structure of Fractal Reservoir Computing. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. ICANN 2019. Lecture Notes in Computer Science, vol. 11731. Springer, Cham. https://doi.org/10.1007/978-3-030-30493-5_4


  • DOI: https://doi.org/10.1007/978-3-030-30493-5_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30492-8

  • Online ISBN: 978-3-030-30493-5

  • eBook Packages: Computer Science, Computer Science (R0)
