Influence of the Chaotic Property on Reinforcement Learning Using a Chaotic Neural Network

Conference paper in: Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)

Abstract

Aiming at the emergence of higher-order dynamic functions such as “thinking”, our group has set up the hypothesis that the internal chaotic dynamics of an agent’s chaotic neural network grows from “exploration” into “thinking” through reinforcement learning, and has proposed a new learning method based on this hypothesis. However, even after learning a simple obstacle-avoidance task, the agent sometimes moved irregularly and collided with the obstacle. Reducing the scale of the recurrent connection weights, which is expected to be closely related to the chaotic property, alleviated this problem. In this paper, therefore, the learning performance is observed as a function of the recurrent weight scale. The scale turns out to have an appropriate value, as is also seen in FORCE learning in reservoir computing.
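The role of the recurrent weight scale described above can be illustrated with a minimal sketch (not the authors’ implementation): in a random recurrent network whose weights are drawn with standard deviation g/sqrt(N) — the standard scaling used in reservoir computing and FORCE learning — activity decays to a fixed point for small g and becomes sustained and irregular (chaotic) once g exceeds roughly 1. The network size, gain values, and the `activity_spread` helper below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch only: a random recurrent rate network whose
# dynamics shift from a fixed point to chaos as the recurrent weight
# scale g grows. Weights are i.i.d. Gaussian with std g / sqrt(N).
rng = np.random.default_rng(0)
N = 200  # number of units (an assumed, illustrative size)

def activity_spread(g, steps=500):
    """Run the network and return the late-time std of one unit's
    activity -- a rough indicator of sustained (chaotic) dynamics."""
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 0.5, size=N)  # random initial state
    trace = []
    for t in range(steps):
        x = np.tanh(W @ x)            # discrete-time rate dynamics
        if t >= steps // 2:           # discard the transient
            trace.append(x[0])
    return float(np.std(trace))

print(activity_spread(0.5))  # small g: activity decays toward zero
print(activity_spread(1.5))  # large g: sustained irregular activity
```

With g well below 1 the map is a contraction and the late-time spread is essentially zero, while for g above 1 the spread stays large, matching the intuition that the weight scale controls the strength of the chaotic property.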



Acknowledgement

This work was supported by JSPS KAKENHI Grant Number 15K00360.

Author information

Corresponding author

Correspondence to Yuki Goto.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Goto, Y., Shibata, K. (2017). Influence of the Chaotic Property on Reinforcement Learning Using a Chaotic Neural Network. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.-S. (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_78

  • DOI: https://doi.org/10.1007/978-3-319-70087-8_78

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science, Computer Science (R0)
