Abstract
Aiming at the emergence of “thinking”, we have proposed a new reinforcement learning scheme that uses a chaotic neural network, together with the hypothesis that the network's internal chaotic dynamics will develop into “thinking” through learning. In our previous works, the internal chaotic dynamics were generated by strong recurrent connection weights. On the other hand, chaotic dynamics are often generated by introducing refractoriness into each neuron; refractoriness, which is observed in biological neurons, is the property that a neuron becomes insensitive for a while after firing. In this paper, refractoriness is introduced into each neuron in our chaos-based reinforcement learning. It is shown that the network can learn a simple goal-reaching task through the proposed learning, and that it can do so with smaller recurrent connection weights than in the case without refractoriness. Introducing refractoriness makes the agent's behavior more exploratory, and the Lyapunov exponent becomes larger for the same range of recurrent weights.
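As a rough illustration of the refractoriness mechanism described above, the following sketch simulates a single Aihara-style chaotic neuron in which past firing accumulates into an exponentially decaying refractory term that suppresses subsequent firing. All names and parameter values here are illustrative assumptions, not the exact model or settings used in the paper; depending on the parameters, the resulting dynamics may be periodic or chaotic.

```python
import numpy as np

def simulate_refractory_neuron(steps=1000, k=0.7, alpha=1.0, a=0.5, eps=0.04):
    """Illustrative single-neuron sketch (hypothetical parameters).

    k     : decay factor of the refractoriness per step
    alpha : strength with which firing adds to refractoriness
    a     : constant external input
    eps   : steepness parameter of the sigmoid output function
    """
    f = lambda u: 1.0 / (1.0 + np.exp(-u / eps))  # sigmoid output function
    x = 0.1   # neuron output (firing level)
    r = 0.0   # accumulated refractoriness
    outputs = []
    for _ in range(steps):
        r = k * r + alpha * x  # firing raises refractoriness, which decays by k
        x = f(a - r)           # strong refractoriness suppresses the next output
        outputs.append(x)
    return np.array(outputs)

out = simulate_refractory_neuron()
```

Because the refractory term `r` grows after each strong output and then decays, the neuron alternates between firing and suppressed phases rather than settling at a fixed point, which is the qualitative effect the paper exploits to obtain internal dynamics with weaker recurrent weights.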
References
Shibata, K., Goto, Y.: New reinforcement learning using a chaotic neural network for emergence of “thinking” - “exploration” grows into “thinking” through learning. arXiv:1705.05551 (2017)
Mnih, V., et al.: Playing Atari with deep reinforcement learning. In: NIPS Deep Learning Workshop (2013)
Shibata, K., Utsunomiya, H.: Discovery of pattern meaning from delayed rewards by reinforcement learning with a recurrent neural network. In: Proceedings of IJCNN 2011, pp. 1445–1452 (2011)
Shibata, K., Goto, K.: Emergence of flexible prediction-based discrete decision making and continuous motion generation through actor-Q-learning. In: Proceedings of ICDL-Epirob, ID 15 (2013)
Shibata, K., Sakashita, Y.: Reinforcement learning with internal-dynamics-based exploration using a chaotic neural network. In: Proceedings of IJCNN (2015)
Goto, Y., Shibata, K.: Emergence of higher exploration in reinforcement learning using a chaotic neural network. In: Proceedings of ICONIP 2016, pp. 40–48 (2016)
Osana, Y., Hagiwara, M.: Successive learning in chaotic neural network. In: Proceedings of IJCNN 1998, vol. 2, pp. 1510–1515 (1998)
Aihara, K., Takabe, T., Toyoda, M.: Chaotic neural networks. Phys. Lett. A 144(6–7), 333–340 (1990)
Acknowledgement
This work was supported by JSPS KAKENHI Grant Number 15K00360.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Sato, K., Goto, Y., Shibata, K. (2019). Chaos-Based Reinforcement Learning When Introducing Refractoriness in Each Neuron. In: Kim, JH., Myung, H., Lee, SM. (eds) Robot Intelligence Technology and Applications. RiTA 2018. Communications in Computer and Information Science, vol 1015. Springer, Singapore. https://doi.org/10.1007/978-981-13-7780-8_7
Print ISBN: 978-981-13-7779-2
Online ISBN: 978-981-13-7780-8