Abstract
In reinforcement learning, a trial-and-error process called exploration is necessary. The familiar choice for driving exploration is a uniform pseudorandom number generator. However, it is known that a chaotic source also provides a random-like sequence, much as a stochastic source does. In this research, we propose applying the random-like feature of deterministic chaos as a generator for exploration. We find that a deterministic chaotic exploration generator based on the logistic map gives better performance than a stochastic random exploration generator on a nonstationary shortcut maze problem. To understand why the logistic-map-based exploration generator shows the better result, we investigate the learning structures obtained from the two exploration generators.
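As a minimal sketch of the idea (hypothetical function names and parameter choices, not the authors' actual implementation), the uniform generator in epsilon-greedy action selection can be replaced by the orbit of the logistic map:

```python
# Sketch: epsilon-greedy exploration driven by a logistic-map sequence
# instead of a uniform pseudorandom generator. The parameter choices
# (r = 4.0, x0 = 0.7) are illustrative assumptions.

def logistic_sequence(x0=0.7, r=4.0):
    """Yield the chaotic orbit of the logistic map x_{n+1} = r * x_n * (1 - x_n).

    For r = 4 and x0 in (0, 1) the orbit stays within [0, 1] and looks
    random-like, so it can stand in for a uniform generator.
    """
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def epsilon_greedy(q_values, epsilon, source):
    """Choose the greedy action with probability 1 - epsilon; otherwise
    draw an exploratory action using values from `source`, any iterator
    yielding numbers in [0, 1], whether chaotic or stochastic."""
    if next(source) < epsilon:
        # Exploration step: map the next value in [0, 1] onto an action index.
        return min(int(next(source) * len(q_values)), len(q_values) - 1)
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Because `epsilon_greedy` only consumes an iterator of values in [0, 1], the chaotic and stochastic generators are interchangeable, which is what makes the comparison in the paper possible.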
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Morihiro, K., Matsui, N., Nishimura, H. (2004). Effects of Chaotic Exploration on Reinforcement Maze Learning. In: Negoita, M.G., Howlett, R.J., Jain, L.C. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2004. Lecture Notes in Computer Science(), vol 3213. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30132-5_112
Print ISBN: 978-3-540-23318-3
Online ISBN: 978-3-540-30132-5