Abstract
A key problem in reinforcement learning is the exploration-exploitation trade-off. Optimistic initialisation of the value function is a popular strategy for encouraging exploration. A drawback of this approach is that the algorithm may still perform relatively poorly after many episodes of learning. In this paper, two extensions to standard optimistic exploration are proposed. The first is based on a different initialisation of the value function of goal states. The second, which builds on the first, explicitly separates the propagation of low and high values through the state space. The proposed extensions show improvements over basic optimistic initialisation in empirical comparisons. Additionally, they improve anytime performance and help in domains where learning takes place in a sub-space of a large state space, that is, where the standard optimistic approach faces particular difficulties.
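To make the baseline concrete, below is a minimal sketch (not taken from the paper) of tabular Q-learning with optimistic initialisation. The environment interface, parameter names, and default values are illustrative assumptions, and the paper's two extensions (goal-state initialisation and separate propagation of low and high values) are not implemented here.

```python
# Minimal sketch (assumption, not the paper's method): tabular Q-learning with
# optimistic initial values. The env object is assumed to expose reset() and
# step(action) -> (next_state, reward, done); all names and defaults are
# illustrative.

def q_learning_optimistic(env, n_states, n_actions,
                          optimistic_value=1.0, alpha=0.1, gamma=0.99,
                          n_episodes=500):
    # Initialising every Q-value above any achievable return drives a greedy
    # policy to try untested actions: this is the "optimistic initialisation"
    # exploration strategy the abstract refers to.
    Q = [[optimistic_value] * n_actions for _ in range(n_states)]

    for _ in range(n_episodes):
        state = env.reset()
        done = False
        while not done:
            # Greedy action selection; optimism alone provides exploration.
            action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env.step(action)
            # Standard one-step Q-learning update towards the bootstrapped target.
            target = reward if done else reward + gamma * max(Q[next_state])
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q
```

Because every state-action value starts above any achievable return, even a purely greedy policy keeps visiting untried actions; the abstract's concern is that this optimism can decay slowly, so performance may remain low after many episodes, which motivates the two proposed extensions.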
Cite this paper
Grześ, M., Kudenko, D. (2009). Improving Optimistic Exploration in Model-Free Reinforcement Learning. In: Kolehmainen, M., Toivanen, P., Beliczynski, B. (eds.) Adaptive and Natural Computing Algorithms. ICANNGA 2009. Lecture Notes in Computer Science, vol. 5495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04921-7_37