
Improving Optimistic Exploration in Model-Free Reinforcement Learning

  • Conference paper
Adaptive and Natural Computing Algorithms (ICANNGA 2009)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 5495))


Abstract

The key problem in reinforcement learning is the exploration-exploitation trade-off. Optimistic initialisation of the value function is a popular RL strategy for addressing it. The problem with this approach is that the algorithm may still have relatively low performance after many episodes of learning. In this paper, two extensions to standard optimistic exploration are proposed. The first is based on a different initialisation of the value function for goal states. The second, which builds on the first, explicitly separates the propagation of low and high values through the state space. The proposed extensions show improvement in empirical comparisons with basic optimistic initialisation. Additionally, they improve anytime performance and help on domains where learning takes place in a sub-space of a large state space, that is, where the standard optimistic approach faces more difficulties.
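For context, the sketch below illustrates the baseline technique the paper builds on: tabular Q-learning with optimistic initialisation, where every unseen state-action pair starts at a high value so that a purely greedy policy keeps being drawn towards unvisited regions. This is an illustrative assumption of the standard setup, not the authors' extensions; the class name `OptimisticQLearner`, the parameter `q_init`, and the environment interface are hypothetical.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning with optimistic initialisation (illustrative sketch).
# Unseen (state, action) pairs start at the optimistic value q_init, so acting
# greedily keeps visiting unexplored regions until their estimates are driven
# down by actual experience.

class OptimisticQLearner:
    def __init__(self, actions, q_init=1.0, alpha=0.1, gamma=0.95):
        self.actions = actions
        self.alpha = alpha    # learning rate
        self.gamma = gamma    # discount factor
        # defaultdict assigns the optimistic value q_init to every new entry
        self.q = defaultdict(lambda: q_init)

    def act(self, state):
        # Purely greedy: optimism alone supplies the exploration pressure.
        values = [self.q[(state, a)] for a in self.actions]
        best = max(values)
        return random.choice([a for a, v in zip(self.actions, values) if v == best])

    def update(self, state, action, reward, next_state, done):
        target = reward
        if not done:
            target += self.gamma * max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

With `q_init` chosen above the largest attainable return, the greedy policy tries every reachable state-action pair until its estimate decays towards the true value; as the abstract notes, this standard scheme can leave performance relatively low after many episodes, which is what the proposed extensions aim to improve.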




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Grześ, M., Kudenko, D. (2009). Improving Optimistic Exploration in Model-Free Reinforcement Learning. In: Kolehmainen, M., Toivanen, P., Beliczynski, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2009. Lecture Notes in Computer Science, vol 5495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04921-7_37

  • DOI: https://doi.org/10.1007/978-3-642-04921-7_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04920-0

  • Online ISBN: 978-3-642-04921-7

  • eBook Packages: Computer Science (R0)
