
Faster Near-Optimal Reinforcement Learning: Adding Adaptiveness to the E3 Algorithm

Conference paper in Algorithmic Learning Theory (ALT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1720)

Abstract

Recently, Kearns and Singh presented the first provably efficient and near-optimal algorithm for reinforcement learning in general Markov decision processes. One of the key contributions of their algorithm is its explicit treatment of the exploration-exploitation trade-off. In this paper, we show how the algorithm can be improved by replacing the exploration phase, which builds a model of the underlying Markov decision process by estimating its transition probabilities, with an adaptive sampling method better suited to the problem. Our improvement is twofold. First, our theoretical bound on the worst-case time needed to converge to an almost optimal policy is significantly smaller. Second, because the sampling method adapts to the data, we discuss how our algorithm might perform better in practice than the previous one.

Supported by the EU Science and Technology Fellowship Program (STF13) of the European Commission.
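
The full text of the chapter is not reproduced on this page, so the paper's actual procedure is not shown here. As a rough illustration of the idea the abstract refers to, the sketch below (in Python, not from the paper) implements adaptive sampling in the style of Lipton and Naughton [7]: rather than fixing the number of samples in advance, sampling continues until a stopping condition is met, so rarer events automatically receive the larger sample sizes they require. The function names and the parameter s are illustrative assumptions, not the paper's algorithm.

```python
import random

def adaptive_estimate(draw, s=50):
    # Adaptive (sequential) sampling in the Lipton-Naughton style:
    # keep drawing Bernoulli samples until `s` successes are seen,
    # then estimate the success probability as s / (total draws).
    # The number of draws adapts to the unknown probability, so no
    # a-priori lower bound on it is needed. `s` controls accuracy
    # and is an illustrative choice, not a value from the paper.
    trials, successes = 0, 0
    while successes < s:
        trials += 1
        if draw():
            successes += 1
    return successes / trials

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical use: estimate one transition probability of an
    # unknown MDP by repeatedly taking an action in a state and
    # checking whether a particular successor state was reached.
    p_true = 0.05  # unknown to the learner
    estimate = adaptive_estimate(lambda: random.random() < p_true, s=50)
    print(f"estimated transition probability: {estimate:.4f}")
```

Under this stopping rule the expected number of draws is roughly s / p, so it grows automatically as the unknown probability p shrinks; this is the kind of adaptiveness the abstract contrasts with a fixed-sample-size exploration phase.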


References

  1. Carlos Domingo, Ricard Gavaldà and Osamu Watanabe. Practical Algorithms for On-line Selection. In Proceedings of the First International Conference on Discovery Science, DS’98. Lecture Notes in Artificial Intelligence 1532:150–161, 1998.

  2. Carlos Domingo, Ricard Gavaldà and Osamu Watanabe. Adaptive Sampling Methods for Scaling Up Knowledge Discovery Algorithms. To appear in Proceedings of the Second International Conference on Discovery Science, DS’99, December 1999.

  3. Leslie Pack Kaelbling, Michael L. Littman and Andrew W. Moore. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.

  4. Michael Kearns and Daphne Koller. Efficient Reinforcement Learning in Factored MDPs. To appear in Proceedings of the International Joint Conference on Artificial Intelligence, IJCAI’99.

  5. Michael Kearns and Satinder Singh. Near-Optimal Reinforcement Learning in Polynomial Time. In Machine Learning: Proceedings of the Fifteenth International Conference, ICML’98, pages 260–268, 1998.

  6. M.J. Kearns and U.V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.

  7. Richard J. Lipton and Jeffrey F. Naughton. Query Size Estimation by Adaptive Sampling. Journal of Computer and System Sciences, 51:18–25, 1995.

  8. Richard J. Lipton, Jeffrey F. Naughton, Donovan Schneider and S. Seshadri. Efficient sampling strategies for relational database operations. Theoretical Computer Science, 116:195–226, 1993.

  9. R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.

  10. Abraham Wald. Sequential Analysis. Wiley Mathematical Statistics Series, John Wiley & Sons, 1947.

  11. Osamu Watanabe. From Computational Learning Theory to Discovery Science. In Proceedings of the 26th International Colloquium on Automata, Languages and Programming, invited talk of ICALP’99. Lecture Notes in Computer Science 1644:134–148, 1999.



Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Domingo, C. (1999). Faster Near-Optimal Reinforcement Learning: Adding Adaptiveness to the E3 Algorithm. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science (LNAI), vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_20


  • DOI: https://doi.org/10.1007/3-540-46769-6_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66748-3

  • Online ISBN: 978-3-540-46769-4
