Abstract
Recently, Kearns and Singh presented the first provably efficient and near-optimal algorithm for reinforcement learning in general Markov decision processes. One of the key contributions of their algorithm is its explicit treatment of the exploration-exploitation trade-off. In this paper, we show how the algorithm can be improved by replacing its exploration phase, which builds a model of the underlying Markov decision process by estimating the transition probabilities, with an adaptive sampling method better suited to the problem. Our improvement is twofold. First, our theoretical bound on the worst-case time needed to converge to an almost optimal policy is significantly smaller. Second, because the sampling method we use is adaptive, we discuss why our algorithm may also perform better in practice than the previous one.
Supported by the EU Science and Technology Fellowship Program (STF13) of the European Commission.
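The abstract's key idea, replacing a fixed worst-case sample size with a sequential stopping rule, can be illustrated with a small sketch. The Python fragment below is not from the paper: the function name, the Chernoff-style constant, and the sampling interface are assumptions for illustration. It estimates a single transition probability by drawing samples until a target number of occurrences has been observed, in the spirit of the Lipton-Naughton and Domingo-Gavalda-Watanabe adaptive sampling methods cited in the references, so that the number of draws scales with the inverse of the unknown probability rather than with the worst case.

import math
import random

def adaptive_sample_estimate(draw, eps, delta, max_draws=10**7):
    # Sequential (stopping-rule) sampling sketch: instead of fixing the
    # worst-case number of samples in advance, keep drawing until enough
    # occurrences of the event of interest have been seen.  The expected
    # number of draws is roughly target/p for true probability p, so
    # high-probability transitions are certified quickly.
    # The constant below is a plausible Chernoff-style choice giving
    # relative error eps with probability at least 1 - delta; it is NOT
    # the exact stopping condition analyzed in the paper.
    target = math.ceil(3.0 * (1.0 + eps) * math.log(2.0 / delta) / (eps * eps))
    successes, n = 0, 0
    while successes < target and n < max_draws:  # cap guards against p = 0
        successes += draw()  # draw() returns 1 if the transition occurred
        n += 1
    return successes / n, n

# Hypothetical usage: estimate the probability that a given action in a
# given state leads to a particular next state, given only a sampling
# device for the MDP (simulated here by a biased coin).
p_true = 0.3  # stand-in for an unknown transition probability
p_hat, used = adaptive_sample_estimate(
    lambda: 1 if random.random() < p_true else 0, eps=0.1, delta=0.05)
print(f"estimate {p_hat:.3f} after {used} draws")

The point of the sketch is the adaptive sample size: with the same eps and delta, a transition of probability 0.6 stops after roughly half as many draws as one of probability 0.3, whereas a worst-case bound would charge both the same cost.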
References
Carlos Domingo, Ricard Gavaldà and Osamu Watanabe. Practical Algorithms for On-line Selection. In Proceedings of the First International Conference on Discovery Science, DS’98. Lecture Notes in Artificial Intelligence 1532:150–161, 1998.
Carlos Domingo, Ricard Gavaldà and Osamu Watanabe. Adaptive Sampling Methods for Scaling Up Knowledge Discovery Algorithms. To appear in Proceedings of the Second International Conference on Discovery Science, DS’99, December 1999.
Leslie Pack Kaelbling, Michael L. Littman and Andrew W. Moore. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4:237–285, 1996.
Michael Kearns and Daphne Koller. Efficient Reinforcement Learning in Factored MDPs. To appear in the Proc. of the International Joint Conference on Artificial Intelligence, IJCAI’99.
Michael Kearns and Satinder Singh. Near-Optimal Reinforcement Learning in Polynomial Time. In Machine Learning: Proceedings of the Fifteenth International Conference, ICML’98, pages 260–268, 1998.
M.J. Kearns and U.V. Vazirani. An Introduction to Computational Learning Theory. Cambridge University Press, 1994.
Richard J. Lipton and Jeffrey F. Naughton. Query Size Estimation by Adaptive Sampling. Journal of Computer and System Sciences, 51:18–25, 1995.
Richard J. Lipton, Jeffrey F. Naughton, Donovan Schneider and S. Seshadri. Efficient sampling strategies for relational database operations. Theoretical Computer Science, 116:195–226, 1993.
R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
Abraham Wald. Sequential Analysis. Wiley Mathematical Statistics Series, 1947.
Osamu Watanabe. From Computational Learning Theory to Discovery Science. In Proc. of the 26th International Colloquium on Automata, Languages and Programming, Invited talk of ICALP’99. Lecture Notes in Computer Science 1644:134–148, 1999.
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Domingo, C. (1999). Faster Near-Optimal Reinforcement Learning: Adding Adaptiveness to the E3 Algorithm. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science, vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_20
DOI: https://doi.org/10.1007/3-540-46769-6_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66748-3
Online ISBN: 978-3-540-46769-4
eBook Packages: Springer Book Archive