Definition
An agent acting in a world makes observations, takes actions, and receives rewards for the actions taken. Given a history of such interactions, the agent must choose its next action so as to maximize the long-term sum of rewards. To do this well, an agent may take actions that are suboptimal in the short term but allow it to gather the information needed to take optimal or near-optimal actions later. Such information-gathering actions are generally called exploration actions.
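The trade-off described above can be illustrated with a minimal epsilon-greedy agent on a two-armed Bernoulli bandit. This is a sketch for illustration only; epsilon-greedy, the arm probabilities, and the parameter names here are standard textbook choices, not constructions from this entry (whose recommended reading concerns more sample-efficient schemes such as R-MAX). With probability epsilon the agent explores by pulling a random arm; otherwise it exploits its current empirical estimates.

```python
import random

def epsilon_greedy_bandit(arm_means, epsilon=0.1, steps=5000, seed=0):
    """Epsilon-greedy agent on a stationary Bernoulli bandit.

    arm_means -- true success probability of each arm (unknown to the agent)
    Returns (estimates, counts): empirical mean reward and pull count per arm.
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # empirical mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            # Exploration: a deliberately (possibly) suboptimal action,
            # taken to gather information about the arms.
            arm = rng.randrange(n_arms)
        else:
            # Exploitation: act greedily on current knowledge.
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the empirical mean for the pulled arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return estimates, counts
```

After enough steps the agent pulls the better arm far more often, while the small stream of exploratory pulls keeps its estimate of the worse arm from going stale. Note that a fixed epsilon keeps paying an exploration cost forever; the polynomial-time guarantees in the recommended reading come from schemes that direct and eventually stop exploration.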
Motivation
Since gathering information about the world generally means taking actions that are suboptimal compared with a later learned policy, minimizing the number of information-gathering actions directly serves the standard goal of reinforcement learning. In addition, understanding exploration well is key to understanding reinforcement learning well, since exploration is the aspect of reinforcement learning that is missing from other machine learning settings, such as supervised learning.
Recommended Reading
Abbeel, P., & Ng, A. (2005). Exploration and apprenticeship learning in reinforcement learning. In ICML 2005, Bonn, Germany.
Brafman, R. I., & Tennenholtz, M. (2002). R-MAX – A general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3, 213–231.
Brunskill, E., Leffler, B. R., Li, L., Littman, M. L., & Roy, N. (2008). CORL: A continuous-state offset-dynamics reinforcement learner. In UAI-08, Helsinki, Finland, July 2008.
Kakade, S. (2003). On the sample complexity of reinforcement learning. PhD thesis, Gatsby Computational Neuroscience Unit, University College London.
Kakade, S., Kearns, M., & Langford, J. (2003). Exploration in metric state spaces. In ICML 2003.
Kearns, M., & Koller, D. (1999). Efficient reinforcement learning in factored MDPs. In Proceedings of the 16th international joint conference on artificial intelligence (pp. 740–747). San Francisco: Morgan Kaufmann.
Kearns, M., & Singh, S. (1998). Near-optimal reinforcement learning in polynomial time. In ICML 1998 (pp. 260–268). San Francisco: Morgan Kaufmann.
Poupart, P., Vlassis, N., Hoey, J., & Regan, K. (2006). An analytic solution to discrete Bayesian reinforcement learning. In ICML 2006 (pp. 697–704). New York: ACM Press.
Strehl, A. L. (2007). Probably approximately correct (PAC) exploration in reinforcement learning. PhD thesis, Rutgers University.
Strehl, A. L., Li, L., Wiewiora, E., Langford, J., & Littman, M. L. (2006). PAC model-free reinforcement learning. In Proceedings of the 23rd international conference on machine learning (ICML 2006) (pp. 881–888).
Watkins, C., & Dayan, P. (1992). Q-learning. Machine Learning, 8, 279–292.
© 2011 Springer Science+Business Media, LLC
Langford, J. (2011). Efficient Exploration in Reinforcement Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_244
Print ISBN: 978-0-387-30768-8
Online ISBN: 978-0-387-30164-8