Abstract
We identify two fundamental links between case-based reasoning (CBR) and an adaptive agent that learns by trial and error without a model of its environment. The first link concerns the most efficient exploitation of the experience the agent has collected by interacting with its environment; the second relates to the acquisition and representation of a suitable behavior policy. Combining both, we develop a state-action value function approximation mechanism that relies on case-based, approximate transition graphs and forms the basis on which the agent improves its behavior. We evaluate our approach empirically on dynamic control tasks.
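To make the abstract's idea concrete, the following is a minimal sketch of state-action value function approximation over a case-based, approximate transition graph. It assumes a nearest-neighbor case base of stored transitions and plain Q-value iteration over the graph's links; all names (TransitionCase, CaseBasedATG, greedy_action) and design details are illustrative assumptions, not the authors' implementation.

```python
import math

class TransitionCase:
    """One unit of collected experience: a (state, action, reward, next_state)
    transition, annotated with a Q-value estimate. Hypothetical structure."""
    def __init__(self, state, action, reward, next_state):
        self.state = state            # observed state (tuple of floats)
        self.action = action          # action the agent executed
        self.reward = reward          # immediate reward received
        self.next_state = next_state  # observed successor state
        self.q = 0.0                  # Q-value estimate for (state, action)

def dist(s1, s2):
    """Euclidean distance between two states, used as the case similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

class CaseBasedATG:
    """Approximate transition graph over a case base: every case's successor
    state is linked to the k stored cases whose states lie nearest to it."""
    def __init__(self, cases, gamma=0.95, k=1):
        self.cases = cases
        self.gamma = gamma
        # Graph edges: approximate each observed successor state by the
        # k most similar states found elsewhere in the case base.
        self.links = {
            id(c): sorted(cases, key=lambda d: dist(c.next_state, d.state))[:k]
            for c in cases
        }

    def value_iteration(self, sweeps=100):
        """Sweep the graph, backing up each case's Q-value from the best
        Q-value reachable via its approximate successor links."""
        for _ in range(sweeps):
            for c in self.cases:
                best_next = max((d.q for d in self.links[id(c)]), default=0.0)
                c.q = c.reward + self.gamma * best_next

    def greedy_action(self, state, k=3):
        """Greedy policy: among the k cases nearest to the query state,
        imitate the action of the one with the highest Q-value."""
        nearest = sorted(self.cases, key=lambda c: dist(state, c.state))[:k]
        return max(nearest, key=lambda c: c.q).action
```

In this sketch the graph's edges stand in for the unknown transition model: each stored successor state is matched to its most similar stored predecessor states, so dynamic programming can proceed over the collected experience without further interaction with the environment.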
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this paper
Gabel, T., Riedmiller, M. (2007). An Analysis of Case-Based Value Function Approximation by Approximating State Transition Graphs. In: Weber, R.O., Richter, M.M. (eds.) Case-Based Reasoning Research and Development. ICCBR 2007. Lecture Notes in Computer Science, vol. 4626. Springer, Berlin, Heidelberg.
DOI: https://doi.org/10.1007/978-3-540-74141-1_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74138-1
Online ISBN: 978-3-540-74141-1
eBook Packages: Computer Science (R0)