Abstract
In problems modeled as Markov Decision Processes (MDPs), knowledge transfer is closely related to the notions of generalization and state abstraction. Abstraction can be obtained through a factored representation that describes states by a set of features; the best action for a state can then be easily transferred to similar states, i.e., states with similar features. In this paper we compare forward and backward greedy feature selection for finding an appropriately compact set of features for such an abstraction, thus facilitating the transfer of knowledge to new problems. We also present heuristic versions of both approaches and compare all approaches on a discrete simulated navigation problem.
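To make the two selection directions concrete, the sketch below shows generic forward selection and backward elimination loops. It is a minimal illustration under stated assumptions, not the paper's algorithm: the scoring function `evaluate` (e.g., an estimate of the average return of a policy learned over the candidate feature subset) and all identifiers are hypothetical names introduced here.

```python
# Minimal sketch of greedy feature selection for factored-MDP state
# abstraction. `evaluate(features)` is an assumed black-box score of an
# abstraction built from `features` (e.g., average return of the resulting
# policy); it is illustrative, not an API from the paper.

def forward_selection(all_features, evaluate):
    """Start empty; greedily add the feature that most improves the score."""
    selected = []
    best_score = float("-inf")
    remaining = list(all_features)
    while remaining:
        scores = {f: evaluate(selected + [f]) for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_score:
            break  # no candidate strictly improves the abstraction; stop
        best_score = scores[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected

def backward_elimination(all_features, evaluate):
    """Start with all features; greedily drop the one whose removal hurts least."""
    selected = list(all_features)
    best_score = evaluate(selected)
    while len(selected) > 1:
        scores = {f: evaluate([g for g in selected if g != f])
                  for f in selected}
        f_drop = max(scores, key=scores.get)
        if scores[f_drop] < best_score:
            break  # every removal degrades the abstraction; stop
        best_score = scores[f_drop]  # ties favor the smaller feature set
        selected.remove(f_drop)
    return selected
```

Note the asymmetry in the stopping tests: the forward loop requires strict improvement to grow the set, while the backward loop accepts score-preserving removals, biasing both toward compact feature sets.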
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Bogdan, K.O.M., da Silva, V.F. (2013). Forward and Backward Feature Selection in Gradient-Based MDP Algorithms. In: Batyrshin, I., González Mendoza, M. (eds.) Advances in Artificial Intelligence. MICAI 2012. Lecture Notes in Computer Science, vol. 7629. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37807-2_33
DOI: https://doi.org/10.1007/978-3-642-37807-2_33
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-37806-5
Online ISBN: 978-3-642-37807-2