Abstract
Reinforcement learning agents can successfully learn in a variety of difficult tasks. A fundamental problem is that they may learn slowly in complex environments, motivating the development of speedup methods such as transfer learning. Transfer improves learning by reusing behaviors learned in similar tasks, usually via an inter-task mapping that defines how a pair of tasks is related. This paper proposes a novel transfer learning technique that autonomously constructs an inter-task mapping using a novel combination of sparse coding, sparse projection learning, and sparse pseudo-input Gaussian processes. Experiments show successful transfer of information between two very different domains: the mountain car task and the pole swing-up task. This paper empirically shows that the learned inter-task mapping can be used to successfully (1) improve the performance of a learned policy given a fixed number of samples, (2) reduce the learning time needed by the algorithms to converge to a policy given a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.
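The following Python snippet is a minimal, illustrative sketch of the kind of pipeline the abstract describes, not the authors' implementation. It assumes a small set of paired state samples from the two tasks is available, uses scikit-learn's DictionaryLearning as a stand-in for the sparse coding and projection steps, and substitutes a standard GaussianProcessRegressor for the sparse pseudo-input Gaussian process; all names, dimensions, and sample counts are assumptions made for illustration.

```python
# Sketch (not the paper's method): sparse-code states from a source and a target
# task, then learn a GP regression between the sparse codes as an inter-task map.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical state samples: mountain car (position, velocity) as the source
# task and pole swing-up (angle, angular velocity) as the target task. The i-th
# source state is assumed to correspond to the i-th target state.
source_states = rng.uniform([-1.2, -0.07], [0.6, 0.07], size=(200, 2))
target_states = rng.uniform([-np.pi, -8.0], [np.pi, 8.0], size=(200, 2))

# Learn a sparse code (overcomplete dictionary) for each task's states.
source_coder = DictionaryLearning(n_components=8, alpha=0.5, random_state=0)
target_coder = DictionaryLearning(n_components=8, alpha=0.5, random_state=0)
source_codes = source_coder.fit_transform(source_states)
target_codes = target_coder.fit_transform(target_states)

# Regress from target-task codes to source-task codes; this regression plays the
# role of the learned inter-task mapping (a standard GP is used here purely to
# keep the sketch short and runnable).
mapping = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
mapping.fit(target_codes, source_codes)

# Map a new target-task state into the source-task state space: encode it,
# predict its source-task code, then decode with the source dictionary.
new_target_state = np.array([[0.3, 1.5]])
predicted_source_code = mapping.predict(target_coder.transform(new_target_state))
mapped_source_state = predicted_source_code @ source_coder.components_
print(mapped_source_state)
```

A mapping of this form can then be used to translate target-task samples into source-task samples (or vice versa), so that a policy or value function learned in one task can seed learning in the other.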
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ammar, H.B., Taylor, M.E., Tuyls, K., Weiss, G. (2012). Reinforcement Learning Transfer Using a Sparse Coded Inter-task Mapping. In: Cossentino, M., Kaisers, M., Tuyls, K., Weiss, G. (eds) Multi-Agent Systems. EUMAS 2011. Lecture Notes in Computer Science, vol 7541. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34799-3_1
DOI: https://doi.org/10.1007/978-3-642-34799-3_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-34798-6
Online ISBN: 978-3-642-34799-3