Abstract
Imitation learning can be considered an essential part of human development: people acquire knowledge from the instructions and demonstrations provided by other human experts. In order to make an agent capable of learning through demonstrations, we propose a relational framework for learning by imitation. Demonstrations and domain-specific knowledge are compactly represented in a logical language able to express complex relational processes. The agent interacts with a stochastic environment and incrementally receives demonstrations. It interacts actively with the human expert, deciding the next action to execute and requesting demonstrations based on its current learned policy. The framework has been implemented and validated with experiments in simulated agent domains.
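The active interaction described above can be sketched as a loop in which the agent acts from its learned policy whenever that policy covers the current state, and requests a demonstration from the expert otherwise. The sketch below is illustrative only, under assumed interfaces (a toy environment, a tabular stand-in for the relational policy, and hypothetical `suggest`/`learn`/`demonstrate` methods), not the paper's actual implementation.

```python
# Hedged sketch of an active imitation-learning loop: the agent acts
# from its current policy and asks the expert only for uncovered states.
# All class and method names are illustrative assumptions.

class ToyCorridor:
    """1-D corridor: start at position 0, goal at position `goal`."""
    def __init__(self, goal=3):
        self.goal = goal
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos += 1 if action == "right" else -1
        return self.pos, self.pos == self.goal

class Expert:
    """Always moves toward the goal."""
    def demonstrate(self, state):
        return "right"

class TabularPolicy:
    """Stand-in for the relational policy: a state -> action table."""
    def __init__(self):
        self.rules = {}
    def suggest(self, state):
        return self.rules.get(state)   # None means state is not covered
    def learn(self, state, action):
        self.rules[state] = action     # incremental policy refinement

def run_episode(env, expert, policy):
    """Run one episode; return the number of demonstrations requested."""
    state, done, requests = env.reset(), False, 0
    while not done:
        action = policy.suggest(state)
        if action is None:             # policy silent: ask the expert
            action = expert.demonstrate(state)
            policy.learn(state, action)
            requests += 1
        state, done = env.step(action)
    return requests
```

Over repeated episodes the number of expert requests shrinks as the policy's coverage grows, which is the qualitative behavior the framework aims for.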
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Bombini, G., Di Mauro, N., Basile, T.M.A., Ferilli, S., Esposito, F. (2009). Relational Learning by Imitation. In: Håkansson, A., Nguyen, N.T., Hartung, R.L., Howlett, R.J., Jain, L.C. (eds) Agent and Multi-Agent Systems: Technologies and Applications. KES-AMSTA 2009. Lecture Notes in Computer Science(), vol 5559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01665-3_28
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01664-6
Online ISBN: 978-3-642-01665-3