Relational Learning by Imitation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5559)

Abstract

Imitative learning can be considered an essential task in human development: people acquire knowledge from instructions and demonstrations provided by other human experts. In order to make an agent capable of learning through demonstrations, we propose a relational framework for learning by imitation. Demonstrations and domain-specific knowledge are compactly represented in a logical language able to express complex relational processes. The agent interacts in a stochastic environment and incrementally receives demonstrations. It actively interacts with the human expert, deciding the next action to execute and requesting demonstrations based on the currently learned policy. The framework has been implemented and validated with experiments in simulated agent domains.
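
The abstract outlines an active imitation loop: the agent executes an action on its own whenever the learned policy covers the current situation, and otherwise requests a demonstration from the human expert, which is then used to refine the policy incrementally. The Python sketch below illustrates only that control flow, not the authors' implementation: the relational rule learner is replaced by an exact-match lookup table, and CorridorEnv, RelationalPolicy, imitation_loop and the hand-coded expert are hypothetical names introduced here for illustration.

    import random

    class RelationalPolicy:
        """Toy stand-in for the incrementally learned relational policy."""

        def __init__(self):
            # Maps a frozenset of ground facts describing a state to an action.
            self.rules = {}

        def covers(self, state_facts):
            # Does a learned rule apply to this state description?
            return frozenset(state_facts) in self.rules

        def predict(self, state_facts):
            return self.rules[frozenset(state_facts)]

        def refine(self, state_facts, expert_action):
            # Incremental update from a demonstration: store the expert's
            # action for this situation.
            self.rules[frozenset(state_facts)] = expert_action


    class CorridorEnv:
        """Toy stochastic corridor: start at cell 0, reach the rightmost cell."""

        def __init__(self, length=5, slip=0.1):
            self.length, self.slip, self.pos = length, slip, 0

        def reset(self):
            self.pos = 0
            return self._facts()

        def _facts(self):
            # Relational-style state description as a list of ground facts.
            return [("at", self.pos), ("goal", self.length - 1)]

        def step(self, action):
            # With probability `slip` the action has no effect (stochasticity).
            if random.random() > self.slip:
                delta = 1 if action == "right" else -1
                self.pos = min(max(self.pos + delta, 0), self.length - 1)
            return self._facts(), self.pos == self.length - 1


    def expert(state_facts):
        # Hand-coded expert used in place of the human demonstrator.
        return "right"


    def imitation_loop(env, expert_fn, policy, episodes=10):
        """Act when the policy covers the state; otherwise request a demonstration."""
        for _ in range(episodes):
            state_facts, done = env.reset(), False
            while not done:
                if policy.covers(state_facts):
                    # The learned policy applies: execute its action.
                    action = policy.predict(state_facts)
                else:
                    # No rule covers this state: ask the expert and refine.
                    action = expert_fn(state_facts)
                    policy.refine(state_facts, action)
                state_facts, done = env.step(action)
        return policy


    if __name__ == "__main__":
        learned = imitation_loop(CorridorEnv(), expert, RelationalPolicy())
        print("Learned %d state-action rules" % len(learned.rules))

Swapping the lookup table for an incremental first-order rule learner, as the paper proposes, would change only RelationalPolicy; the decision of when to act autonomously and when to query the expert stays the same.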


Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bombini, G., Di Mauro, N., Basile, T.M.A., Ferilli, S., Esposito, F. (2009). Relational Learning by Imitation. In: Håkansson, A., Nguyen, N.T., Hartung, R.L., Howlett, R.J., Jain, L.C. (eds) Agent and Multi-Agent Systems: Technologies and Applications. KES-AMSTA 2009. Lecture Notes in Computer Science, vol 5559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01665-3_28

  • DOI: https://doi.org/10.1007/978-3-642-01665-3_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01664-6

  • Online ISBN: 978-3-642-01665-3

  • eBook Packages: Computer Science, Computer Science (R0)
