
Towards Human-Level Semantics Understanding of Human-Centered Object Manipulation Tasks for HRI: Reasoning About Effect, Ability, Effort and Perspective Taking

Published in: International Journal of Social Robotics

Abstract

In its lifetime, a robot should be able to autonomously understand the semantics of different tasks in order to perform them effectively in different situations. In this context, it is important to distinguish the meaning of a task (in terms of its desired effect) from the means of achieving it. Our focus is on tasks in which one agent is required to perform a task for another agent, such as give, show, hide and make-accessible. In this paper, we identify that high-level, human-centered combined reasoning, based on perspective taking, effort and ability analyses, is the key to understanding the semantics of such tasks. By combining these aspects, the robot infers sets of hierarchies of facts, which serve to analyze the effect of a task. We adapt the explanation-based learning approach, enabling task understanding from the very first demonstration and continuous refinement with new demonstrations. We argue that such a symbolic-level understanding of a task, which is not bound to the trajectory, kinematic structure or shape of the robot, facilitates generalization to novel situations and eases the transfer of acquired knowledge among heterogeneous robots. Furthermore, knowledge of tasks at such a human-understandable level of abstraction will enrich natural human–robot interaction.
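To make the idea concrete, the following minimal Python sketch illustrates the flavor of the approach: per-agent facts such as visibility and reachability, qualified by an effort level, are compared before and after a demonstration, and the task's effect emerges as the symbolic difference. The predicate names, effort scale and data structures here are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch only: hypothetical predicates and effort scale, not the
# authors' system. It mimics the paper's idea of combining perspective taking,
# ability and effort analyses into per-agent symbolic facts, and of describing
# a task's semantics as the change (effect) in those facts.

from dataclasses import dataclass

# Assumed effort scale, ordered from least to most demanding.
EFFORT_LEVELS = ["no_effort", "arm_effort", "torso_effort", "whole_body_effort"]

@dataclass(frozen=True)
class Fact:
    predicate: str   # e.g. "visible", "reachable"
    agent: str       # whose perspective or ability the fact is about
    obj: str         # the object the fact refers to
    effort: str      # minimum effort level at which the fact holds

def effect_of_demonstration(facts_before: set, facts_after: set) -> dict:
    """Describe a demonstrated task as the symbolic difference between the
    world state before and after, from each agent's perspective."""
    return {
        "achieved": facts_after - facts_before,   # facts the task made true
        "destroyed": facts_before - facts_after,  # facts the task made false
    }

# Toy "make-accessible"-style demonstration for a human partner.
before = {
    Fact("visible",   "human", "bottle", "whole_body_effort"),
    Fact("reachable", "robot", "bottle", "arm_effort"),
}
after = {
    Fact("visible",   "human", "bottle", "no_effort"),
    Fact("reachable", "human", "bottle", "arm_effort"),
    Fact("reachable", "robot", "bottle", "arm_effort"),
}

if __name__ == "__main__":
    effect = effect_of_demonstration(before, after)
    for kind, facts in effect.items():
        for fact in sorted(facts, key=str):
            print(kind, fact)
```

Because such an effect description mentions only agents, objects and effort-qualified facts, and never joint trajectories or the robot's kinematics, the same learned description could in principle be reused in novel situations or handed to a different robot, which is the generalization argument made in the abstract.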




Author information

Correspondence to Amit Kumar Pandey.

Additional information

This work was partially conducted within the Romeo2 (Humanoid Robot Assistant and Companion for Everyday Life) project (http://projetromeo.com/), funded by BPIFrance in the framework of the Structuring Projects of Competitiveness Clusters (PSPC).


Cite this article

Pandey, A.K., Alami, R. Towards Human-Level Semantics Understanding of Human-Centered Object Manipulation Tasks for HRI: Reasoning About Effect, Ability, Effort and Perspective Taking. Int J of Soc Robotics 6, 593–620 (2014). https://doi.org/10.1007/s12369-014-0246-y
