
View-Invariant Robot Adaptation to Human Action Timing

Conference paper

Intelligent Systems and Applications (IntelliSys 2018)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 868)

Abstract

In this work we describe a novel method that enables robots to adapt their action timing to the concurrent actions of a human partner in a repetitive joint task. We propose to exploit purely motion-based information to detect view-invariant dynamic instants of observed actions, i.e., moments at which the dynamics of the action change abruptly. We model such instants as local minima of the movement velocity profile; they mark temporal locations that are preserved under projective transformations, i.e., that survive the mapping onto the image plane and can therefore be considered view-invariant. Moreover, their generality allows them to adapt easily to a variety of human dynamics and settings. We first validate a computational method for detecting such instants offline, on a new dataset of cooking activities. We then propose an online implementation of the method and integrate the new functionality into the software framework of the iCub humanoid robot. Experimental testing of the online method demonstrates its robustness in predicting the right intervention time for the robot and in supporting the adaptation of the robot's action durations in Human-Robot Interaction (HRI) sessions.
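The method itself is detailed in the full paper; as a rough illustration of the core idea only, the sketch below (a hypothetical reconstruction, not the authors' code) detects candidate dynamic instants as local minima of a smoothed speed profile computed from a sampled 2D trajectory. The function name, parameters, and smoothing choices are all illustrative assumptions.

```python
# Hypothetical sketch of the core idea (not the authors' implementation):
# candidate "dynamic instants" as local minima of a movement speed profile.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmin

def dynamic_instants(trajectory, fps=30.0, sigma=2.0, order=3):
    """Return indices of local speed minima in a sampled 2D trajectory.

    trajectory : (N, 2) array of image coordinates, one row per frame.
    fps        : sampling rate used to scale finite differences.
    sigma      : Gaussian smoothing (in frames) applied to the speed profile.
    order      : half-width of the neighborhood used in the minimum test.
    """
    traj = np.asarray(trajectory, dtype=float)
    velocity = np.gradient(traj, 1.0 / fps, axis=0)   # per-frame velocity
    speed = np.linalg.norm(velocity, axis=1)          # scalar speed profile
    speed = gaussian_filter1d(speed, sigma=sigma)     # suppress sensor jitter
    return argrelmin(speed, order=order)[0]           # strict local minima

# Example: a back-and-forth reaching motion has speed minima at its
# turning points, the kind of instants the method targets.
t = np.linspace(0.0, 4.0 * np.pi, 400)
traj = np.stack([np.cos(t), np.zeros_like(t)], axis=1)
print(dynamic_instants(traj, fps=100.0))  # indices near motion reversals
```

In the paper the motion information is extracted directly from video, so in an online setting the speed profile would come from image-motion estimates rather than a tracked point; the smoothing and neighborhood parameters above are placeholders to be tuned per setup.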


Notes

1. The dataset and its annotation will soon be made available online. Motion capture sequences will also be provided.


Acknowledgment

The research presented here has been supported by the European CODEFROR project (FP7-PIRSES-2013-612555).

Author information

Corresponding author

Correspondence to Nicoletta Noceti.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Noceti, N., Odone, F., Rea, F., Sciutti, A., Sandini, G. (2019). View-Invariant Robot Adaptation to Human Action Timing. In: Arai, K., Kapoor, S., Bhatia, R. (eds) Intelligent Systems and Applications. IntelliSys 2018. Advances in Intelligent Systems and Computing, vol 868. Springer, Cham. https://doi.org/10.1007/978-3-030-01054-6_56

