Abstract
In this work we describe a novel method that enables a robot to adapt its action timing to the concurrent actions of a human partner in a repetitive joint task. We propose to exploit purely motion-based information to detect view-invariant dynamic instants of observed actions, i.e. moments in which the action dynamics undergo an abrupt change. We model such instants as local minima of the movement velocity profile; they mark temporal locations that are preserved under projective transformations, i.e. they survive the mapping onto the image plane and can therefore be considered view-invariant. Moreover, their generality allows them to adapt easily to a variety of human dynamics and settings. We first validate a computational method that detects such instants offline, on a new dataset of cooking activities. We then propose an online implementation of the method and integrate the new functionality into the software framework of the iCub humanoid robot. Experimental testing of the online method proves its robustness in predicting the right intervention time for the robot and in supporting the adaptation of its action durations in Human-Robot Interaction (HRI) sessions.
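The core idea above, detecting dynamic instants as local minima of the velocity profile, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes a pre-tracked 2D point trajectory on the image plane (the paper works from dense motion estimates), and the function name `dynamic_instants` and the toy trajectory are our own.

```python
import math

def dynamic_instants(trajectory, fps=30.0):
    """Detect candidate dynamic instants as local minima of the speed profile.

    trajectory: list of (x, y) image-plane points, one per frame.
    Returns the indices of frame transitions where speed is a strict local minimum.
    """
    # Speed profile: Euclidean displacement per frame, scaled to units/second.
    speed = [
        math.hypot(x1 - x0, y1 - y0) * fps
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])
    ]
    # A dynamic instant is a point whose speed is lower than both neighbours.
    return [
        i for i in range(1, len(speed) - 1)
        if speed[i] < speed[i - 1] and speed[i] < speed[i + 1]
    ]

# Toy example: a point that decelerates, nearly stops, then accelerates again.
traj = [(0, 0), (3, 0), (5, 0), (6, 0), (6.2, 0), (7, 0), (9, 0), (12, 0)]
print(dynamic_instants(traj))  # → [3], the near-stop between frames 3 and 4
```

Because the test compares only the ordering of speeds, not their absolute values, the detected instants are unaffected by uniform rescaling of the trajectory, which is part of what makes such events robust across viewpoints.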
Notes
- 1.
The dataset and its annotation will soon be made available online. Motion capture sequences will also be provided.
Acknowledgment
The research presented here has been supported by the European CODEFROR project (FP7-PIRSES-2013-612555).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Noceti, N., Odone, F., Rea, F., Sciutti, A., Sandini, G. (2019). View-Invariant Robot Adaptation to Human Action Timing. In: Arai, K., Kapoor, S., Bhatia, R. (eds) Intelligent Systems and Applications. IntelliSys 2018. Advances in Intelligent Systems and Computing, vol 868. Springer, Cham. https://doi.org/10.1007/978-3-030-01054-6_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01053-9
Online ISBN: 978-3-030-01054-6
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)