Abstract
This study describes navigation guidelines, and corresponding analytic motion models, for a mobile interaction robot that moves together with a human partner. In particular, we address the impact of gestures on the coupled motion of this human-robot pair.
We posit that the robot needs to adjust its navigation according to its gestures in a natural manner, mimicking human-human locomotion. To justify this suggestion, we first examine the motion patterns of real-world pedestrian dyads with respect to four affective components of interaction (i.e., gestures). Three benchmark variables are derived from the pedestrian trajectories, and their behavior is investigated under three conditions: (i) presence/absence of isolated gestures, (ii) varying number of simultaneously performed (i.e., concurring) gestures, and (iii) varying size of the environment.
It is observed empirically and confirmed quantitatively that the benchmark variables differ significantly between presence and absence of gestures, whereas no prominent variation exists with respect to the type of gesture or the number of concurring gestures. Moreover, the size of the environment is shown to be a crucial factor in the sustainability of the group structure.
Subsequently, we propose analytic models to represent these behavioral variations and show that the models reflect the observed distinctions with significant accuracy. Finally, we propose an implementation scheme for integrating the analytic models into practical applications. Our results can serve as navigation guidelines for a robot, providing a more natural interaction experience for the human counterpart of a robot-pedestrian group on the move.
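The abstract does not spell out the benchmark variables in this excerpt, but note 2 below indicates that one of them is an angle \(\phi \) whose average is around \(\pi /2\), consistent with the angle between the dyad's connecting line and its walking direction (side-by-side walking gives \(\phi \approx \pi /2\)). As a minimal sketch, assuming 2D positions and velocities for the two pedestrians, the interpersonal distance and \(\phi \) could be computed as follows (the function name `dyad_variables` is hypothetical, not from the paper):

```python
import math


def dyad_variables(p1, v1, p2, v2):
    """Compute two plausible dyad benchmark variables from one time step:
    the interpersonal distance and the angle phi between the line
    connecting the pair and the pair's average walking direction.
    For a side-by-side pair, phi is close to pi/2.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    distance = math.hypot(dx, dy)
    # average walking direction of the pair
    vx, vy = (v1[0] + v2[0]) / 2.0, (v1[1] + v2[1]) / 2.0
    heading = math.atan2(vy, vx)
    link = math.atan2(dy, dx)
    # wrap the difference into (-pi, pi], then take its magnitude
    diff = (link - heading + math.pi) % (2 * math.pi) - math.pi
    phi = abs(diff)
    return distance, phi
```

For example, two pedestrians one meter apart, both walking along the x-axis, yield a distance of 1 and \(\phi = \pi /2\).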
Notes
1. If a dyad performs more than one gesture (concurring gestures), it is assigned multiple labels.
2. For the ANOVA relating \(\phi \), we use only the values \(\phi \in [0, \pi /2]\), so as to highlight differences in spread between distributions with the same average value \(\approx \pi /2\).
3. There was no dyad that performed all four gestures at once.
4. Note that in modeling \(\phi \), the distinguishing effect is carried only by \(\kappa \), whereas \(\mu \) is always around \(\pi /2\), as expected.
Acknowledgments
This study was supported by JSPS KAKENHI Grant Numbers 15H05322 and 16K12505.
© 2017 Springer International Publishing AG
Cite this paper
Yücel, Z., Zanlungo, F., Shiomi, M. (2017). Walk the Talk: Gestures in Mobile Interaction. In: Kheddar, A., et al. Social Robotics. ICSR 2017. Lecture Notes in Computer Science, vol. 10652. Springer, Cham. https://doi.org/10.1007/978-3-319-70022-9_22
Print ISBN: 978-3-319-70021-2
Online ISBN: 978-3-319-70022-9