Abstract
Mirroring and synchronization of non-verbal behavior are important characteristics of human conduct, also in communication. The aims of this paper are to analyze the occurrences of mirroring gestures, comprising head movements, facial expressions, body postures and hand gestures, in first encounters, and to determine whether information about the gestures of an individual can be used to predict the presence and the class of the gestures of the interlocutor. The contribution of related speech tokens is also investigated. The analysis of the encounters shows that 20–30% of the head movements, facial expressions and body postures are mirrored in the corpus, while there are only a few occurrences of mirrored hand gestures. The latter are therefore not included in the prediction experiments. The results of the experiments, in which various machine learning algorithms have been applied, show that information about the shape and duration of the gestures of one participant contributes to the prediction of the presence and class of the gestures of the other participant, and that adding information about the related speech tokens in some cases improves the prediction performance. These results indicate that it is useful to take mirroring into account when designing and implementing cognition-aware infocommunicative devices.
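The prediction task described above — classifying the interlocutor's gesture from the shape and duration of one participant's gesture — can be sketched as a simple supervised classifier. The feature encoding, class labels and toy data below are illustrative assumptions, not values from the NOMCO corpus, and the nearest-neighbour classifier stands in for the various algorithms actually tested in the paper:

```python
# Hypothetical sketch of the prediction task: given features of one
# participant's gesture (a shape code and a duration in seconds), predict
# the class of the interlocutor's co-occurring gesture. Toy data only.
from math import dist

# (shape_code, duration_s) -> interlocutor gesture class (assumed labels)
training = [
    ((0, 0.4), "Nod"),
    ((0, 1.2), "Nod"),
    ((1, 0.6), "Smile"),
    ((1, 1.5), "Smile"),
    ((2, 0.8), "BodyForward"),
]

def predict(features):
    """Return the class of the nearest training example (1-NN)."""
    return min(training, key=lambda ex: dist(ex[0], features))[1]

print(predict((0, 0.5)))  # nearest neighbour is a "Nod" example
```

In the paper itself the feature vectors would also include the related speech tokens when those are available, which is what improves performance in some of the reported experiments.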
Notes
- 1.
Correlation \(r(3117)\), \(p<0.001\).
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
Cite this chapter
Navarretta, C. (2019). Mirroring and Prediction of Gestures from Interlocutor’s Behavior. In: Klempous, R., Nikodem, J., Baranyi, P. (eds) Cognitive Infocommunications, Theory and Applications. Topics in Intelligent Engineering and Informatics, vol 13. Springer, Cham. https://doi.org/10.1007/978-3-319-95996-2_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-95995-5
Online ISBN: 978-3-319-95996-2
eBook Packages: Intelligent Technologies and Robotics (R0)