chapter

Gesture Generation

Published: 02 October 2021



Published In

The Handbook on Socially Interactive Agents: 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition
ACM Books, September 2021, 538 pages
ISBN: 9781450387200
DOI: 10.1145/3477322

Publisher

Association for Computing Machinery, New York, NY, United States

Cited By

  • (2025) Human-like Nonverbal Behavior with MetaHumans in Real-World Interaction Studies: An Architecture Using Generative Methods and Motion Capture. In Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction, 1279–1283. DOI: 10.5555/3721488.3721663. Online publication date: 4-Mar-2025.
  • (2023) Look What I Made It Do - The ModelIT Method for Manually Modeling Nonverbal Behavior of Socially Interactive Agents. In International Conference on Multimodal Interaction, 200–204. DOI: 10.1145/3610661.3616549. Online publication date: 9-Oct-2023.
  • (2023) Virtuelle Realität und sozial interaktive Agenten. In Digital ist besser?! Psychologie der Online- und Mobilkommunikation, 261–278. DOI: 10.1007/978-3-662-66608-1_18. Online publication date: 29-Dec-2023.
  • (2022) Interactive Narrative and Story-telling. In The Handbook on Socially Interactive Agents, 463–492. DOI: 10.1145/3563659.3563674. Online publication date: 27-Oct-2022.
  • (2022) Health-Related Applications of Socially Interactive Agents. In The Handbook on Socially Interactive Agents, 403–436. DOI: 10.1145/3563659.3563672. Online publication date: 27-Oct-2022.
  • (2022) Long-Term Interaction with Relational SIAs. In The Handbook on Socially Interactive Agents, 195–260. DOI: 10.1145/3563659.3563667. Online publication date: 27-Oct-2022.
  • (2022) The Fabric of Socially Interactive Agents: Multimodal Interaction Architectures. In The Handbook on Socially Interactive Agents, 77–112. DOI: 10.1145/3563659.3563664. Online publication date: 27-Oct-2022.
  • (2022) The Handbook on Socially Interactive Agents. Online publication date: 27-Oct-2022.
