
20 - Body Movements Generation for Virtual Characters and Social Robots

from Part III - Machine Synthesis of Social Signals

Published online by Cambridge University Press: 13 July 2017

Aryel Beck, Nanyang Technological University
Zerrin Yumak, Nanyang Technological University
Nadia Magnenat-Thalmann, Nanyang Technological University, Singapore

Edited by:
Judee K. Burgoon, University of Arizona
Nadia Magnenat-Thalmann, Université de Genève
Maja Pantic, Imperial College London
Alessandro Vinciarelli, University of Glasgow

Summary

Introduction

It has long been accepted in traditional animation that a character's expressions must be conveyed through the whole body as well as the face (Thomas & Johnston, 1995). Existing artificial agents express themselves using facial expressions, vocal intonation, body movements, and postures. Body language has been a focus of interest in research on embodied agents (virtual humans and social robots). It can be separated into four different areas that should be considered when animating virtual characters as well as social robots.

(1) Postures: postures are specific positions that the body holds over a time frame. They are an important modality during social interaction, as they can signal liking and affiliation (Lakin et al., 2003). Moreover, it has been established that postures are an effective medium for humans to express emotion (De Silva & Bianchi-Berthouze, 2004). Thus, virtual humans and social robots should be endowed with the capability to display adequate body postures.

(2) Movements or gestures: throughout most of our daily interactions, gestures are used along with speech for effective communication (Cassell, 2000). For a review of the types of gestures that occur during interactions, the reader can refer to Cassell (2000). Movements are also important for expressing emotions. Indeed, it has been shown that many emotions are differentiated by characteristic body movements and that these are effective cues for judging the emotional state of other people in the absence of facial and vocal cues (Atkinson et al., 2004). Body movements include the movements themselves as well as the manner in which they are performed, i.e. the speed, dynamics, and curvature of the motion, qualities captured by the traditional animation principles (Thomas & Johnston, 1995; Beck, 2012). Moreover, body movements occur in interaction with other elements, such as speech, facial expressions, and gaze, all of which need to be synchronised (a sketch of such noise-driven motion follows this list).

(3) Proxemics: proxemics refers to the distance between individuals during a social interaction. It, too, is indicative of emotional state: for example, angry people have a tendency to reduce the distance during social interaction, although this reduction would also be evident between intimate people (a minimal proxemics check is also sketched below).
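The movement dynamics described under (2) can be made concrete. The sketch below is a minimal illustration in the spirit of the Perlin-noise approach of Beck, Hiolle, and Cañamero (2013), not the chapter's actual implementation: it layers smooth pseudo-random offsets onto a neutral joint pose and scales their speed and amplitude with arousal. The joint names and numeric ranges are assumptions made for the example.

```python
# Minimal sketch (not the chapter's implementation) of noise-driven emotional
# body motion: smooth pseudo-random offsets are layered over a neutral pose,
# and their speed and amplitude scale with arousal. Joint names and constants
# are illustrative assumptions.
import math
import random

def value_noise_1d(t: float, seed: int = 0) -> float:
    """Smooth 1-D value noise: cosine interpolation between random lattice values."""
    def lattice(i: int) -> float:
        # Deterministic pseudo-random value in [-1, 1] for lattice point i.
        return random.Random(i * 1_000_003 + seed).uniform(-1.0, 1.0)
    i0 = math.floor(t)
    frac = t - i0
    w = (1.0 - math.cos(math.pi * frac)) / 2.0  # ease from 0 at i0 to 1 at i0 + 1
    return lattice(i0) * (1.0 - w) + lattice(i0 + 1) * w

def emotional_offset(t: float, arousal: float, seed: int = 0) -> float:
    """Joint-angle offset (radians) at time t; higher arousal yields faster,
    larger movements (the ranges below are assumed, not from the chapter)."""
    frequency = 0.5 + 1.5 * arousal    # time scaling of the noise
    amplitude = 0.02 + 0.08 * arousal  # radians
    return amplitude * value_noise_1d(t * frequency, seed)

# Usage: perturb a neutral pose of a hypothetical three-joint arm.
neutral_pose = {"shoulder_pitch": 0.3, "elbow": 0.8, "wrist": 0.0}
for step in range(5):
    t = step * 0.1  # seconds
    pose = {joint: angle + emotional_offset(t, arousal=0.8, seed=k)
            for k, (joint, angle) in enumerate(neutral_pose.items())}
    print(pose)
```

Because the noise is smooth and deterministic per joint, the result is continuous idle-like motion rather than jitter; an animation or control loop would evaluate it once per frame.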
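The proxemics notion in (3) can likewise be operationalised. The following sketch classifies interpersonal distance into Hall's commonly cited zones (intimate, personal, social, public) and stops an approach at a comfort threshold that shrinks with intimacy; the 0-to-1 intimacy scale and the scaling factor are assumptions for illustration, not values from the chapter.

```python
# Minimal proxemics sketch: classify distance into Hall's commonly cited
# zones and decide whether an agent may keep approaching. The intimacy
# scale and the comfort-distance formula are illustrative assumptions.
HALL_ZONES = [               # (upper bound in metres, zone name)
    (0.45, "intimate"),
    (1.2, "personal"),
    (3.6, "social"),
    (float("inf"), "public"),
]

def zone_of(distance_m: float) -> str:
    """Return the Hall zone that contains the given interpersonal distance."""
    for upper, name in HALL_ZONES:
        if distance_m < upper:
            return name
    return "public"

def should_keep_approaching(distance_m: float, intimacy: float) -> bool:
    """Stop before entering a stranger's personal space; allow closer approach
    as intimacy (assumed 0..1 scale) increases."""
    comfort_m = 1.2 - 0.75 * intimacy  # ~1.2 m for a stranger, ~0.45 m when intimate
    return distance_m > comfort_m

print(zone_of(0.9))                       # 'personal'
print(should_keep_approaching(1.0, 0.0))  # False: too close for a stranger
print(should_keep_approaching(1.0, 0.9))  # True: an intimate partner may approach
```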


References

Adams, R. & Kleck, R. (2005). Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion, 5, 3–11.
Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33(6), 717–746.
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2003). Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin, 29(7), 819–833.
Barakova, E. I. & Lourens, T. (2010). Expressing and interpreting emotional movements in social games with robots. Personal and Ubiquitous Computing, 14, 457–467.
Beck, A. (2012). Perception of emotional body language displayed by animated characters. PhD dissertation, University of Portsmouth.
Beck, A., Cañamero, L., Damiano, L., et al. (2011). Children interpretation of emotional body language displayed by a robot. In Proceedings of the International Conference on Social Robotics (pp. 62–70), Amsterdam.
Beck, A., Cañamero, L., Hiolle, A., et al. (2013). Interpretation of emotional body language displayed by a humanoid robot: A case study with children. International Journal of Social Robotics, 5(3), 325–334.
Beck, A., Hiolle, A., & Cañamero, L. (2013). Using Perlin noise to generate emotional expressions in a robot. In Proceedings of the Annual Meeting of the Cognitive Science Society (pp. 1845–1850).
Beck, A., Hiolle, A., Mazel, A., & Cañamero, L. (2010). Interpretation of emotional body language displayed by robots. In Proceedings of the 3rd International Workshop on Affective Interaction in Natural Environments (pp. 37–42).
Beck, A., Stevens, B., Bard, K., & Cañamero, L. (2012). Emotional body language displayed by artificial agents. Transactions on Interactive Intelligent Systems, 2(1), art. 2.
Belpaeme, T., Baxter, P., Read, R., et al. (2012). Multimodal child–robot interaction: Building social bonds. Journal of Human–Robot Interaction, 1(2), 33–53.
Bickmore, T. (2008). Framing and interpersonal stance in relational agents. In Autonomous Agents and Multi-Agent Systems, Workshop on Why Conversational Agents Do What They Do: Functional Representations for Generating Conversational Agent Behavior, Estoril, Portugal.
Breazeal, C., Brooks, A., Gray, J., et al. (2004). Tutelage and collaboration for humanoid robots. International Journal of Humanoid Robotics, 1(2), 315–348.
Busso, C., Deng, Z., Grimm, M., Neumann, U., & Narayanan, S. (2007). Spoken and multimodal dialog systems and applications – rigid head motion in expressive speech animation: Analysis and synthesis. IEEE Transactions on Audio, Speech, and Language Processing, 15(3), 1075.
Cañamero, L. (2008). Animating affective robots for social interaction. In L. Cañamero & R. Aylett (Eds), Animating Expressive Characters for Social Interaction (pp. 103–121). Amsterdam: John Benjamins.
Cao, Y., Tien, W. C., Faloutsos, P., & Pighin, F. (2005). Expressive speech-driven facial animation. ACM Transactions on Graphics, 24(4), 1283–1302.
Cassell, J. (2000). Nudge nudge wink wink: Elements of face-to-face conversation for embodied conversational agents. In J. Cassell, J. Sullivan, S. Prevost, & E. Churchill (Eds), Embodied Conversational Agents (pp. 1–27). Cambridge, MA: MIT Press.
Cassell, J., Vilhjálmsson, H., & Bickmore, T. (2001). BEAT: The Behavior Expression Animation Toolkit. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles.
Cig, C., Kasap, Z., Egges, A., & Magnenat-Thalmann, N. (2010). Realistic emotional gaze and head behavior generation based on arousal and dominance factors. In R. Boulic, Y. Chrysanthou, & T. Komura (Eds), Motion in Games (vol. 6459, pp. 278–289). Berlin: Springer.
Clavel, C., Plessier, J., Martin, J.-C., Ach, L., & Morel, B. (2009). Combining facial and postural expressions of emotions in a virtual character. In Z. Ruttkay, M. Kipp, A. Nijholt, & H. Vilhjálmsson (Eds), Intelligent Virtual Agents (vol. 5773, pp. 287–300). Berlin: Springer.
Coombes, S. A., Cauraugh, J. H., & Janelle, C. M. (2006). Emotion and movement: Activation of defensive circuitry alters the magnitude of a sustained muscle contraction. Neuroscience Letters, 396(3), 192–196.
Coulson, M. (2004). Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. Journal of Nonverbal Behavior, 28, 117–139.
Dautenhahn, K. (2013). Human–robot interaction. In M. Soegaard & R. F. Dam (Eds), The Encyclopedia of Human–Computer Interaction (2nd edn). Aarhus, Denmark: The Interaction Design Foundation.
De Silva, P. R. & Bianchi-Berthouze, N. (2004). Modeling human affective postures: An information theoretic characterization of posture features. Computer Animation and Virtual Worlds, 15(3–4), 269–276.
Dovidio, J. & Ellyson, S. (1985). Pattern of visual dominance behavior in humans. In S. Ellyson & J. Dovidio (Eds), Power, Dominance, and Nonverbal Behavior (pp. 129–149). New York: Springer.
Egges, A., Molet, T., & Magnenat-Thalmann, N. (2004). Personalised real-time idle motion synthesis. In Proceedings of the 12th Pacific Conference on Computer Graphics and Applications (pp. 121–130).
Fredrickson, B. (2004). The broaden-and-build theory of positive emotions. Philosophical Transactions: Biological Sciences, 359, 1367–1377.
Harmon-Jones, E., Gable, P., & Price, T. (2011). Toward an understanding of the influence of affective states on attentional tuning: Comment on Friedman and Förster (2010). Psychological Bulletin, 137, 508–512.
Hartmann, B., Mancini, M., Buisine, S., & Pelachaud, C. (2005). Design and evaluation of expressive gesture synthesis for embodied conversational agents. In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 1095–1096), New York.
Heylen, D., Kopp, S., Marsella, S., Pelachaud, C., & Vilhjálmsson, H. (2008). The next step towards a function markup language. In H. Prendinger, J. Lester, & M. Ishizuka (Eds), Intelligent Virtual Agents (vol. 5208, pp. 270–280). Berlin: Springer.
Huang, C.-M. & Mutlu, B. (2014). Learning-based modeling of multimodal behaviors for humanlike robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human–Robot Interaction (pp. 57–64), New York.
Huang, L., Galinsky, A. D., Gruenfeld, D. H., & Guillory, L. E. (2010). Powerful postures versus powerful roles: Which is the proximate correlate of thought and behavior? Psychological Science, 22(1), 95–102.
Ishiguro, H. (2005). Android science: Toward a new cross-disciplinary framework. In Proceedings of the 27th Annual Conference of the Cognitive Science Society: Toward Social Mechanisms of Android Science (A CogSci 2005 Workshop) (pp. 1–6).
Kallmann, M. & Marsella, S. (2005). Hierarchical motion controllers for real-time autonomous virtual humans. Lecture Notes in Computer Science, 3661, 253–265.
Kipp, M., Neff, M., Kipp, K., & Albrecht, I. (2007). Towards natural gesture synthesis: Evaluating gesture units in a data-driven approach to gesture synthesis. Lecture Notes in Computer Science, 4722, 15–28.
Kleinsmith, A., Bianchi-Berthouze, N., & Steed, A. (2011). Automatic recognition of non-acted affective postures. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 41(4), 1027–1038.
Kleinsmith, A., De Silva, P. R., & Bianchi-Berthouze, N. (2006). Cross-cultural differences in recognizing affect from body posture. Interacting with Computers, 18(6), 1371–1389.
Knapp, M. (1972). Nonverbal Communication in Human Interaction. New York: Holt, Rinehart and Winston.
Koenemann, J. & Bennewitz, M. (2012). Whole-body imitation of human motions with a Nao humanoid. In Proceedings of the 7th Annual ACM/IEEE International Conference on Human–Robot Interaction (pp. 425–426), New York.
Kopp, S., Krenn, B., Marsella, S., et al. (2006). Towards a common framework for multimodal generation: The behavior markup language. In Proceedings of the 6th International Conference on Intelligent Virtual Agents (pp. 205–217).
Krenn, B. & Sieber, G. (2008). Functional markup for behavior planning: Theory and practice. In Proceedings of the AAMAS 2008 Workshop: Functional Markup Language. Why Conversational Agents Do What They Do.
Laban, R. & Ullmann, L. (1971). The Mastery of Movement. Boston: Plays.
Lakin, J., Jefferis, V., Cheng, C., & Chartrand, T. (2003). The chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious mimicry. Journal of Nonverbal Behavior, 27(3), 145–162.
Lee, J. & Marsella, S. (2006). Nonverbal behavior generator for embodied conversational agents. Lecture Notes in Computer Science, 4133, 243–255.
Lee, J. & Marsella, S. (2010). Predicting speaker head nods and the effects of affective information. IEEE Transactions on Multimedia, 12(6), 552–562.
Lee, J. & Marsella, S. (2012). Modeling speaker behavior: A comparison of two approaches. Lecture Notes in Computer Science, 7502, 161–174.
Magnenat-Thalmann, N. & Thalmann, D. (2005). Handbook of Virtual Humans. Hoboken, NJ: John Wiley & Sons.
Mutlu, B., Kanda, T., Forlizzi, J., Hodgins, J., & Ishiguro, H. (2012). Conversational gaze mechanisms for humanlike robots. Transactions on Interactive Intelligent Systems, 1(2), art. 12.
Neff, M., Kipp, M., Albrecht, I., & Seidel, H.-P. (2008). Gesture modeling and animation based on a probabilistic re-creation of speaker style. ACM Transactions on Graphics, 27(1), art. 5.
Nunez, J., Briseno, A., Rodriguez, D., Ibarra, J., & Rodriguez, V. (2012). Explicit analytic solution for inverse kinematics of Bioloid humanoid robot. In Brazilian Robotics Symposium and Latin American Robotics Symposium (pp. 33–38).
Perlin, K. (2002). Improving noise. ACM Transactions on Graphics, 21(3), 681–682.
Pierris, G. & Lagoudakis, M. (2009). An interactive tool for designing complex robot motion patterns. In Proceedings of the IEEE International Conference on Robotics and Automation (pp. 4013–4018).
Roether, C. L., Omlor, L., Christensen, A., & Giese, M. A. (2009). Critical features for the perception of emotion from gait. Journal of Vision, 9(6), 15.
Salem, M., Kopp, S., Wachsmuth, I., Rohlfing, K., & Joublin, F. (2012). Generation and evaluation of communicative robot gesture. International Journal of Social Robotics, 4(2), 201–217.
Schulman, D. & Bickmore, T. (2012). Changes in verbal and nonverbal conversational behavior in long-term interaction. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (pp. 11–18).
Shapiro, A. (2011). Building a character animation system. Lecture Notes in Computer Science, 7060, 98–109.
Snibbe, S., Scheeff, M., & Rahardja, K. (1999). A layered architecture for lifelike robotic motion. In Proceedings of the 9th International Conference on Advanced Robotics, October.
Sun, X. & Nijholt, A. (2011). Multimodal embodied mimicry in interaction. Lecture Notes in Computer Science, 6800, 147–153.
Thiebaux, M., Marsella, S., Marshall, A. N., & Kallmann, M. (2008). SmartBody: Behavior realization for embodied conversational agents. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 151–158).
Thomas, F. & Johnston, O. (1995). Disney Animation: The Illusion of Life. New York: Abbeville Press.
Torta, E., Cuijpers, R., Juola, J., & Van der Pol, D. (2011). Design of robust robotic proxemic behaviour. Lecture Notes in Computer Science, 7072, 21–30.
Vilhjálmsson, H., Cantelmo, N., Cassell, J., et al. (2007). The behavior markup language: Recent developments and challenges. In Proceedings of the 7th International Conference on Intelligent Virtual Agents (pp. 99–111).
Wallbott, H. (1998). Bodily expression of emotion. European Journal of Social Psychology, 28(6), 879–896.
Walters, M. L., Dautenhahn, K., Te Boekhorst, R., et al. (2009). An empirical framework for human–robot proxemics. In Proceedings of New Frontiers in Human–Robot Interaction: Symposium at the AISB09 Convention (pp. 144–149).
Yumak, Z., Ren, J., Magnenat-Thalmann, N., & Yuan, J. (2014). Modelling multi-party interactions among virtual characters, robots and humans. Presence: Teleoperators and Virtual Environments, 23(2), 172–190.
