
Implementing and Evaluating a Laughing Virtual Character

Published: 25 February 2017

Abstract

Laughter is a social signal capable of facilitating interaction in groups of people: it communicates interest, helps to improve creativity, and facilitates sociability. This article focuses on (i) endowing virtual characters with a computational model of laughter synthesis based on an expressivity-copying paradigm, and (ii) evaluating how the physical co-presence of the laughing character affects the user's perception of an audio stimulus and the user's mood. We adopt music as a means to stimulate laughter. Results show that the character's presence influences the user's perception of music and mood. Expressivity-copying influences the user's perception of music but has no significant impact on mood.


Supplemental Material



• Published in

  ACM Transactions on Internet Technology, Volume 17, Issue 1
  Special Issue on Affect and Interaction in Agent-based Systems and Social Media and Regular Paper
  February 2017
  213 pages
  ISSN: 1533-5399
  EISSN: 1557-6051
  DOI: 10.1145/3036639
  Editor: Munindar P. Singh

      Copyright © 2017 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 25 February 2017
      • Revised: 1 September 2016
      • Accepted: 1 September 2016
      • Received: 1 December 2015


      Qualifiers

      • research-article
      • Research
      • Refereed
