A Multimodal Corpus Approach for the Study of Spontaneous Emotions

Chapter in the book Affective Information Processing

Abstract

The design of future interactive affective computing systems requires the representation of spontaneous emotions and their associated multimodal signs. Current prototypes are often limited to the detection and synthesis of a few primary emotions, and they are usually grounded in acted data collected in the laboratory. To model the sophisticated relations between spontaneous emotions and their expressions in different modalities, we defined an exploratory approach: we collected and annotated a TV corpus of interviews. The collected data displayed emotions that are more complex than the six basic emotions (anger, fear, joy, sadness, surprise, disgust); we observed superpositions, masking, and conflicts between positive and negative emotions. We report several experiments that provided answers to questions such as how to reliably annotate and represent the multimodal signs of spontaneous complex emotions, and at which levels of abstraction and temporality.
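
The chapter contains no code, but the kind of annotation it describes can be illustrated with a small sketch. Below is a minimal, hypothetical Python representation of a blended-emotion annotation for one video segment; every name here (SegmentAnnotation, BlendType, the cue vocabulary) is an illustrative assumption, not the authors' actual annotation scheme.

```python
# Hypothetical sketch: representing one annotated video segment that carries a
# blend of emotions across modalities. Field names are illustrative only.
from dataclasses import dataclass, field
from enum import Enum

class BlendType(Enum):
    SUPERPOSITION = "superposition"  # two emotions expressed simultaneously
    MASKING = "masking"              # a felt emotion hidden behind a displayed one
    CONFLICT = "conflict"            # contradictory cues across modalities

@dataclass
class EmotionLabel:
    category: str      # e.g. "anger", "joy"; not limited to the six basic emotions
    intensity: float   # normalized to 0..1

@dataclass
class SegmentAnnotation:
    start_s: float                        # segment start time in seconds
    end_s: float                          # segment end time in seconds
    emotions: list[EmotionLabel]          # one or more co-occurring labels
    blend: BlendType | None = None        # how the labels combine, if more than one
    modal_cues: dict[str, list[str]] = field(default_factory=dict)
    # e.g. {"face": ["smile"], "speech": ["trembling voice"], "gesture": ["self-touch"]}

# Example: sadness masked behind a displayed smile, as in the blends discussed above.
segment = SegmentAnnotation(
    start_s=12.4, end_s=17.9,
    emotions=[EmotionLabel("sadness", 0.7), EmotionLabel("joy", 0.3)],
    blend=BlendType.MASKING,
    modal_cues={"face": ["smile"], "speech": ["low pitch", "pauses"]},
)
```

Allowing a list of labels plus an explicit blend type, rather than a single category, is what lets such a structure capture the superpositions, masking, and conflicts the corpus revealed.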

We also defined a copy-synthesis approach in which these behaviors were annotated, represented, and replayed by an expressive agent, enabling validation and refinement of our annotations. We further studied individual differences in the perception of these blends of emotions. These experiments enabled us to identify and define several levels of representation of emotions, and of their associated expressions, that are relevant for spontaneous complex emotions.
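
As a rough illustration of the copy-synthesis loop just described (annotate, represent, replay on an agent, collect perceptual judgments), here is a hedged sketch reusing the hypothetical SegmentAnnotation type from the previous snippet. The replay and rate callbacks stand in for an expressive agent and for human judges; none of these function names come from the chapter.

```python
# Hypothetical sketch of one copy-synthesis pass: annotated behaviours are
# mapped to agent commands, replayed, and rated so annotations can be refined.
# Reuses SegmentAnnotation from the previous sketch; all names are assumed.
from typing import Callable, Iterable

def to_agent_script(segment: SegmentAnnotation) -> list[str]:
    """Map annotated multimodal cues onto illustrative agent commands."""
    return [
        f"{modality}:{cue}"                  # e.g. "face:smile", "gesture:self-touch"
        for modality, cues in segment.modal_cues.items()
        for cue in cues
    ]

def copy_synthesis_pass(
    segments: Iterable[SegmentAnnotation],
    replay: Callable[[list[str]], None],         # drives the expressive agent
    rate: Callable[[SegmentAnnotation], float],  # perceptual judgment by viewers
) -> list[tuple[SegmentAnnotation, float]]:
    """Replay each annotated segment on the agent and collect ratings;
    poorly rated segments become candidates for re-annotation."""
    results = []
    for seg in segments:
        replay(to_agent_script(seg))
        results.append((seg, rate(seg)))
    return results
```

The point of the loop is methodological: if viewers do not perceive the intended blend in the replayed behavior, the discrepancy feeds back into the annotation scheme rather than only into the agent.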






Copyright information

© 2009 Springer-Verlag London Limited

About this chapter

Cite this chapter

Martin, J.-C., & Devillers, L. (2009). A Multimodal Corpus Approach for the Study of Spontaneous Emotions. In J. Tao & T. Tan (Eds.), Affective Information Processing. Springer, London. https://doi.org/10.1007/978-1-84800-306-4_15

  • DOI: https://doi.org/10.1007/978-1-84800-306-4_15

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-84800-305-7

  • Online ISBN: 978-1-84800-306-4

  • eBook Packages: Computer Science (R0)
