A Cross-Cultural Study on the Perception of Emotions: How Hungarian Subjects Evaluate American and Italian Emotional Expressions

  • Conference paper
Cognitive Behavioural Systems

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7403)

Abstract

This work presents a cross-modal evaluation of the visual and auditory channels in conveying emotional information. Perceptual experiments investigate whether some of the basic emotions are perceptually privileged, and whether the perceptual mode, the cultural environment, and the language play a role in this preference. To this aim, Hungarian subjects were asked to assess emotional stimuli extracted from Italian and American movies, presented in a single mode (either mute video or audio alone) and in the combined audio-video mode. Results showed that, among the proposed emotions, anger plays a special role, and that fear, happiness, and sadness are better perceived than surprise and irony in both cultural environments. The perception of emotions is affected by the communication mode, and the language influences the perceptual assessment of emotional information.
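The evaluation design summarized above (stimuli judged in audio-only, video-only, and combined audio-video conditions) reduces, at its simplest, to a per-condition recognition-rate tally. The sketch below is purely illustrative — the toy responses, field layout, and emotion labels are assumptions, not the authors' actual materials or analysis:

```python
from collections import defaultdict

# Hypothetical subject responses: (presented_emotion, perceived_emotion, mode).
# The three modes mirror the study's conditions: audio alone, mute video,
# and combined audio-video.
responses = [
    ("anger", "anger", "audio"),
    ("anger", "anger", "video"),
    ("fear", "fear", "audio-video"),
    ("surprise", "irony", "audio"),
    ("sadness", "sadness", "video"),
    ("irony", "surprise", "audio-video"),
]

def recognition_rates(responses):
    """Fraction of correct identifications per (emotion, mode) condition."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for presented, perceived, mode in responses:
        totals[(presented, mode)] += 1
        hits[(presented, mode)] += presented == perceived
    return {key: hits[key] / totals[key] for key in totals}

rates = recognition_rates(responses)
print(rates[("anger", "audio")])  # 1.0 for this toy sample
```

Comparing such rates across modes and emotion categories is what lets a study of this kind conclude, for instance, that some emotions are recognized better in one channel than another.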



Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Riviello, M.T., Esposito, A., Vicsi, K. (2012). A Cross-Cultural Study on the Perception of Emotions: How Hungarian Subjects Evaluate American and Italian Emotional Expressions. In: Esposito, A., Esposito, A.M., Vinciarelli, A., Hoffmann, R., Müller, V.C. (eds) Cognitive Behavioural Systems. Lecture Notes in Computer Science, vol 7403. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34584-5_38

  • DOI: https://doi.org/10.1007/978-3-642-34584-5_38

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34583-8

  • Online ISBN: 978-3-642-34584-5

  • eBook Packages: Computer Science (R0)
