
Effects of emotionally induced language sounds on brain activations for communication

  • Original Article
  • Published in Artificial Life and Robotics (2019)

Abstract

Emotions play an important role in human communication. We conducted a study to identify the effects of emotion in language sounds on brain function. The sounds of Japanese sentences spoken with and without emotion were reversed to eliminate any semantic influence on the subjects’ emotional perception. Three sets of sentences were recorded in a non-emotional tone and in happy, sad, and angry emotional tones, and each recording was reversed. The brain activities of 20 native Japanese speakers in their twenties were monitored by near-infrared spectroscopy (NIRS) while they listened to the reversed Japanese sounds with and without emotion. Our analysis of the experimental results demonstrated that almost all the brain areas monitored by the NIRS probes were activated more when the subjects listened to emotional language sounds than to non-emotional ones. In particular, the frontopolar cortex, an area associated with short-term memory, was significantly activated. Since short-term memory is known to provide important information for communication, these results suggest that the emotional aspects of language sounds are essential for successful communication and thus should be implemented in human–robot communication systems.
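To make the stimulus-preparation step concrete, the sketch below time-reverses a recorded sentence so that its words become unintelligible while the overall spectral and intensity characteristics of the signal are left intact. This is a minimal illustration in Python, not the authors’ actual pipeline; it assumes 16-bit PCM WAV recordings, and the file names and the reverse_wav helper are hypothetical.

    # Minimal sketch: time-reverse a spoken-sentence recording so its lexical
    # content is unintelligible while its acoustic character is retained.
    # Assumes 16-bit PCM WAV input; file names are hypothetical.
    import wave
    import numpy as np

    def reverse_wav(in_path, out_path):
        with wave.open(in_path, "rb") as w:
            params = w.getparams()  # nchannels, sampwidth, framerate, nframes, ...
            frames = w.readframes(w.getnframes())
        samples = np.frombuffer(frames, dtype=np.int16)
        # Group interleaved samples by frame so reversal keeps channels aligned.
        samples = samples.reshape(-1, params.nchannels)
        reversed_frames = np.ascontiguousarray(samples[::-1])
        with wave.open(out_path, "wb") as w:
            w.setparams(params)
            w.writeframes(reversed_frames.tobytes())

    reverse_wav("sentence_happy.wav", "sentence_happy_reversed.wav")

Time reversal is a common control in speech-perception experiments because it removes semantic content without altering the long-term spectrum or loudness of the recording, which is what allows the emotional and non-emotional conditions to be compared on acoustically matched footing.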



Author information

Corresponding author

Correspondence to Muhammad Nur Adilin Mohd Anuardi.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Mohd Anuardi, M.N.A., Yamazaki, A.K. Effects of emotionally induced language sounds on brain activations for communication. Artif Life Robotics 24, 312–317 (2019). https://doi.org/10.1007/s10015-019-00529-x

