
An Experimentation to Measure the Influence of Music on Emotions

  • Conference paper
HCI in Mobility, Transport, and Automotive Systems (HCII 2023)

Abstract

Several emotion-adaptive system frameworks have been proposed to enable listeners’ emotional regulation through music reproduction. However, the majority of these frameworks have been implemented only under in-lab or in-car conditions, in the latter case focusing on improving driving performance. Therefore, to the authors’ best knowledge, no research has been conducted in mobility settings such as trains, planes, or yachts. Focusing on this aspect, the proposed approach reports the results obtained from a study of the relationship between listeners’ induced emotions and music reproduction, exploiting an advanced audio system and an innovative technology for facial expression recognition. Starting from an experiment in a university lab scenario, with 15 listeners, and a yacht cabin scenario, with 11 listeners, participants’ emotional variability was investigated in depth by reproducing 4 audio-enhanced music tracks, to evaluate the listeners’ emotional “sensitivity” to music stimuli. The experimental results indicated that, during reproduction in the university lab, listeners’ “happiness” and “anger” states were highly affected by the music stimuli, and they highlighted a possible relationship between music and listeners’ compound emotions. Furthermore, listeners’ emotional engagement proved to be more affected by music stimuli in the yacht cabin than in the university lab.
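The abstract’s notion of emotional “sensitivity” to a music stimulus can be illustrated with a minimal sketch: given per-frame scores for one emotion produced by a facial expression recognizer while a track plays, one simple variability measure is the spread of that score over the track. The data values and the choice of standard deviation as the measure are illustrative assumptions, not the paper’s actual method or data.

```python
import statistics

# Hypothetical frame-level "happiness" scores emitted by a facial
# expression recognizer during one music track (values are invented).
happiness = [0.12, 0.35, 0.41, 0.38, 0.55, 0.61, 0.48, 0.52]

# Baseline level and one simple variability ("sensitivity") measure:
# the mean and the sample standard deviation over the track.
baseline = statistics.mean(happiness)
sensitivity = statistics.stdev(happiness)

print(f"mean happiness: {baseline:.3f}")
print(f"variability (stdev): {sensitivity:.3f}")
```

Comparing such per-emotion variability across conditions (e.g., lab vs. yacht cabin) is one plausible way to quantify how strongly a stimulus modulates a listener’s expressed emotion.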

This work is supported by Marche Region in implementation of the financial programme POR MARCHE FESR 2014–2020, project “Miracle” (Marche Innovation and Research fAcilities for Connected and sustainable Living Environments), CUP B28I19000330007.




Author information


Correspondence to Andrea Generosi.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Generosi, A., Caresana, F., Dourou, N., Bruschi, V., Cecchi, S., Mengoni, M. (2023). An Experimentation to Measure the Influence of Music on Emotions. In: Krömker, H. (ed.) HCI in Mobility, Transport, and Automotive Systems. HCII 2023. Lecture Notes in Computer Science, vol. 14049. Springer, Cham. https://doi.org/10.1007/978-3-031-35908-8_11


  • DOI: https://doi.org/10.1007/978-3-031-35908-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35907-1

  • Online ISBN: 978-3-031-35908-8

  • eBook Packages: Computer Science (R0)
