Abstract
Several emotion-adaptive system frameworks have been proposed to support listeners' emotional regulation through music reproduction. However, most of these frameworks have been implemented only under in-lab or in-car conditions, the latter focusing on improving driving performance. To the authors' best knowledge, no research has addressed other mobility settings, such as trains, planes, or yachts. Focusing on this gap, this paper reports the results of a study on the relationship between listeners' induced emotions and music reproduction, exploiting an advanced audio system and an innovative facial expression recognition technology. In an experiment conducted in a university lab scenario, with 15 listeners, and in a yacht cabin scenario, with 11 listeners, participants' emotional variability was investigated during the reproduction of four audio-enhanced music tracks, in order to evaluate the listeners' emotional "sensitivity" to music stimuli. The experimental results indicated that, during reproduction in the university lab, listeners' "happiness" and "anger" states were strongly affected by the music stimuli, and they highlighted a possible relationship between music and listeners' compound emotions. Furthermore, listeners' emotional engagement proved to be more affected by music stimuli in the yacht cabin than in the university lab.
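As an illustration of how such emotional variability could be quantified, the following is a minimal sketch, not the system described in the paper: it assumes frame-level emotion probabilities (Ekman's six basic emotions plus a neutral class) produced by a facial expression recognition model, and it summarizes each emotion's fluctuation over one track as the standard deviation of its probability across frames. All function names and data here are hypothetical.

```python
import numpy as np

# Hypothetical label set: Ekman's six basic emotions plus "neutral",
# matching the columns of the per-frame probability matrix below.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def emotional_variability(frame_probs: np.ndarray) -> dict:
    """Summarize how much each emotion fluctuates during one music track.

    frame_probs: array of shape (n_frames, 7), one row of emotion
    probabilities per analyzed video frame, each row summing to ~1.
    Returns, per emotion, the standard deviation of its probability over
    time -- one simple proxy for a listener's "sensitivity" to the stimulus.
    """
    stds = frame_probs.std(axis=0)
    return dict(zip(EMOTIONS, stds.round(3)))

# Usage example with synthetic data standing in for model output
# (e.g., 120 analyzed frames of one listener during one track).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(7), size=120)
print(emotional_variability(probs))
```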
This work was supported by the Marche Region under the financial programme POR MARCHE FESR 2014–2020, project "Miracle" (Marche Innovation and Research fAcilities for Connected and sustainable Living Environments), CUP B28I19000330007.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Generosi, A., Caresana, F., Dourou, N., Bruschi, V., Cecchi, S., Mengoni, M. (2023). An Experimentation to Measure the Influence of Music on Emotions. In: Krömker, H. (ed.) HCI in Mobility, Transport, and Automotive Systems. HCII 2023. Lecture Notes in Computer Science, vol. 14049. Springer, Cham. https://doi.org/10.1007/978-3-031-35908-8_11
Print ISBN: 978-3-031-35907-1
Online ISBN: 978-3-031-35908-8