Abstract
Music plays an important role in society, and its social and physiological effects give it applications well beyond entertainment and pleasure. Two active research topics in this area are music information retrieval and music emotion recognition, in which data mining and machine learning techniques are combined with musical features and annotations to extract information such as genre, instrumentation, and emotional content. In this paper, a machine learning music perception model is proposed to identify the emotional content of a given audio stream and to study the emotional effects of music. The developed model can also estimate the emotional state of a region (e.g., a city), which could be utilized in applications such as marketing and in other facets of society such as cognitive development, education, therapy, and security. The emotion recognition task is performed by mapping acoustic features of music to corresponding arousal and valence emotion indexes using a linear regression model. A radio-induced emotion dataset (RIED) is compiled from songs broadcast on the radio in four major US cities (Houston, New York, Los Angeles, and Miami) between October 21, 2017, and November 21, 2017. The RIED is then used as input to the proposed perception model to observe regional music emotion propensity.
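The abstract's core technical step, mapping acoustic features to arousal and valence indexes with linear regression, can be sketched as below. This is a minimal illustration only: the feature columns and the synthetic data are placeholders, not the paper's actual features or the RIED dataset, and the paper's exact pipeline may differ.

```python
# Hedged sketch: multi-output linear regression from acoustic features
# to [arousal, valence]. All data here is synthetic, for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature matrix: one row per song; columns stand in for acoustic
# features (e.g., tempo, spectral centroid, MFCC means -- placeholders).
n_songs, n_features = 200, 5
X = rng.normal(size=(n_songs, n_features))

# Toy targets: arousal and valence indexes squashed into (-1, 1),
# generated from a hidden linear rule plus noise so there is a signal
# for the regression to recover.
true_w = rng.normal(size=(n_features, 2))
y = np.tanh(X @ true_w + 0.1 * rng.normal(size=(n_songs, 2)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A single multi-output linear model predicts both emotion indexes.
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)  # columns: [arousal, valence]
print("Held-out R^2:", round(model.score(X_test, y_test), 3))
```

With a real dataset, `X` would come from an audio feature extractor and `y` from human arousal/valence annotations; the same `fit`/`predict` interface applies unchanged.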
Cite this article
Panwar, S., Rad, P., Choo, KK.R. et al. Are you emotional or depressed? Learning about your emotional state from your music using machine learning. J Supercomput 75, 2986–3009 (2019). https://doi.org/10.1007/s11227-018-2499-y