Abstract
The recognition of emotions for annotating large music datasets remains an open challenge. Most existing solutions require the audio of the songs and user or expert intervention during certain phases of the recognition process. In this paper, we propose an automatic solution that overcomes these drawbacks. It consists of a heterogeneous set of machine learning models built from Spotify’s Web data services and mining tools. To improve the accuracy of the resulting annotations, each model is specialized in recognizing one class of emotions. The models have been validated against the AcousticBrainz database and exported for integration into a music emotion recognition system, which has been used to emotionally annotate the Spotify music database of more than 30 million songs.
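The class-specialized design described in the abstract can be sketched as a one-vs-rest ensemble: one binary classifier per emotion class, each scoring a track's audio features independently. The sketch below is illustrative only, assuming scikit-learn, synthetic data, and hypothetical feature and class names (the paper's actual feature set and model types are not given here).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature set modeled loosely on Spotify-style audio features.
FEATURES = ["valence", "energy", "tempo", "danceability", "acousticness"]
EMOTIONS = ["happy", "sad", "angry", "relaxed"]  # illustrative class labels

rng = np.random.default_rng(0)
X = rng.random((400, len(FEATURES)))          # synthetic feature matrix
y = rng.integers(0, len(EMOTIONS), size=400)  # synthetic emotion labels

# One specialized binary model per emotion class (one-vs-rest):
# each model only learns to detect its own class against the rest.
models = {}
for idx, emotion in enumerate(EMOTIONS):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, (y == idx).astype(int))
    models[emotion] = clf

def annotate(track_features):
    """Return the emotion whose specialized model is most confident."""
    scores = {e: m.predict_proba(track_features.reshape(1, -1))[0, 1]
              for e, m in models.items()}
    return max(scores, key=scores.get)

print(annotate(X[0]))
```

Keeping the per-class models heterogeneous (each may be a different algorithm tuned for its class) is the design choice the abstract highlights; the random forest here is merely a placeholder.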
Acknowledgments
This work has been supported by the TIN2015-72241-EXP and TIN2017-84796-C2-2-R projects, granted by the Spanish Ministerio de Economía y Competitividad, and the DisCo-T21-17R and Affective-Lab-T25-17D projects, granted by the Aragonese Government.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
de Quirós, J.G., Baldassarri, S., Beltrán, J.R., Guiu, A., Álvarez, P. (2019). An Automatic Emotion Recognition System for Annotating Spotify’s Songs. In: Panetto, H., Debruyne, C., Hepp, M., Lewis, D., Ardagna, C., Meersman, R. (eds) On the Move to Meaningful Internet Systems: OTM 2019 Conferences. OTM 2019. Lecture Notes in Computer Science(), vol 11877. Springer, Cham. https://doi.org/10.1007/978-3-030-33246-4_23
Print ISBN: 978-3-030-33245-7
Online ISBN: 978-3-030-33246-4