Review of data features-based music emotion recognition methods

  • Regular Paper
  • Published in: Multimedia Systems

Abstract

The ability of music to induce or convey emotions ensures its important role in human life. Consequently, research on methods for identifying the high-level emotional state of a music segment from its low-level features has attracted attention. This paper offers new insights into music emotion recognition methods by categorizing them according to the combination of data features used during the modeling phase, from three perspectives: music features only, ground-truth data only, and their combination, and provides a comprehensive review of each category. Focusing on the relatively popular methods that combine the two types of data, music features and ground-truth data, we further subdivide the literature according to label-type and numerical-type ground-truth data, and trace the development of music emotion recognition in terms of modeling methods and chronology. We then summarize three important current research directions. Although much has been achieved in music emotion recognition, many issues remain; we review these issues and put forward suggestions for future work.
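To make the distinction between the two ground-truth types concrete, the following is a minimal, hypothetical sketch (not taken from the paper or any surveyed system): label-type ground truth treated as classification, here with a simple k-nearest-neighbor vote over low-level features, and numerical-type ground truth treated as regression onto valence–arousal values via least squares. All feature vectors, labels, and valence–arousal values are synthetic placeholders, not drawn from any dataset in the survey.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Label-type MER: classify one feature vector by majority vote
    of its k nearest training examples (Euclidean distance)."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [train_y[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy "low-level features": [normalized tempo, mode (major=1), brightness]
train_X = np.array([
    [0.9, 1.0, 0.8],   # fast, major, bright
    [0.8, 1.0, 0.7],
    [0.2, 0.0, 0.3],   # slow, minor, dark
    [0.3, 0.0, 0.2],
])
train_y = ["happy", "happy", "sad", "sad"]

print(knn_predict(train_X, train_y, np.array([0.85, 1.0, 0.75])))  # → happy

# Numerical-type MER: regress [valence, arousal] targets with least squares.
va = np.array([[0.8, 0.9], [0.7, 0.8], [-0.6, -0.5], [-0.5, -0.6]])
W, *_ = np.linalg.lstsq(np.c_[train_X, np.ones(4)], va, rcond=None)
print(np.c_[np.array([[0.85, 1.0, 0.75]]), [1.0]] @ W)  # predicted [valence, arousal]
```

Both branches share the same feature matrix, which mirrors the survey's point that the choice between classification and regression is driven by the form of the ground-truth annotations rather than by the music features themselves.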



Notes

  1. According to the psychologists’ definition [78], we use the term “music emotion” to represent the audience’s perceptual emotion of music.

  2. http://www.music-ir.org/mirex/wiki/MIREX_HOME.

  3. https://iTunes.Apple.com/HK/app/sensbeat/id725472587?Mt=8.

  4. https://iTunes.Apple.com/HK/app/stereomood-tuning-my-emotions/id524634435?Mt=8.

  5. http://cvml.unige.ch/databases/DEAM/.

References

  1. Ahsan, H., Kumar, V., Jawahar, C.: Multi-label annotation of music. 2015 Eighth International Conference on Advances in Pattern Recognition (ICAPR), pp. 1–5. IEEE, New York (2015)

  2. Balkwill, L.L., Thompson, W.F.: A cross-cultural investigation of the perception of emotion in music: psychophysical and cultural cues. Music Percept. Interdiscip. J. 17(1), 43–64 (1999)

  3. Barrington, L., O’Malley, D., Turnbull, D., Lanckriet, G.: User-centered design of a social game to tag music. In: Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 7–10. ACM, New York (2009)

  4. Barthet, M., Fazekas, G., Sandler, M.: Music emotion recognition: from content-to context-based models. In: International Symposium on Computer Music Modeling and Retrieval, pp. 228–252. Springer, Berlin (2012)

  5. Bartoszewski, M., Kwasnicka, H., Markowska-Kaczmar, U., Myszkowski, P.B.: Extraction of emotional content from music data. In: Computer Information Systems and Industrial Management Applications, 2008. CISIM’08. 7th, pp. 293–299. IEEE, New York (2008)

  6. Bernatzky, G., Presch, M., Anderson, M., Panksepp, J.: Emotional foundations of music as a non-pharmacological pain management tool in modern medicine. Neurosci. Biobehav. Rev. 35(9), 1989–1999 (2011)

  7. Chapaneri, S., Lopes, R., Jayaswal, D.: Evaluation of music features for PUK kernel based genre classification. Procedia Comput. Sci. 45, 186–196 (2015)

  8. Chen, S.H., Lee, Y.S., Hsieh, W.C., Wang, J.C.: Music emotion recognition using deep gaussian process. In: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), pp. 495–498. IEEE, New York (2015)

  9. Chen, Y.A., Wang, J.C., Yang, Y.H., Chen, H.: Linear regression-based adaptation of music emotion recognition models for personalization. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2149–2153. IEEE, New York (2014)

  10. Chen, Y.A., Yang, Y.H., Wang, J.C., Chen, H.: The amg1608 dataset for music emotion recognition. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 693–697. IEEE, New York (2015)

  11. Chin, Y.H., Lin, C.H., Siahaan, E., Wang, I.C., Wang, J.C.: Music emotion classification using double-layer support vector machines. In: 2013 International Conference on Orange Technologies (ICOT), pp. 193–196. IEEE, New York (2013)

  12. Chin, Y.H., Lin, P.C., Tai, T.C., Wang, J.C.: Genre based emotion annotation for music in noisy environment. In: Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on, pp. 863–866. IEEE, New York (2015)

  13. Corrêa, D.C., Rodrigues, F.A.: A survey on symbolic data-based music genre classification. Expert Syst. Appl. 60, 190–210 (2016)

  14. Deng, J.J., Leung, C.H., Milani, A., Chen, L.: Emotional states associated with music: classification, prediction of changes, and consideration in recommendation. ACM Trans. Interact. Intell. Syst. (TiiS) 5(1), 1–36 (2015)

  15. Dewi, K.C., Harjoko, A.: Kid’s song classification based on mood parameters using k-nearest neighbor classification method and self organizing map. In: 2010 International Conference on Distributed Framework and Applications (DFmA), pp. 1–5. IEEE, New York (2010)

  16. Dingle, G.A., Kelly, P.J., Flynn, L.M., Baker, F.A.: The influence of music on emotions and cravings in clients in addiction treatment: a study of two clinical samples. Arts Psychother. 45, 18–25 (2015)

  17. Dobashi, A., Ikemiya, Y., Itoyama, K., Yoshii, K.: A music performance assistance system based on vocal, harmonic, and percussive source separation and content visualization for music audio signals. In: Proceedings of SMC, pp. 99–104 (2015)

  18. Dornbush, S., Fisher, K., McKay, K., Prikhodko, A., Segall, Z.: Xpod-a human activity and emotion aware mobile music player. In: 2005 2nd Asia Pacific Conference on Mobile Technology, Applications and Systems, pp. 1–6. IEEE, New York (2005)

  19. Downie, J.S.: The music information retrieval evaluation exchange (2005–2007): a window into music information retrieval research. Acoust. Sci. Technol. 29(4), 247–255 (2008)

  20. Hu, X., Downie, J.S., Laurier, C., Bay, M., Ehmann, A.F.: The 2007 MIREX audio mood classification task: lessons learned. In: Proc. 9th Int. Conf. Music Inf. Retrieval, pp. 462–467 (2008)

  21. Eerola, T., Vuoskoski, J.K.: A comparison of the discrete and dimensional models of emotion in music. Psychol. Music 39(1), 18–49 (2011)

  22. Fan, S., Tan, C., Fan, X., Su, H., Zhang, J.: Heartplayer: a smart music player involving emotion recognition, expression and recommendation. In: International Conference on Multimedia Modeling, pp. 483–485. Springer, Berlin (2011)

  23. Feng, Y., Zhuang, Y., Pan, Y.: Popular music retrieval by detecting mood. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval, pp. 375–376. ACM, New York (2003)

  24. Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A.D., Koelsch, S.: Universal recognition of three basic emotions in music. Curr. Biol. 19(7), 573–576 (2009)

  25. Fu, Z., Lu, G., Ting, K.M., Zhang, D.: A survey of audio-based music classification and annotation. IEEE Trans. Multimed. 13(2), 303–319 (2011)

  26. Gabrielsson, A.: Emotion perceived and emotion felt: same or different? Musicae Scientiae 5(1 suppl), 123–147 (2002)

  27. Gabrielsson, A., Lindström, E.: The influence of musical structure on emotional expression. Oxford University Press (2001)

  28. Goyal, S., Kim, E.: Application of fuzzy relational interval computing for emotional classification of music. In: 2014 IEEE Conference on Norbert Wiener in the 21st Century (21CW), pp. 1–8. IEEE, New York (2014)

  29. Grimaldi, M., Cunningham, P.D., Kokaram, A.: Discrete wavelet packet transform and ensembles of lazy and eager learners for music genre classification. Multimed. Syst. 11(5), 422–437 (2006)

  30. He, H., Chen, B., Guo, J.: Emotion recognition of pop music based on maximum entropy with priors. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 788–795. Springer, Berlin (2009)

  31. Hevner, K.: Experimental studies of the elements of expression in music. Am. J. Psychol. 48(2), 246–268 (1936)

  32. Hu, X., Yang, Y.H.: Cross-cultural mood regression for music digital libraries. In: Proceedings of the 14th ACM/IEEE-CS Joint Conference on Digital Libraries, pp. 471–472. IEEE Press, New York (2014)

  33. Hu, X., Yang, Y.H.: Cross-dataset and cross-cultural music mood prediction: a case on western and Chinese pop songs. IEEE Trans. Affect. Comput. 8(2), 228–240 (2017)

  34. Imbrasaitė, V., Baltrušaitis, T., Robinson, P.: Emotion tracking in music using continuous conditional random fields and relative feature representation. In: 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 1–6. IEEE, New York (2013)

  35. Janssen, J.H., van den Broek, E.L., Westerink, J.H.: Tune in to your emotions: a robust personalized affective music player. User Model. User-Adapt. Interact. 22(3), 255–279 (2012)

  36. Jun, S., Rho, S., Han, B.J., Hwang, E.: A fuzzy inference-based music emotion recognition system. In: 5th International Conference on Visual Information Engineering, 2008. VIE 2008, pp. 673–677. IET, Stevenage (2008)

  37. Juslin, P.N.: Cue utilization in communication of emotion in music performance: relating performance to perception. J. Exp. Psychol. Hum. Percept. Perform. 26(6), 1797–1812 (2000)

  38. Juslin, P.N., Sloboda, J.A.: Music and emotion: theory and research. Oxford University Press (2001)

  39. Katayose, H., Imai, M., Inokuchi, S.: Sentiment extraction in music. In: 9th International Conference on Pattern Recognition, 1988, pp. 1083–1087. IEEE, New York (1988)

  40. Kim, J., Lee, S., Kim, S., Yoo, W.Y.: Music mood classification model based on arousal–valence values. In: 2011 13th International Conference on Advanced Communication Technology (ICACT), pp. 292–295. IEEE, New York (2011)

  41. Kim, M., Kwon, H.C.: Lyrics-based emotion classification using feature selection by partial syntactic analysis. In: 2011 IEEE 23rd International Conference on Tools with Artificial Intelligence, pp. 960–964. IEEE, New York (2011)

  42. Kim, Y.E., Schmidt, E.M., Emelle, L.: Moodswings: a collaborative game for music mood label collection. ISMIR 8, 231–236 (2008)

  43. Kim, Y.E., Schmidt, E.M., Migneco, R., Morton, B.G., Richardson, P., Scott, J., Speck, J.A., Turnbull, D.: Music emotion recognition: a state of the art review. In: Proc. ISMIR, pp. 255–266. Citeseer (2010)

  44. Koelstra, S., Muhl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., Patras, I.: Deap: a database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 3(1), 18–31 (2012)

  45. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proc. Eighteenth Int. Conf. Mach. Learn. ICML 1, 282–289 (2001)

  46. Laurier, C., Herrera, P., Mandel, M., Ellis, D.: Audio music mood classification using support vector machine. MIREX Task Audio Mood Classif. 2–4 (2007)

  47. Law, E.L., Von Ahn, L., Dannenberg, R.B., Crawford, M.: Tagatune: a game for music and sound annotation. In: ISMIR, vol. 3, p. 2 (2007)

  48. Lee, C.C., Mower, E., Busso, C., Lee, S., Narayanan, S.: Emotion recognition using a hierarchical binary decision tree approach. Speech Commun. 53(9), 1162–1171 (2011)

  49. Li, J., Gao, S., Han, N., Fang, Z., Liao, J.: Music mood classification via deep belief network. In: 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pp. 1241–1245. IEEE, New York (2015)

  50. Li, T., Ogihara, M.: Detecting emotion in music. ISMIR 3, 239–240 (2003)

  51. Liu, D., Lu, L., Zhang, H.J.: Automatic mood detection from acoustic music data. In: Proceedings of Ismir, pp. 81–87 (2003)

  52. Liu, J.Y., Liu, S.Y., Yang, Y.H.: Lj2m dataset: toward better understanding of music listening behavior and user mood. In: 2014 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. IEEE, New York (2014)

  53. Lu, L., Liu, D., Zhang, H.J.: Automatic mood detection and tracking of music audio signals. IEEE Trans. Audio Speech Lang. Process. 14(1), 5–18 (2006)

  54. Ma, A., Sethi, I., Patel, N.: Multimedia content tagging using multilabel decision tree. In: International Symposium on Multimedia, pp. 606–611 (2009)

  55. MacDorman, K.F., Ho, C.C.: Automatic emotion prediction of song excerpts: index construction, algorithm design, and empirical comparison. J. N. Music Res. 36(4), 281–299 (2007)

  56. Madsen, J., Jensen, B.S., Larsen, J.: Learning combinations of multiple feature representations for music emotion prediction. In: Proceedings of the 1st International Workshop on Affect & Sentiment in Multimedia, pp. 3–8. ACM, New York (2015)

  57. McAdams, S., Giordano, B.L.: The Oxford handbook of music psychology. Jew. Q. Rev. 11, 72–80 (2009)

  58. McKay, C.: Automatic genre classification of midi recordings. Ph.D. thesis, McGill University (2004)

  59. McKay, C., Fujinaga, I.: Automatic genre classification using large high-level musical feature sets. In: ISMIR, vol. 2004, pp. 525–530. Citeseer (2004)

  60. Mehrabian, A.: Pleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperament. Curr. Psychol. 14(4), 261–292 (1996)

  61. Morton, B.G., Speck, J.A., Schmidt, E.M., Kim, Y.E.: Improving music emotion labeling using human computation. In: Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 45–48. ACM, New York (2010)

  62. Myint, E.E.P., Pwint, M.: An approach for multi-label music mood classification. In: 2010 2nd International Conference on Signal Processing Systems (ICSPS), vol. 1, pp. V1–290. IEEE, New York (2010)

  63. Neocleous, A., Ramirez, R., Perez, A., Maestre, E.: Modeling emotions in violin audio recordings. In: Proceedings of 3rd International Workshop on Machine Learning and Music, pp. 17–20. ACM, New York (2010)

  64. Nguyen, C.T., Zhan, D.C., Zhou, Z.H.: Multi-modal image annotation with multi-instance multi-label lda. In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pp. 1558–1564. AAAI Press, Cambridge (2013)

  65. Panda, R., Paiva, R.P.: Using support vector machines for automatic mood tracking in audio music. Audio Engineering Society Convention, pp. 1–8. Audio Engineering Society, New York (2011)

  66. Pao, T.L., Cheng, Y.M., Yeh, J.H., Chen, Y.T., Pai, C.Y., Tsai, Y.W.: Comparison between weighted d-knn and other classifiers for music emotion recognition. In: 3rd International Conference on Innovative Computing Information and Control, 2008. ICICIC’08, pp. 530–530. IEEE, New York (2008)

  67. Park, S.H., Ihm, S.Y., Jang, W.I., Nasridinov, A., Park, Y.H.: A music recommendation method with emotion recognition using ranked attributes. Computer Science and its Applications, pp. 1065–1070. Springer, Berlin (2015)

  68. Patra, B.G., Das, D., Bandyopadhyay, S.: Unsupervised approach to hindi music mood classification. Mining Intelligence and Knowledge Exploration, pp. 62–69. Springer, Berlin (2013)

  69. Posner, J., Russell, J.A., Peterson, B.S.: The circumplex model of affect: an integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev. Psychopathol. 17(03), 715–734 (2005)

  70. Raykar, V.C., Yu, S., Zhao, L.H., Valadez, G.H., Florin, C., Bogoni, L., Moy, L.: Learning from crowds. J. Mach. Learn. Res. 11, 1297–1322 (2010)

  71. Russell, J.A.: A circumplex model of affect. J. Pers. Soc. Psychol. 39(6), 1161–1178 (1980)

  72. Schmidt, E.M., Kim, Y.E.: Prediction of time-varying musical mood distributions from audio. In: International Society for Music Information Retrieval Conference, Ismir 2010, Utrecht, The Netherlands, August, pp. 465–470 (2010)

  73. Schmidt, E.M., Kim, Y.E.: Prediction of time-varying musical mood distributions using kalman filtering. In: 2010 Ninth International Conference on Machine Learning and Applications (ICMLA), pp. 655–660. IEEE, New York (2010)

  74. Schmidt, E.M., Kim, Y.E.: Learning emotion-based acoustic features with deep belief networks. In: 2011 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 65–68. IEEE, New York (2011)

  75. Schmidt, E.M., Turnbull, D., Kim, Y.E.: Feature selection for content-based, time-varying musical emotion regression. In: Proceedings of the International Conference on Multimedia Information Retrieval, pp. 267–274. ACM, New York (2010)

  76. Schubert, E.: Update of the Hevner adjective checklist. Percept. Motor Skills 96(3 suppl), 1117–1122 (2003)

  77. Schubert, E.: Modeling perceived emotion with continuous musical features. Music Percept. Interdiscipl. J. 21(4), 561–585 (2004)

  78. Sloboda, J.A., Juslin, P.N.: Psychological perspectives on music and emotion. Oxford University Press (2001)

  79. Soleymani, M., Caro, M.N., Schmidt, E.M., Sha, C.Y., Yang, Y.H.: 1000 songs for emotional analysis of music. In: Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia, pp. 1–6. ACM, New York (2013)

  80. Speck, J.A., Schmidt, E.M., Morton, B.G., Kim, Y.E.: A comparative study of collaborative vs. traditional musical mood annotation. In: ISMIR, pp. 549–554. Citeseer (2011)

  81. Thayer, R.E.: Toward a psychological theory of multidimensional activation (arousal). Motiv. Emot. 2(1), 1–34 (1978)

  82. Thayer, R.E., McNally, R.J.: The biopsychology of mood and arousal. Cognit. Behav. Neurol. 5(1), 65 (1992)

  83. Tomo, T.P., Enriquez, G., Hashimoto, S.: Indonesian puppet theater robot with gamelan music emotion recognition. In: 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1177–1182. IEEE, New York (2015)

  84. Trohidis, K., Tsoumakas, G., Kalliris, G., Vlahavas, I.P.: Multi-label classification of music into emotions. ISMIR 8, 325–330 (2008)

  85. Turnbull, D., Barrington, L., Torres, D., Lanckriet, G.: Towards musical query-by-semantic-description using the cal500 data set. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 439–446. ACM, New York (2007)

  86. Ujlambkar, A.M., Attar, V.Z.: Mood classification of Indian popular music. In: Proceedings of the CUBE International Information Technology Conference, pp. 278–283. ACM, New York (2012)

  87. Van De Laar, B.: Emotion detection in music, a survey. Twente Stud. Conf. IT 1, 1–7 (2006)

  88. Von Ahn, L.: Games with a purpose. Computer 39(6), 92–94 (2006)

  89. Wang, J.C., Yang, Y.H., Chang, K., Wang, H.M., Jeng, S.K.: Exploring the relationship between categorical and dimensional emotion semantics of music. In: Proceedings of the Second International ACM Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies, pp. 63–68. ACM, New York (2012)

  90. Wang, J.C., Yang, Y.H., Jhuo, I.H., Lin, Y.Y., Wang, H.M., et al.: The acousticvisual emotion gaussians model for automatic generation of music video. In: Proceedings of the 20th ACM International Conference on Multimedia, pp. 1379–1380. ACM, New York (2012)

  91. Wang, J.C., Yang, Y.H., Wang, H.M., Jeng, S.K.: The acoustic emotion gaussians model for emotion-based music annotation and retrieval. In: Proceedings of the 20th ACM International Conference on Multimedia, pp. 89–98. ACM, New York (2012)

  92. Wang, J.C., Yang, Y.H., Wang, H.M., Jeng, S.K.: Personalized music emotion recognition via model adaptation. In: Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific, pp. 1–7. IEEE, New York (2012)

  93. Wang, J.C., Yang, Y.H., Wang, H.M., Jeng, S.K.: Modeling the affective content of music with a gaussian mixture model. IEEE Trans. Affect. Comput. 6(1), 56–68 (2015)

  94. Wieczorkowska, A., Synak, P., Raś, Z.W.: Multi-label classification of emotions in music. In: Intelligent Information Processing and Web Mining, pp. 307–315. Springer, Berlin (2006)

  95. Wu, B., Zhong, E., Horner, A., Yang, Q.: Music emotion recognition by multi-label multi-layer multi-instance multi-view learning. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 117–126. ACM, New York (2014)

  96. Wu, T.L., Jeng, S.K.: Probabilistic estimation of a novel music emotion model. In: International Conference on Multimedia Modeling, pp. 487–497. Springer, Berlin (2008)

  97. Wu, W., Xie, L.: Discriminating mood taxonomy of Chinese traditional music and western classical music with content feature sets. In: Congress on Image and Signal Processing, 2008. CISP’08, vol. 5, pp. 148–152. IEEE, New York (2008)

  98. Xiao, Z., Dellandrea, E., Dou, W., Chen, L.: What is the best segment duration for music mood analysis? In: 2008 International Workshop on Content-Based Multimedia Indexing, pp. 17–24. IEEE, New York (2008)

  99. Xu, J., Li, X., Hao, Y., Yang, G.: Source separation improves music emotion recognition. In: Proceedings of International Conference on Multimedia Retrieval, pp. 423–426. ACM, New York (2014)

  100. Xue, H., Xue, L., Su, F.: Multimodal music mood classification by fusion of audio and lyrics. In: International Conference on Multimedia Modeling, pp. 26–37. Springer, Berlin (2015)

  101. Yang, D., Chen, X., Zhao, Y.: A lda-based approach to lyric emotion regression. Knowledge Engineering and Management, pp. 331–340. Springer, Berlin (2011)

  102. Yang, D., Lee, W.S.: Disambiguating music emotion using software agents. ISMIR 4, 218–223 (2004)

  103. Yang, Y.H., Chen, H.H.: Music Emotion Recognition. CRC Press, Boca Raton (2011)

  104. Yang, Y.H., Chen, H.H.: Ranking-based emotion recognition for music organization and retrieval. IEEE Trans. Audio Speech Lang. Process. 19(4), 762–774 (2011)

  105. Yang, Y.H., Chen, H.H.: Machine recognition of music emotion: a review. ACM Trans. Intell. Syst. Technol. (TIST) 3(3), 40 (2012)

  106. Yang, Y.H., Hu, X.: Cross-cultural music mood classification: a comparison on English and Chinese songs. Int. Soc. Music Inf. Retr. Conf. Ismir 2012, 19–24 (2012)

  107. Yang, Y.H., Lin, Y.C., Su, Y.F., Chen, H.H.: Music emotion classification: a regression approach. In: IEEE International Conference on Multimedia and Expo, pp. 208–211. IEEE, New York (2007)

  108. Yang, Y.H., Lin, Y.C., Su, Y.F., Chen, H.H.: A regression approach to music emotion recognition. IEEE Trans. Audio Speech Lang. Process. 16(2), 448–457 (2008)

  109. Yang, Y.H., Liu, C.C., Chen, H.H.: Music emotion classification: a fuzzy approach. In: Proceedings of the 14th ACM International Conference on Multimedia, pp. 81–84. ACM, New York (2006)

  110. Yang, Y.H., Su, Y.F., Lin, Y.C., Chen, H.H.: Music emotion recognition: the role of individuality. In: Proceedings of the International Workshop on Human-Centered Multimedia, pp. 13–22. ACM, New York (2007)

  111. Yazdani, A., Kappeler, K., Ebrahimi, T.: Affective content analysis of music video clips. In: Proceedings of the 1st International ACM Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies, pp. 7–12. ACM, New York (2011)

  112. Yoon, J.H., Cho, J.H., Lee, J.K., Lee, H.E., et al.: Method and apparatus for visualizing music information. US Patent 20,160,035,323 (2016)

  113. Yu, Y.C., You, S.D., Tsai, D.R.: Magic mirror table for social-emotion alleviation in the smart home. IEEE Trans. Consum. Electron. 58(1), 126–131 (2012)

  114. Zhang, J.L., Huang, X.L., Yang, L.F., Xu, Y., Sun, S.T.: Feature selection and feature learning in arousal dimension of music emotion by using shrinkage methods. Multimedia Syst. 23(2), 251–264 (2017)

Author information

Corresponding author

Correspondence to Juan Li.

Additional information

Communicated by B. Prabhakaran.

Appendix

See Table 4.

Table 4 Summary of the literature in MER (studies between 2003 and 2017)

About this article

Cite this article

Yang, X., Dong, Y. & Li, J. Review of data features-based music emotion recognition methods. Multimedia Systems 24, 365–389 (2018). https://doi.org/10.1007/s00530-017-0559-4

