ABSTRACT
Music emotion analytics is useful for many human-centric applications such as medical intervention. Existing studies have shown that music is a low-risk, adjunctive, and therapeutic medical intervention. However, there is little research on which types of music, with which identified emotions, are most effective for different medical applications. We aim to discover various music emotions through machine learning analytics, to identify models of how music conveys emotional features, and to determine their effectiveness for medical intervention and treatment. We are developing a Human-centric Music Medical Therapy Exploration System that recognizes music emotion features in a Chinese folk music library and recommends suitable music for playback during medical intervention and treatment. Our networked system uses the Support Vector Machine (SVM) algorithm to construct models for music emotion recognition and information retrieval. Our main contributions are as follows. First, we built a Chinese folk music emotion and feature library. Second, we evaluated and compared alternative algorithms such as Back Propagation (BP) and Linear Regression for model construction in music emotion recognition, and showed that SVM performs best. Third, we integrated blood pressure and heartbeat data analytics into our system to visualize emotional fluctuation under different musical affects and to recommend suitable human-centric music medical therapy for hypertensive patients.
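The model-comparison step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the acoustic feature vectors and four emotion classes are synthetic stand-ins for the Chinese folk music feature library, and logistic regression stands in here for the linear baseline the abstract mentions.

```python
# Sketch: train an SVM emotion classifier on audio feature vectors and
# compare it against a linear baseline, mirroring the evaluation the
# abstract describes. All data below is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "acoustic features" (e.g. MFCC-like statistics), 4 emotion classes.
n_per_class, n_features = 60, 12
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# RBF-kernel SVM vs. a linear baseline classifier.
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

svm_acc = accuracy_score(y_te, svm.predict(X_te))
base_acc = accuracy_score(y_te, baseline.predict(X_te))
print(f"SVM accuracy: {svm_acc:.2f}, linear baseline: {base_acc:.2f}")
```

In a real pipeline the feature matrix would come from an audio feature extractor rather than a random generator, and model selection would use cross-validation on the labeled music library.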
Index Terms
- Human-centric music medical therapy exploration system