A personalized music recommendation system based on electroencephalography feedback

Abstract

Numerous studies, both domestic and international, have demonstrated that music can relieve stress, and listening to music is one of the most widely used methods of stress relief today. Although stress-relief music is available on the market, different music genres affect people in different ways. Clinical findings indicate that approximately 30% of people listen to genres that are inappropriate for relaxation and, consequently, their stress levels increase. Choosing an appropriate music genre is therefore crucial to achieving stress relief. For example, a 70-year-old woman who has lived in a military community since childhood may not find general stress-relief music helpful, but when patriotic songs are played, her autonomic nervous system relaxes automatically because of her familiarity with that musical style. In short, people have different needs regarding stress-relief music. In this paper, we propose a personalized stress-relieving music recommendation system based on electroencephalography (EEG) feedback. The system comprises three main features: (a) automated music categorization, in which a new clustering algorithm, K-MeansH, preclusters the music and shortens processing time; (b) acquisition and analysis of users' EEG data to identify the music each user perceives as stress-relieving; and (c) personalized recommendations, generated with collaborative filtering according to personal preferences. Experimental results indicate that the overall clustering effectiveness of K-MeansH surpassed that of K-Means and K-Medoids by approximately 71% and 57%, respectively, and that K-MeansH also surpassed both in accuracy.
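To make the pipeline described in the abstract concrete, the following Python sketch walks through the three stages under stated assumptions: standard K-Means stands in for the paper's K-MeansH variant (whose details are in the full text), the audio feature vectors and EEG-derived relaxation scores are random placeholders (in the paper they would come from tools such as MIRtoolbox and an eSense-style meditation meter), and all function names and parameters are illustrative rather than the authors' implementation.

    # Illustrative sketch only -- standard K-Means stands in for the paper's
    # K-MeansH, and random numbers stand in for MIRtoolbox-style audio features
    # and eSense-style EEG relaxation scores.
    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        """Plain K-Means: returns a cluster label for each row of X."""
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                      else centroids[j] for j in range(k)])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels

    # (a) Precluster the music library by extracted audio features.
    n_songs, n_features = 200, 16
    features = np.random.default_rng(1).random((n_songs, n_features))
    song_cluster = kmeans(features, k=8)

    # (b) EEG feedback: each user has a relaxation score (0..1, higher = more
    #     relaxed) for the songs already listened to; the rest are unknown.
    n_users = 30
    ratings = np.full((n_users, n_songs), np.nan)
    rng = np.random.default_rng(2)
    for u in range(n_users):
        heard = rng.choice(n_songs, size=40, replace=False)
        ratings[u, heard] = rng.random(40)

    # (c) User-based collaborative filtering over the EEG-derived scores,
    #     restricted to clusters the user has already found relaxing.
    def recommend(user, ratings, song_cluster, top_n=5):
        filled = np.nan_to_num(ratings)                      # unknown scores -> 0
        sims = filled @ filled[user]
        sims = sims / (np.linalg.norm(filled, axis=1) * np.linalg.norm(filled[user]) + 1e-9)
        sims[user] = 0.0                                     # exclude the user themselves
        predicted = sims @ filled / (sims.sum() + 1e-9)      # neighbour-weighted scores
        heard = ~np.isnan(ratings[user])
        user_scores = ratings[user][heard]
        good_clusters = np.unique(song_cluster[heard][user_scores > user_scores.mean()])
        candidates = ~heard & np.isin(song_cluster, good_clusters)
        predicted[~candidates] = -np.inf                     # only unheard songs in relaxing clusters
        return np.argsort(predicted)[::-1][:top_n]

    print("Recommended song indices for user 0:", recommend(0, ratings, song_cluster))

In this sketch the cluster labels are used only to narrow the collaborative-filtering candidates to genres the user has already found relaxing; the paper's actual weighting scheme and the K-MeansH refinements may differ.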

Acknowledgments

This work was supported by the Ministry of Science and Technology (MOST) of Taiwan under projects MOST 103-2221-E-415-021- and MOST 104-2221-E-415-003-.

Author information

Corresponding author

Correspondence to Shih-Chang Huang.

About this article

Cite this article

Chang, HY., Huang, SC. & Wu, JH. A personalized music recommendation system based on electroencephalography feedback. Multimed Tools Appl 76, 19523–19542 (2017). https://doi.org/10.1007/s11042-015-3202-4
