
Implementing situation-aware and user-adaptive music recommendation service in semantic web and real-time multimedia computing environment

Multimedia Tools and Applications

Abstract

With the advent of the ubiquitous computing era, many studies have been devoted to situation-aware services in the semantic web environment. One of the most challenging problems is implementing a situation-aware, personalized music recommendation service that considers both the user’s situation and the user’s preferences. Situation-aware music recommendation requires multidisciplinary effort, including low-level feature extraction and analysis, music mood classification, and human emotion prediction. In this paper, we propose a new scheme for a situation-aware and user-adaptive music recommendation service in the semantic web environment. We first discuss how domain knowledge can be used to analyze and retrieve music content semantically, and present a user-adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Building on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology, which models the user’s musical preferences and contexts and supports reasoning about the user’s desired emotions and preferences. COMUS defines an upper music ontology that captures general properties of music such as title, artist, and genre, and it allows domain-specific ontologies, such as music features, moods, and situations, to be added hierarchically for extensibility. Using this context ontology, high-level (implicit) knowledge such as the user’s situation can be inferred from low-level (explicit) knowledge through logical reasoning rules. A distinguishing feature of our ontology is that it can express detailed and complex relations among music clips, moods, and situations, which enables users to find appropriate music. We present experiments we performed as a case study in music recommendation.
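To make the ontology-based recommendation flow concrete, the following is a minimal sketch in Python using rdflib. It encodes COMUS-style classes (Music, Mood, Situation) and relations, then matches clips to a situation. The comus namespace, the property names (hasMood, desiredMood, title), and the sample individuals are hypothetical illustrations based on the abstract's description, not the actual COMUS vocabulary; likewise, a SPARQL query stands in here for the paper's ontology reasoning rules.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

# Hypothetical namespace; the real COMUS IRIs are not given in the abstract.
COMUS = Namespace("http://example.org/comus#")

g = Graph()
g.bind("comus", COMUS)

# Upper-ontology classes named after the concepts in the abstract.
for name in ("Music", "Mood", "Situation"):
    g.add((COMUS[name], RDF.type, OWL.Class))

# A music clip with a general property (title) and an assigned mood.
g.add((COMUS.clip1, RDF.type, COMUS.Music))
g.add((COMUS.clip1, COMUS.title, Literal("Clair de Lune")))
g.add((COMUS.clip1, COMUS.hasMood, COMUS.Calm))

# A situation (e.g., studying) linked to the mood a listener desires in it.
g.add((COMUS.studying, RDF.type, COMUS.Situation))
g.add((COMUS.studying, COMUS.desiredMood, COMUS.Calm))

# Rule-like query standing in for the reasoning step:
# recommend clips whose mood matches the current situation's desired mood.
QUERY = """
PREFIX comus: <http://example.org/comus#>
SELECT ?music ?title WHERE {
  comus:studying comus:desiredMood ?mood .
  ?music comus:hasMood ?mood ;
         comus:title ?title .
}
"""
for row in g.query(QUERY):
    print(row.music, row.title)  # -> http://example.org/comus#clip1 Clair de Lune
```

In the actual system, matching would be driven by OWL reasoning rules over a much richer hierarchy of moods and situations rather than a hand-written query, but the pattern is the same: from a situation, infer the desired mood, then retrieve the clips that carry it.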



Acknowledgment

This research was supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1101-0008).

Author information


Corresponding author

Correspondence to Eenjun Hwang.


Cite this article

Rho, S., Song, S., Nam, Y. et al. Implementing situation-aware and user-adaptive music recommendation service in semantic web and real-time multimedia computing environment. Multimed Tools Appl 65, 259–282 (2013). https://doi.org/10.1007/s11042-011-0803-4

