
Effective music searching approach based on tag combination by exploiting prototypical acoustic content

Published in: Multimedia Tools and Applications

Abstract

Within the music information retrieval community, many studies and applications have focused on tag-based music categorization. A key limitation of music tags is their ambiguity: a single tag covers too many sub-categories. To narrow the search, multiple tags can be combined to specify music clips more precisely. Conventional music recommendation systems, however, may fail to support this, because the clips they retrieve may not be prototypical of each tag in the combination. In this paper, we propose a new technique for ranking proper tag combinations based on the acoustic similarity of music clips. Empirical experiments show that our prototypicality analysis suggests proper tag combinations.
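The core idea of scoring a tag combination by the acoustic cohesion of its clips can be sketched as follows. This is a minimal illustration under assumptions, not the paper's actual method: the feature representation, the cosine-similarity measure, and the function and variable names are all hypothetical.

```python
import numpy as np

def tag_pair_prototypicality(features, tags, tag_a, tag_b):
    """Score a tag pair by the acoustic cohesion of clips carrying both tags.

    features: dict mapping clip_id -> acoustic feature vector (np.ndarray)
    tags:     dict mapping clip_id -> set of tags
    Returns the mean pairwise cosine similarity among clips tagged with
    both tag_a and tag_b, or None if fewer than two such clips exist.
    """
    clips = [c for c, t in tags.items() if tag_a in t and tag_b in t]
    if len(clips) < 2:
        return None
    X = np.stack([features[c] for c in clips])
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    S = X @ X.T                                       # cosine similarity matrix
    n = len(clips)
    # Average the off-diagonal entries (self-similarities on the diagonal are 1).
    return (S.sum() - n) / (n * (n - 1))
```

A tag pair whose shared clips cluster tightly in feature space scores high and would be ranked as a proper combination; a pair whose shared clips are acoustically scattered scores low.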



Acknowledgment

This research was supported by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Creative Content Agency (KOCCA) under the Culture Technology (CT) Research & Development Program 2016.


Corresponding author

Correspondence to Dae-Won Kim.


Cite this article

Lee, J., Chae, J. & Kim, DW. Effective music searching approach based on tag combination by exploiting prototypical acoustic content. Multimed Tools Appl 76, 6065–6077 (2017). https://doi.org/10.1007/s11042-016-3554-4
