DOI: 10.1145/2072529.2072539

Music genre classification using explicit semantic analysis

Published: 30 November 2011

Abstract

Music genre classification is the assignment of a piece of music to one of a set of human-defined categorical labels, a task traditionally performed by hand. Automatic music genre classification, a fundamental problem in the music information retrieval community, has been attracting increasing attention as the digital music industry grows. Most current methods extract short-time features, often in combination with high-level audio features, to perform genre classification. However, representing short-time, windowed features in a semantic space has received little attention. This paper proposes a vector space model of mel-frequency cepstral coefficients (MFCCs) that can, in turn, be used by a supervised learning scheme for music genre classification. Inspired by explicit semantic analysis of textual documents based on term frequency-inverse document frequency (tf-idf) weighting, a semantic space model is proposed to represent music samples. The effectiveness of this representation of audio samples is then demonstrated in music genre classification with several machine learning classifiers, including support vector machines (SVMs) and k-nearest neighbor classification. Our preliminary results suggest that the proposed method is comparable to genre classification methods that use low-level audio features.
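The abstract only outlines the pipeline, so the Python sketch below illustrates one plausible reading of the tf-idf analogy: short-time MFCC vectors are quantized against a learned codebook of "audio words", each track becomes a tf-idf-weighted vector over that vocabulary, and a linear SVM is trained on the result. The codebook size, MFCC settings, library choices (librosa, scikit-learn), and function names here are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch of a tf-idf "audio word" genre classifier (assumed setup, not the paper's exact method).
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.svm import LinearSVC

N_MFCC = 13          # MFCC coefficients per short-time frame
VOCAB_SIZE = 500     # number of "audio words" in the codebook; illustrative value

def mfcc_frames(path):
    """Return the short-time MFCC vectors of one track, one row per frame."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC).T   # (n_frames, N_MFCC)

def build_codebook(train_paths):
    """Learn the audio-word vocabulary by clustering pooled training frames."""
    pooled = np.vstack([mfcc_frames(p) for p in train_paths])
    return KMeans(n_clusters=VOCAB_SIZE, n_init=10, random_state=0).fit(pooled)

def term_counts(paths, codebook):
    """Map each track to a histogram of audio-word occurrences (its term counts)."""
    rows = []
    for p in paths:
        words = codebook.predict(mfcc_frames(p))                # frame -> nearest audio word
        rows.append(np.bincount(words, minlength=VOCAB_SIZE))
    return np.vstack(rows)

def train_genre_classifier(train_paths, train_genres):
    """Fit the codebook, the tf-idf weighting, and a linear SVM on labelled tracks."""
    codebook = build_codebook(train_paths)
    tfidf = TfidfTransformer()
    X = tfidf.fit_transform(term_counts(train_paths, codebook))  # tf-idf track vectors
    clf = LinearSVC().fit(X, train_genres)
    return codebook, tfidf, clf

def predict_genres(paths, codebook, tfidf, clf):
    """Classify new tracks with the trained pipeline."""
    return clf.predict(tfidf.transform(term_counts(paths, codebook)))

Swapping LinearSVC for sklearn.neighbors.KNeighborsClassifier yields the k-NN variant mentioned in the abstract. The paper's explicit-semantic-analysis construction of the semantic space may differ in detail; this sketch only captures the audio-word tf-idf idea.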




    Published In

    MIRUM '11: Proceedings of the 1st international ACM workshop on Music information retrieval with user-centered and multimodal strategies
    November 2011
    70 pages
ISBN: 9781450309868
DOI: 10.1145/2072529

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. audio word
    2. explicit semantic analysis
    3. music genre classification
    4. vocabulary

    Qualifiers

    • Research-article

    Conference

    MM '11
    Sponsor:
    MM '11: ACM Multimedia Conference
    November 30, 2011
Scottsdale, Arizona, USA


    Cited By

• (2024) Responsible Music Genre Classification Using Interpretable Model-Agnostic Visual Explainers. SN Computer Science 6(1). DOI: 10.1007/s42979-024-03584-9. Online publication date: 27-Dec-2024.
• (2022) Robustness of musical features on deep learning models for music genre classification. Expert Systems with Applications: An International Journal 199(C). DOI: 10.1016/j.eswa.2022.116879. Online publication date: 1-Aug-2022.
• (2022) Hierarchical mining with complex networks for music genre classification. Digital Signal Processing 127(C). DOI: 10.1016/j.dsp.2022.103559. Online publication date: 1-Jul-2022.
• (2020) Masked Conditional Neural Networks for sound classification. Applied Soft Computing, article 106073. DOI: 10.1016/j.asoc.2020.106073. Online publication date: Jan-2020.
• (2017) Music Genre Classification Using Masked Conditional Neural Networks. Neural Information Processing, pages 470-481. DOI: 10.1007/978-3-319-70096-0_49. Online publication date: 26-Oct-2017.
• (2015) Deep Neural Networks: A Case Study for Music Genre Classification. 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), pages 655-660. DOI: 10.1109/ICMLA.2015.160. Online publication date: Dec-2015.
• (2015) Fusion of Text and Audio Semantic Representations Through CCA. Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction, pages 66-73. DOI: 10.1007/978-3-319-14899-1_7. Online publication date: 4-Jan-2015.
• (2014) Econo-ESA in semantic text similarity. SpringerPlus 3(1). DOI: 10.1186/2193-1801-3-149. Online publication date: 19-Mar-2014.
• (2014) A Systematic Evaluation of the Bag-of-Frames Representation for Music Information Retrieval. IEEE Transactions on Multimedia 16(5):1188-1200. DOI: 10.1109/TMM.2014.2311016. Online publication date: Aug-2014.
• (2014) Music genre classification via joint sparse low-rank representation of audio features. IEEE/ACM Transactions on Audio, Speech and Language Processing 22(12):1905-1917. DOI: 10.1109/TASLP.2014.2355774. Online publication date: 1-Dec-2014.
