Article
DOI: 10.1145/1142473.1142587

InMAF: indexing music databases via multiple acoustic features

Published: 27 June 2006

ABSTRACT

Music information processing has become increasingly important due to the ever-growing amount of music data from emerging applications. In this demonstration, we present a novel approach for generating small but comprehensive music descriptors to facilitate efficient content-based music management (access and retrieval, in particular). Unlike previous approaches that rely on low-level spectral features adapted from speech analysis technology, our approach integrates human music perception to enhance the accuracy of retrieval and classification via PCA and neural networks. The superiority of our method is demonstrated by comparing it with state-of-the-art approaches in terms of music classification, query effectiveness, and robustness against various audio distortions and alterations.
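The abstract only sketches the pipeline (several acoustic feature sets compressed into a compact descriptor via PCA, with a neural network used for classification); no code accompanies the demonstration page. The snippet below is a minimal, hypothetical Python/scikit-learn sketch of that general idea, not the authors' implementation: the feature dimensionality, descriptor size, genre count, and network shape are illustrative assumptions rather than values from the paper.

```python
# Hypothetical sketch (not the authors' code): compress concatenated acoustic
# feature vectors with PCA into a compact descriptor, then train a small
# neural-network classifier on those descriptors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in data: 500 tracks, each described by a 120-dimensional vector formed
# by concatenating several acoustic feature sets (e.g. timbre, rhythm, pitch
# statistics); dimensions and the number of genres are made up for illustration.
n_tracks, n_raw_dims, n_genres = 500, 120, 5
features = rng.normal(size=(n_tracks, n_raw_dims))
genres = rng.integers(0, n_genres, size=n_tracks)

# PCA reduces the concatenated features to a small descriptor that is cheaper
# to index and compare than the raw feature vector.
pca = PCA(n_components=10)
descriptors = pca.fit_transform(features)

# A small feed-forward network learns genre labels from the compact descriptors.
X_train, X_test, y_train, y_test = train_test_split(
    descriptors, genres, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

On the random stand-in data above, accuracy will stay near chance; in a real system the feature vectors would be extracted from audio before the PCA and classification steps.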


Published in

SIGMOD '06: Proceedings of the 2006 ACM SIGMOD international conference on Management of data
June 2006, 830 pages
ISBN: 1595934340
DOI: 10.1145/1142473

        Copyright © 2006 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



        Acceptance Rates

Overall Acceptance Rate: 785 of 4,003 submissions, 20%
