ABSTRACT
The distribution of royalty fees to music rights holders is slow and inefficient because music recognition and music licensing processes lack automation. An improved system must recognise different versions of a musical work, such as remixes or cover versions, so that each work can be clearly assessed and uniquely identified. Our music data matching system, MDMS, queries a large collection of indexed and stored music pieces with a short excerpt of a music piece. The system retrieves the closest stored variant of the query by combining music fingerprints of the underlying melody with signal processing techniques. Tailored indices built on fingerprint hashes accelerate retrieval across a large corpus of stored music. Matches are found even if the stored versions differ from the query song in one or more musical features --- tempo, key/mode, presence of instruments/vocals, or singer --- and these differences are highlighted in the output.
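The abstract does not detail MDMS's internals, but fingerprint-hash indexing of this general kind is commonly built as an inverted index from landmark hashes to (song, time-offset) pairs, with matching by offset-consistency voting, in the spirit of constellation-map audio fingerprinting. The sketch below is a minimal illustration of that idea only; the peak lists, `fan_out` parameter, and `FingerprintIndex` class are hypothetical and not taken from the paper.

```python
from collections import defaultdict

def landmark_hashes(peaks, fan_out=3):
    """Pair each spectral peak (time, freq) with a few later peaks;
    the hash is (freq1, freq2, time delta), anchored at time t1."""
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.append(((f1, f2, t2 - t1), t1))
    return hashes

class FingerprintIndex:
    def __init__(self):
        # hash -> list of (song_id, anchor time in the stored song)
        self.index = defaultdict(list)

    def add(self, song_id, peaks):
        for h, t in landmark_hashes(peaks):
            self.index[h].append((song_id, t))

    def query(self, peaks):
        """Vote per (song, time-offset difference): a consistent
        offset across many matching hashes indicates a real match,
        not a chance hash collision."""
        votes = defaultdict(int)
        for h, t in landmark_hashes(peaks):
            for song_id, t_db in self.index[h]:
                votes[(song_id, t_db - t)] += 1
        return max(votes, key=votes.get) if votes else None

# Toy (time, frequency) peak lists standing in for real spectrogram peaks.
idx = FingerprintIndex()
idx.add("song_a", [(0, 100), (1, 200), (2, 150), (3, 180)])
idx.add("song_b", [(0, 300), (1, 310), (2, 320), (3, 330)])
# Query with a short excerpt of song_a, shifted by 10 time units.
match = idx.query([(10, 100), (11, 200), (12, 150)])
# match -> ("song_a", -10): same song, consistent offset of -10
```

Because the hash encodes relative frequencies and time deltas rather than absolute positions, a query excerpt taken from anywhere in the song still votes for one consistent offset; variant-tolerant matching (tempo or key changes) would require additional normalisation on top of such a scheme.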
Index Terms
MDMS: Music Data Matching System for Query Variant Retrieval