
JACIII Vol.23 No.4, pp. 782-790 (2019)
doi: 10.20965/jaciii.2019.p0782

Paper:

Real-Time Optical Music Recognition System for Dulcimer Musical Robot

Zhe Xiao*,**, Xin Chen*,**,†, and Li Zhou***

*School of Automation, China University of Geosciences
No.388 Lumo Road, Hongshan District, Wuhan 430074, China

**Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems
No.388 Lumo Road, Hongshan District, Wuhan 430074, China

***School of Arts and Communication, China University of Geosciences
No.388 Lumo Road, Hongshan District, Wuhan 430074, China

†Corresponding author

Received: February 19, 2019
Accepted: March 29, 2019
Published: July 20, 2019
Keywords: musical robot, OMR, notation recognition, staff line removal, shape model descriptor
Abstract

Traditional optical music recognition (OMR) is an important technology for automatically recognizing scanned paper music sheets. In this study, traditional OMR is combined with robotics, and a real-time OMR system for a dulcimer musical robot is proposed. The system gives the musical robot a stronger ability to perceive and understand music: it reads music scores, and the recognized information is converted into a standard electronic music file for the dulcimer musical robot, enabling real-time performance. During recognition, note groups and isolated notes are treated separately. Note groups, which have special structures, are identified by primitive decomposition and structural analysis and are decomposed into three fundamental elements: note stem, note head, and note beams. Isolated music symbols are recognized using shape model descriptors. Tests on real images captured live by a camera show that the proposed method achieves a higher recognition rate.
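To illustrate the staff-line removal step named in the keywords, the following is a minimal Python/OpenCV sketch of a common morphology-based approach, not the authors' implementation. It assumes a binarized camera image of the score with white ink on a black background; the function name, kernel sizes, and file paths are illustrative placeholders.

# Minimal sketch of staff-line removal (assumed morphological approach,
# not the method described in the paper).
import cv2
import numpy as np

def remove_staff_lines(binary_img: np.ndarray, staff_thickness: int = 2) -> np.ndarray:
    """Suppress long horizontal runs (staff lines) while keeping note heads,
    stems, and beams largely intact."""
    h, w = binary_img.shape
    # Extract candidate staff lines with a long, thin horizontal kernel.
    line_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (w // 4, staff_thickness))
    staff_lines = cv2.morphologyEx(binary_img, cv2.MORPH_OPEN, line_kernel)
    # Remove the detected lines from the original image.
    no_staff = cv2.subtract(binary_img, staff_lines)
    # Close small vertical gaps left where lines crossed symbols.
    repair_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 3))
    return cv2.morphologyEx(no_staff, cv2.MORPH_CLOSE, repair_kernel)

if __name__ == "__main__":
    # "score.png" is a placeholder for an image of a score captured by a camera.
    gray = cv2.imread("score.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    symbols_only = remove_staff_lines(binary)
    cv2.imwrite("score_no_staff.png", symbols_only)

The isolated symbols remaining after this step would then be passed to segmentation and classification (e.g., the shape-model-descriptor recognition described in the paper) before conversion to an electronic music file.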

Cite this article as:
Z. Xiao, X. Chen, and L. Zhou, “Real-Time Optical Music Recognition System for Dulcimer Musical Robot,” J. Adv. Comput. Intell. Intell. Inform., Vol.23 No.4, pp. 782-790, 2019.
