
An interactive Whistle-to-Music composing system based on transcription, variation and chords generation

Multimedia Tools and Applications

Abstract

Most people can whistle, sing or hum a song they are familiar with, yet composing a song is difficult for people without formal musical training. In this paper, we present an interactive Whistle-to-Music composing system with which a user can compose MIDI music by whistling into a microphone. The user can then experiment with computer-aided composition, such as melodic variation and chord generation. Transcription is fast enough that MIDI notes are echoed back as immediate feedback while the user is still whistling. For computer-aided composition, the whistled melodic fragments are developed through variation and accompanied by stylistic chord generation. We also study users' experiences and report an experiment that measures how accurately people can whistle along with a target melody. This preliminary prototype proves to be a handy tool for computer-aided MIDI music creation.
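
The paper's source code is not reproduced on this page, but the core pipeline the abstract describes (track the pitch of the whistle, map it to MIDI note numbers, and segment the stream into notes) can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: the autocorrelation pitch estimator, the frame and hop sizes, and the assumed whistle range of roughly 500-4000 Hz are assumptions introduced for the example.

    # Illustrative whistle-to-MIDI transcription sketch (not the paper's code):
    # estimate the fundamental frequency of each audio frame by autocorrelation,
    # convert Hz to a MIDI note number, and merge consecutive frames with the
    # same pitch into notes.
    import numpy as np

    SAMPLE_RATE = 44100
    FRAME = 2048                  # analysis window (samples) - assumption
    HOP = 512                     # hop between frames (samples) - assumption
    FMIN, FMAX = 500.0, 4000.0    # rough whistle range in Hz - assumption

    def frame_pitch_hz(frame):
        """Estimated fundamental frequency of one frame, or None if unvoiced."""
        frame = frame - frame.mean()
        if np.max(np.abs(frame)) < 1e-3:      # simple silence gate
            return None
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo = int(SAMPLE_RATE / FMAX)          # shortest plausible period
        hi = int(SAMPLE_RATE / FMIN)          # longest plausible period
        lag = lo + np.argmax(corr[lo:hi])
        return SAMPLE_RATE / lag

    def hz_to_midi(f):
        """Standard equal-temperament mapping: A4 = 440 Hz = MIDI note 69."""
        return int(round(69 + 12 * np.log2(f / 440.0)))

    def transcribe(signal):
        """Convert a mono whistle signal into a list of (midi_note, start_s, dur_s)."""
        notes = []
        current = None            # (midi_note, start_frame_index)
        n_frames = (len(signal) - FRAME) // HOP
        for i in range(n_frames):
            f0 = frame_pitch_hz(signal[i * HOP: i * HOP + FRAME])
            note = hz_to_midi(f0) if f0 else None
            if current and note != current[0]:        # pitch changed or stopped
                start = current[1] * HOP / SAMPLE_RATE
                notes.append((current[0], start, i * HOP / SAMPLE_RATE - start))
                current = None
            if note is not None and current is None:  # a new note begins
                current = (note, i)
        if current:                                   # close the final note
            start = current[1] * HOP / SAMPLE_RATE
            notes.append((current[0], start, n_frames * HOP / SAMPLE_RATE - start))
        return notes

    if __name__ == "__main__":
        # Synthetic check: a one-second 880 Hz tone should yield MIDI note 81 (A5).
        t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
        print(transcribe(np.sin(2 * np.pi * 880 * t)))

In the actual system, transcription runs on the live microphone stream so that notes can be echoed back while the user is still whistling; this sketch processes a complete recording for simplicity. The melodic-variation and chord-generation stages described in the abstract would operate on the resulting note list.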



Author information

Corresponding author

Correspondence to Hung-Che Shen.


About this article

Cite this article

Shen, HC., Lee, CN. An interactive Whistle-to-Music composing system based on transcription, variation and chords generation. Multimed Tools Appl 53, 253–269 (2011). https://doi.org/10.1007/s11042-010-0510-6

