Abstract
Contemporary digital musical instruments offer an abundance of means to generate sound. Although they surpass traditional instruments in their capacity to produce a unique audio-visual act, there is still an unmet need for digital instruments that let performers generate sounds through movement in an intuitive manner. One of the key factors for an authentic digital music act is low latency between movements (user commands) and the corresponding sounds. Here we present such a low-latency interface that maps the user’s kinematic actions onto sound samples. The interface relies on wireless sensor nodes equipped with inertial measurement units and on a real-time algorithm dedicated to the early detection and classification of a variety of movements/gestures performed by the user. The core algorithm is based on approximate inference in a hierarchical generative model with piecewise-linear dynamical components; importantly, the model’s structure is derived from a set of motion gestures. The performance of the Bayesian algorithm was compared against the k-nearest neighbors (k-NN) algorithm, which, in a pre-testing phase, showed the highest classification accuracy among several existing state-of-the-art algorithms. The proposed probabilistic algorithm outperformed the k-NN algorithm on almost all evaluation metrics.
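To make the idea concrete, the sketch below illustrates the general pipeline in Python (NumPy/SciPy/scikit-learn): one linear-Gaussian autoregressive model per gesture accumulates log-likelihood over streaming IMU samples so a decision can be made before the full movement is observed, and a k-NN classifier on complete windows serves as the baseline. This is a minimal illustration only, not the authors' hierarchical model or implementation; the dynamics matrices, noise level, decision threshold, and synthetic data are all assumed for the example.

```python
# Minimal illustrative sketch (not the authors' implementation): early gesture
# classification from streaming IMU samples using one linear-Gaussian
# autoregressive model per gesture, plus a k-NN baseline on complete windows.
# All parameters and the synthetic data below are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
D, T, GESTURES = 3, 60, 3          # IMU channels, samples per recording, classes

def make_dynamics():
    """A stable random linear dynamics matrix standing in for one gesture class."""
    A = rng.normal(scale=0.4, size=(D, D))
    return A * (0.9 / max(1e-9, np.abs(np.linalg.eigvals(A)).max()))

A_bank = [make_dynamics() for _ in range(GESTURES)]   # in practice: learned per gesture

def simulate(g, noise=0.05):
    """Simulate one IMU recording of gesture class g."""
    x = np.zeros((T, D))
    x[0] = rng.normal(size=D)
    for t in range(1, T):
        x[t] = A_bank[g] @ x[t - 1] + rng.normal(scale=noise, size=D)
    return x

def early_classify(x, noise=0.05, threshold=0.95):
    """Accumulate per-model log-likelihoods online; decide as soon as one
    gesture's posterior probability exceeds the threshold (early detection)."""
    logp = np.zeros(GESTURES)                          # uniform prior over gestures
    post = np.full(GESTURES, 1.0 / GESTURES)
    for t in range(1, len(x)):
        for g in range(GESTURES):
            logp[g] += multivariate_normal.logpdf(
                x[t], mean=A_bank[g] @ x[t - 1], cov=noise ** 2 * np.eye(D))
        post = np.exp(logp - logp.max())
        post /= post.sum()
        if post.max() > threshold:
            return int(post.argmax()), t               # early decision
    return int(post.argmax()), len(x) - 1

# k-NN baseline trained on flattened, complete recordings.
X_train = np.array([simulate(g).ravel() for g in range(GESTURES) for _ in range(20)])
y_train = np.repeat(np.arange(GESTURES), 20)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

for g in range(GESTURES):
    x = simulate(g)
    pred, t = early_classify(x)
    print(f"true={g}  probabilistic={pred} (decided at sample {t}/{T})  "
          f"knn={knn.predict(x.ravel()[None])[0]} (needs the full window)")
```

In the paper's setting the per-gesture dynamics would be learned from recorded gesture sets rather than fixed, and the decision threshold would govern the trade-off between response latency and classification accuracy.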
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
About this paper
Cite this paper
Marković, D., Malešević, N. (2018). Adaptive Interface for Mapping Body Movements to Sounds. In: Liapis, A., Romero Cardalda, J., Ekárt, A. (eds) Computational Intelligence in Music, Sound, Art and Design. EvoMUSART 2018. Lecture Notes in Computer Science, vol 10783. Springer, Cham. https://doi.org/10.1007/978-3-319-77583-8_13
DOI: https://doi.org/10.1007/978-3-319-77583-8_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-77582-1
Online ISBN: 978-3-319-77583-8