
Analysis of expression in simple musical gestures to enhance audio in interfaces

  • Original Article
  • Published in: Virtual Reality

Abstract

Expression could play a key role in the audio rendering of virtual reality applications. Understanding expression is an ambitious scientific challenge, and several studies have investigated analysis techniques for detecting expression in music performances. The knowledge gained from these analyses is widely applicable: embedding expression in audio interfaces can lead to attractive solutions for emphasizing interfaces in mixed-reality environments. Synthesized expressive sounds can be combined with real stimuli to create augmented-reality experiences, and they can be used in multi-sensory stimulations to provide the sensation of a first-person experience in virtual expressive environments. In this work we focus on the expression of violin and flute performances, with reference to the sensorial and affective domains. By means of selected audio features, we derive a set of parameters describing performers’ strategies which are suitable both for tuning expressive synthesis instruments and for enhancing audio in human–computer interfaces.
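The abstract does not list the specific audio features the authors used, but frame-based feature extraction of the kind it alludes to can be sketched briefly. The snippet below is an illustrative example only (all function names are hypothetical, not from the paper): it computes two features commonly used in expressive-performance analysis, RMS energy as a proxy for dynamics and spectral centroid as a proxy for brightness, on a synthetic crescendo tone.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a mono signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def rms_energy(frames):
    """Per-frame RMS energy: a common proxy for performed dynamics."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def spectral_centroid(frames, sr):
    """Per-frame spectral centroid: a common proxy for brightness/timbre."""
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    return (spectra * freqs).sum(axis=1) / (spectra.sum(axis=1) + 1e-12)

# Toy input: a 440 Hz tone with a rising amplitude envelope (a crescendo),
# standing in for a recorded violin or flute gesture.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 440 * t)

frames = frame_signal(signal)
energy = rms_energy(frames)
centroid = spectral_centroid(frames, sr)

print(energy[0] < energy[-1])  # → True (energy rises across the crescendo)
```

Time profiles of such features, rather than single values, are what typically distinguish expressive intentions (e.g. a "soft" versus "aggressive" rendition of the same phrase), so a real analysis would summarize each feature's trajectory per note or per gesture.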



Acknowledgements

This research was supported by the European Network of Excellence “Enactive Interfaces” under the sixth framework program of the European Commission. We thank David Pirrò for developing part of the prototype.


Corresponding author

Correspondence to Luca Mion.

Cite this article

Mion, L., D’Incà, G. Analysis of expression in simple musical gestures to enhance audio in interfaces. Virtual Reality 10, 62–70 (2006). https://doi.org/10.1007/s10055-006-0029-3
