Evaluation of Computer Systems for Expressive Music Performance

Abstract

In this chapter, we review and summarize different methods for the evaluation of computer systems for expressive music performance (CSEMPs). The main categories of evaluation methods are (1) comparisons with measurements from real performances, (2) listening experiments, and (3) production experiments. Listening experiments can be of different types: for example, subjects may be asked to rate a particular expressive characteristic (such as the emotion conveyed or the overall expression) or to rate the effect of a particular acoustic cue. In production experiments, subjects actively manipulate system parameters to achieve a target performance. Measures for estimating the difference between performances are discussed in relation to the objectives of the model and the objectives of the evaluation. A dedicated section presents and discusses Rencon (Performance Rendering Contest), a contest in which expressive performances of the same score generated by different CSEMPs are compared. Practical examples from previous works are presented, commented on, and analysed.
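To make the idea of a difference measure concrete, the sketch below (ours, not from the chapter; all names and numbers are illustrative) compares a model-generated performance with a measured reference, both represented as per-note inter-onset intervals (IOIs), using two common measures: the mean absolute deviation and the Pearson correlation of the normalized timing curves.

    # A minimal sketch (not from the chapter) of two difference measures
    # between a model-generated performance and a measured reference.
    # Both performances are assumed to be given as per-note
    # inter-onset intervals (IOIs) in seconds; all names and numbers
    # below are illustrative, not taken from the chapter.
    import numpy as np

    def normalized_iois(iois):
        """Divide IOIs by their mean, removing overall tempo and
        leaving a dimensionless timing-deviation curve (1.0 = nominal)."""
        iois = np.asarray(iois, dtype=float)
        return iois / iois.mean()

    def mean_absolute_deviation(model_iois, reference_iois):
        """Average absolute difference between normalized IOI curves."""
        return float(np.mean(np.abs(normalized_iois(model_iois)
                                    - normalized_iois(reference_iois))))

    def timing_correlation(model_iois, reference_iois):
        """Pearson correlation between normalized IOI curves; values
        near 1 mean similar timing shapes even if magnitudes differ."""
        return float(np.corrcoef(normalized_iois(model_iois),
                                 normalized_iois(reference_iois))[0, 1])

    if __name__ == "__main__":
        # Hypothetical IOIs (s) for the same eight-note phrase, with a
        # final ritardando in both performances.
        reference = [0.50, 0.48, 0.52, 0.55, 0.60, 0.58, 0.70, 0.90]
        model     = [0.50, 0.50, 0.50, 0.54, 0.58, 0.60, 0.72, 0.85]
        print("Mean absolute deviation:", mean_absolute_deviation(model, reference))
        print("Timing correlation:     ", timing_correlation(model, reference))

Normalizing by the mean IOI removes overall tempo, so only the shape of the timing deviations is compared; whether this is appropriate depends on the objectives of the model and of the evaluation, as discussed in the chapter.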

Notes

  1. The scores were polyphonic and were selected from a battery of stimuli specially composed at the University of Montreal (www.brams.umontreal.ca/plab/downloads/Emotional_Clips.zip). This battery consisted of 14 scores per emotion. All 14 × 4 scores were rated along four adjective scales (happy, sad, scary, and peaceful) in a previous study [39]. The highest-rated and least ambiguous score for each emotion was selected as the stimulus in our experiment.

  2. http://skatta.sourceforge.net

  3. http://renconmusic.org/icmc2005/

  4. http://smc2011.renconmusic.org/

  5. Strong accept = 5, weak accept = 4, borderline = 3, weak reject = 2, strong reject = 1.

  6. http://smc2011.renconmusic.org/2011/07/15/evaluation-results-stage-ii/

  7. http://imslp.info/files/imglnks/usimg/f/fb/IMSLP30364-PMLP01410-Beethoven_Sonaten_Piano_Band1_Peters_Op13.pdf

  8. http://smc2011.renconmusic.org/2011/07/15/selected-setpiece/

References

  1. Widmer G, Goebl W (2004) Computational models of expressive music performance: the state of the art. J New Music Res 33(3):203–216

  2. Canazza S, De Poli G, Drioli C, Rodà A, Vidolin A (2004) Modeling and control of expressiveness in music performance. Proc IEEE 92(4):686–701

  3. Bresin R, Friberg A (2000) Emotional coloring of computer controlled music performance. Comput Music J 24(4):44–63

  4. Todd NPMcA (1985) A model of expressive timing in tonal music. Music Percept 3:33–58

  5. Friberg A, Bresin R, Frydén L, Sundberg J (1998) Musical punctuation on the microlevel: automatic identification and performance of small melodic units. J New Music Res 27(3):271–292

  6. Cambouropoulos E (1998) Towards a general computational theory of musical structure. PhD thesis, Faculty of Music and Department of Artificial Intelligence, University of Edinburgh

  7. Ahlbäck S (2004) Melody beyond notes. A study in melody cognition. PhD thesis, Department of Musicology, Göteborg University

  8. Bresin R (1998) Artificial neural networks based models for automatic performance of musical scores. J New Music Res 27(3):239–270

  9. Goebl W, Dixon S, De Poli G, Friberg A, Bresin R, Widmer G (2008) Sense in expressive music performance: data acquisition, computational studies, models. In: Polotti P, Rocchesso D (eds) Sound to sense – sense to sound: a state of the art in sound and music computing. Logos Verlag, Berlin, pp 195–242

  10. Goebl W, Widmer G (2009) On the use of computational methods for expressive music performance. In: Crawford T, Gibson L (eds) Modern methods for musicology: prospects, proposals and realities. Ashgate, Aldershot, pp 93–113

  11. Repp BH (1992) Diversity and commonality in music performance: an analysis of timing microstructure in Schumann's "Träumerei". J Acoust Soc Am 92(5):2546–2568

  12. Friberg A, Sundström A (2002) Swing ratios and ensemble timing in jazz performance: evidence for a common rhythmic pattern. Music Percept 19(3):333–349

  13. Repp BH (1999) A microcosm of musical expression: II. Quantitative analysis of pianists' dynamics in the initial measures of Chopin's Etude in E major. J Acoust Soc Am 105:1972–1988

  14. De Poli G (2004) Methodologies for expressiveness modeling of and for music performance. J New Music Res 33(3):189–202

  15. Friberg A (1995) Matching the rule parameters of Phrase arch to performances of "Träumerei": a preliminary study. In: Friberg A, Sundberg J (eds) Proceedings of the KTH symposium on Grammars for music performance, 27 May 1995, pp 37–44

  16. Friberg A, Sundberg J (1995) Time discrimination in a monotonic, isochronous sequence. J Acoust Soc Am 98(5):2524–2531

  17. Repp BH (1995) Detectability of duration and intensity increments in melody tones: a partial connection between music perception and performance. Percept Psychophys 57(8):1217–1232

  18. Zanon P, De Poli G (2003) Estimation of parameters in rule systems for expressive rendering in musical performance. Comput Music J 27:29–46

  19. Zanon P, De Poli G (2003) Estimation of time-varying parameters in rule systems for music performance. J New Music Res 32(3):295–316

  20. Todd NPMcA (1989) A computational model of rubato. Contemp Music Rev 3:69–88

  21. Juslin PN, Friberg A, Bresin R (2002) Toward a computational model of expression in performance: the GERM model. Musicae Scientiae, Special issue 2001–2002, pp 63–122

  22. Sundberg J, Friberg A, Bresin R (2003) Attempts to reproduce a pianist's expressive timing with Director Musices performance rules. J New Music Res 32(3):317–326

  23. Friberg A, Battel GU (2002) Structural communication. In: Parncutt R, McPherson GE (eds) The science and psychology of music performance: creative strategies for teaching and learning. Oxford University Press, New York, pp 199–218

  24. Marsland S (2009) Machine learning: an algorithmic perspective. Chapman & Hall/CRC, Boca Raton

  25. Friberg A, Sundberg J (1999) Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners. J Acoust Soc Am 105(3):1469–1484

  26. Widmer G (2003) Discovering simple rules in complex data: a meta-learning algorithm and some surprising musical discoveries. Artif Intell 146(2):129–148

  27. Widmer G (2002) Machine discoveries: a few simple, robust local expression principles. J New Music Res 31:37–50

  28. Friberg A, Bresin R, Sundberg J (2006) Overview of the KTH rule system for musical performance. Adv Cogn Psychol Spec Issue Music Perform 2(2–3):145–161

  29. Bresin R (2001) Articulation rules for automatic music performance. In: Schloss A, Dannenberg R, Driessen P (eds) Proceedings of the international computer music conference – ICMC 2001. ICMA, San Francisco, pp 294–297

  30. Goebl W (2001) Melody lead in piano performance: expressive device or artifact? J Acoust Soc Am 110(1):563–572

  31. Bjurling J (2007) Timing in piano music – a model of melody lead. Master of Science thesis, KTH Royal Institute of Technology, School of Computer Science and Communication, Stockholm, Sweden. ISSN 1653-5715. http://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2007/rapporter07/bjurling_johan_07115.pdf

  32. Friberg A (2006) pDM: an expressive sequencer with real-time control of the KTH music performance rules. Comput Music J 30(1):37–48

  33. Bjurling J, Bresin R (2008) Timing in piano music – testing a model of melody lead. In: Proceedings of ICMPC 10, Sapporo

  34. Gabrielsson A, Lindström E (2010) The role of structure in the musical expression of emotions. In: Juslin PN, Sloboda JA (eds) Handbook of music and emotion: theory, research, applications. Oxford University Press, Oxford, pp 367–400

  35. Juslin PN, Timmers R (2010) Expression and communication of emotion in music performance. In: Juslin PN, Sloboda JA (eds) Handbook of music and emotion: theory, research, applications. Oxford University Press, Oxford, pp 453–489

  36. Repp BH (1997) Acoustics, perception, and production of legato articulation on a computer-controlled grand piano. J Acoust Soc Am 102(3):1878–1890

  37. Bresin R, Battel GU (2000) Articulation strategies in expressive piano performance. Analysis of legato, staccato, repeated notes in performances of the Andante movement of Mozart's sonata in G major (K 545). J New Music Res 29(3):211–224

  38. Bresin R, Friberg A (2011) Emotion rendering in music: range and characteristic values of seven musical variables. Cortex 47(9):1068–1081

  39. Vieillard S, Peretz I, Gosselin N, Khalfa S, Gagnon L, Bouchard B (2007) Happy, sad, scary and peaceful musical excerpts for research on emotions. Cogn Emot 22(4):720–752

  40. Hashida M, Nakra M, Katayose H, Murao T, Hirata K, Suzuki K, Kitahara T (2008) Rencon: performance rendering contest for automated music systems. In: Proceedings of the international conference on music perception and cognition (ICMPC 2008)

  41. Hiraga R, Hashida M, Hirata K, Katayose H, Noike K (2002) RENCON: toward a new evaluation method for performance rendering system. In: Proceedings of the international computer music conference, pp 357–361

  42. Hiraga R, Bresin R, Hirata K, Katayose H (2003) Rencon in 2002. In: Proceedings of the IJCAI-03 Rencon workshop, Acapulco, Mexico, pp 59–64

  43. Hiraga R, Bresin R, Hirata K, Katayose H (2004) Rencon 2004: Turing test for musical expression. In: NIME '04: proceedings of the 4th international conference on new interfaces for musical expression, Hamamatsu, Shizuoka, Japan, pp 120–123

  44. Hiraga R, Bresin R, Katayose H (2006) Rencon 2005. In: Proceedings of the 20th annual conference of the Japanese Society for Artificial Intelligence (1D2-1)

  45. Noike K, Toyoda K, Katayose H (2005) An initial implementation of corpus based performance rendering system "COPER". Inf Process Soc Jpn (IPSJ) 2005(14):67–70

  46. Hashida M, Nagata N, Katayose H (2005) A study of description capability of performance characteristics on PopE. In: The 19th annual conference of JSAI

  47. Widmer G, Flossmann S, Grachten M (2009) YQX plays Chopin. AI Mag 30(3):35–48

Author information

Corresponding author

Correspondence to Roberto Bresin.

Questions

  1. What are the two main evaluation methods that can be identified overall in CSEMPs?

  2. What is the difference between generality and flexibility in a model?

  3. For the comparison with ground truth data approach, name the three ways of modelling and evaluating a system.

  4. In which of the above three approaches is the default evaluation method implicit in the methodology?

  5. Describe the analysis-by-synthesis modelling approach.

  6. What are the elements which may cause the melody lead effect in human piano playing?

  7. Describe an interaction listening test.

  8. What is one possible way of establishing an evaluation method which could be applied to different CSEMPs?

  9. What is an important aspect which could contribute towards evaluation being done more seriously?

  10. How might the major issue of lack of performance data and limited test material be addressed by the research community?

Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Bresin, R., Friberg, A. (2013). Evaluation of Computer Systems for Expressive Music Performance. In: Kirke, A., Miranda, E. (eds) Guide to Computing for Expressive Music Performance. Springer, London. https://doi.org/10.1007/978-1-4471-4123-5_7

  • DOI: https://doi.org/10.1007/978-1-4471-4123-5_7

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4122-8

  • Online ISBN: 978-1-4471-4123-5

  • eBook Packages: Computer Science, Computer Science (R0)
