An Overview of Computer Systems for Expressive Music Performance

Abstract

This chapter surveys research into automated and semi-automated computer systems for the expressive performance of music. We examine the motivation for such systems and then review a significant sample of the systems developed over the last 30 years. To highlight possible directions for future research, the survey is organised around four primary terms of reference: testing status, expressive representation, polyphonic ability and performance creativity.



Acknowledgements

This work was financially supported by the EPSRC-funded project ‘Learning the Structure of Music’, grant EP/D062934/1. An earlier version of this chapter was published in ACM Computing Surveys Vol. 42, No. 1.

Author information


Correspondence to Alexis Kirke.


Questions

  1. Give two examples of why humans make their performances sound so different to the so-called perfect performance a computer would give.

  2. What is the purpose of the 'performance context' module in a generic computer system for expressive music performance?

  3. What are two examples of ways in which the performance knowledge system might store its information?

  4. Give five reasons for enabling computers to perform music expressively.

  5. What is the most common form of instrument used in studying computer systems for expressive music performance?

  6. What are the two most common forms of expressive performance action? (A minimal illustrative sketch of such actions follows these questions.)

  7. Why is musical structure analysis so significant in computer systems for expressive music performance?

  8. In what ways does most Western music usually have a hierarchical structure?

  9. What are the potential advantages of combining algorithmic composition with expressive performance?

  10. Do most of the CSEMPs discussed in this chapter deal with MIDI or audio formats?
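
The systems surveyed in the chapter apply, among other things, deviations in timing and loudness to a nominally "deadpan" score. Purely as an illustration of that idea, and of the kind of expressive performance action referred to in Question 6, the following is a minimal, hypothetical Python sketch. The Note class, the phrase_arch curve and the render function are invented for this example; they are not taken from the chapter or from any of the systems it surveys.

```python
# Hypothetical sketch: apply timing and loudness deviations to a flat score.
# All names and the arch-shaped deviation curve are illustrative assumptions.

from dataclasses import dataclass, replace
from typing import List


@dataclass
class Note:
    onset: float     # nominal onset time in beats
    duration: float  # nominal duration in beats
    pitch: int       # MIDI note number
    velocity: int    # nominal MIDI velocity (loudness), 0-127


def phrase_arch(position: float) -> float:
    """Simple arch over a phrase: peaks mid-phrase, falls to zero at the ends.
    `position` runs from 0.0 (phrase start) to 1.0 (phrase end)."""
    return 4.0 * position * (1.0 - position)


def render(notes: List[Note], tempo_depth: float = 0.08,
           dynamics_depth: float = 12.0) -> List[Note]:
    """Apply small timing and velocity deviations to a deadpan performance."""
    total = max(n.onset + n.duration for n in notes)
    performed = []
    clock = 0.0
    prev_onset = 0.0
    for n in sorted(notes, key=lambda note: note.onset):
        arch = phrase_arch(n.onset / total)
        # Timing deviation: stretch inter-onset intervals where the arch is
        # low (phrase ends), so the rendering relaxes towards phrase boundaries.
        ioi = (n.onset - prev_onset) * (1.0 + tempo_depth * (1.0 - arch))
        clock += ioi
        prev_onset = n.onset
        # Loudness deviation: raise velocity towards the phrase climax.
        vel = int(min(127, n.velocity + dynamics_depth * arch))
        performed.append(replace(n, onset=clock, velocity=vel))
    return performed


if __name__ == "__main__":
    melody = [Note(onset=i, duration=1.0, pitch=60 + i, velocity=64) for i in range(8)]
    for n in render(melody):
        print(f"onset={n.onset:.3f}  pitch={n.pitch}  velocity={n.velocity}")
```

A real computer system for expressive music performance would, of course, derive such deviations from a structural analysis of the score, from hand-crafted rules or from models learned from human performances, rather than from a fixed arch as in this toy example.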


Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Kirke, A., Miranda, E.R. (2013). An Overview of Computer Systems for Expressive Music Performance. In: Kirke, A., Miranda, E. (eds) Guide to Computing for Expressive Music Performance. Springer, London. https://doi.org/10.1007/978-1-4471-4123-5_1


  • DOI: https://doi.org/10.1007/978-1-4471-4123-5_1


  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4122-8

  • Online ISBN: 978-1-4471-4123-5

