
A survey of computer systems for expressive music performance

Published: 14 December 2009

Abstract

We present a survey of research into automated and semiautomated computer systems for the expressive performance of music. We examine the motivation for such systems and then review the majority of those developed over the past 25 years. To highlight possible directions for future research, the survey is organized around four primary terms of reference: testing status, expressive representation, polyphonic ability, and performance creativity.



    Published In

    ACM Computing Surveys, Volume 42, Issue 1 (December 2009), 162 pages
    ISSN: 0360-0300
    EISSN: 1557-7341
    DOI: 10.1145/1592451

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Publication History

    Received: 01 August 2007
    Revised: 01 February 2008
    Accepted: 01 September 2008
    Published: 14 December 2009, in ACM Computing Surveys (CSUR) Volume 42, Issue 1


    Author Tags

    1. music performance
    2. computer music
    3. generative performance
    4. machine learning



    Cited By

    • (2024) Comparative Evaluation in the Wild: Systems for the Expressive Rendering of Music. IEEE Transactions on Artificial Intelligence 5, 10 (Oct. 2024), 5290--5303. DOI: 10.1109/TAI.2024.3408717
    • (2023) Algorithmic (In)Tolerance: Experimenting with Beethoven's Music on Social Media Platforms. Transactions of the International Society for Music Information Retrieval 6, 1 (Jan. 2023), 1--12. DOI: 10.5334/tismir.148
    • (2023) Research in Computational Expressive Music Performance and Popular Music Production: A Potential Field of Application? Multimodal Technologies and Interaction 7, 2 (Jan. 2023), 15. DOI: 10.3390/mti7020015
    • (2023) Probing the underlying principles of dynamics in piano performances using a modelling approach. Frontiers in Psychology 14 (Dec. 2023). DOI: 10.3389/fpsyg.2023.1269715
    • (2023) Human-Centred Artificial Intelligence in Sound Perception and Music Composition. In Intelligent Systems Design and Applications (May 2023), 217--229. DOI: 10.1007/978-3-031-27440-4_21
    • (2022) EmotionBox: A music-element-driven emotional music generation system based on music psychology. Frontiers in Psychology 13 (Aug. 2022). DOI: 10.3389/fpsyg.2022.841926
    • (2022) Content based User Preference Modeling in Music Generation. In Proceedings of the 30th ACM International Conference on Multimedia (Oct. 2022), 2473--2482. DOI: 10.1145/3503161.3548169
    • (2021) Algorithmic Music for Therapy: Effectiveness and Perspectives. Applied Sciences 11, 19 (Sep. 2021), 8833. DOI: 10.3390/app11198833
    • (2021) Tool for a real-time automatic assessment of vocal proficiency. Journal of Music, Technology & Education 14, 1 (Apr. 2021), 69--91. DOI: 10.1386/jmte_00034_1
    • (2021) A DT-Neural Parametric Violin Synthesizer. In Proceedings of the 2021 International Conference on Electrical Engineering and Informatics (ICEEI) (Oct. 2021), 1--6. DOI: 10.1109/ICEEI52609.2021.9611115
