
The Role of Individual Difference in Judging Expressiveness of Computer-Assisted Music Performances by Experts

Published: 08 December 2014

Abstract

Computational systems for generating expressive musical performances have been studied for several decades. These models are generally evaluated by comparing their predictions with actual performances, both in terms of performance parameters and from a subjective point of view, and the evaluations often focus on very specific aspects of the model. However, little is known about how listeners evaluate the generated performances and what factors influence their judgement and appreciation. In this article, we present two studies, conducted during two dedicated workshops, that begin to clarify how an audience judges entire performances produced with different approaches to generating musical expression. In the preliminary study, 40 participants completed a questionnaire in response to five different computer-generated and computer-assisted performances, rating their preference and describing the expressiveness of each performance. In the second study, conducted with the GATM (Gruppo di Analisi e Teoria Musicale), 23 participants also completed the Music Cognitive Style questionnaire. Results indicated that music systemizers tend to describe musical expression in terms of the formal aspects of the music, whereas music empathizers tend to report expressiveness in terms of emotions and characters. However, high systemizers did not differ from high empathizers in their mean preference score across the five pieces. We also found that listeners tend not to focus on the basic technical aspects of playing when judging computer-assisted and computer-generated performances. Implications of individual differences for the judgement of musical expression are discussed.
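The group comparison reported above (mean preference of high music systemizers versus high music empathizers across the five pieces) amounts to a simple two-sample comparison of means. The sketch below illustrates that analysis with Welch's t statistic; the rating values are hypothetical placeholders, not data from the study.

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical mean preference scores (1-7 scale) for the five performances,
# one value per piece, averaged within each cognitive-style group.
systemizers = [4.2, 5.1, 3.8, 4.6, 4.9]
empathizers = [4.4, 4.8, 4.1, 4.5, 5.0]

t = welch_t(systemizers, empathizers)
print(f"mean diff = {mean(systemizers) - mean(empathizers):+.2f}, t = {t:.2f}")
```

A t statistic this close to zero is consistent with the paper's finding of no group difference in mean preference; in practice one would use a dedicated routine (e.g. an unequal-variance t test in a statistics package) to obtain the p-value as well.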



Index Terms

  1. The Role of Individual Difference in Judging Expressiveness of Computer-Assisted Music Performances by Experts

    Recommendations

    Comments

    Information & Contributors

    Information

    Published In

    ACM Transactions on Applied Perception  Volume 11, Issue 4
    January 2015
    132 pages
    ISSN:1544-3558
    EISSN:1544-3965
    DOI:10.1145/2695584

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 08 December 2014
    Accepted: 01 September 2014
    Revised: 01 July 2014
    Received: 01 October 2013
    Published in TAP Volume 11, Issue 4


    Author Tags

    1. Cognitive styles
    2. human-centered computing
    3. music performance

    Qualifiers

    • Research-article
    • Research
    • Refereed

Cited By

    Cited By

    • (2023) Distinguishing between musical excerpts learned by novices individually or in pairs. Psychology of Music 52, 4 (2 Dec. 2023), 472--488. DOI: 10.1177/03057356231212408
    • (2021) Validation of the Music Empathizing inventory in China. Psychology of Music 50, 5 (12 Nov. 2021), 1443--1459. DOI: 10.1177/03057356211044218
    • (2020) Key Clarity is Blue, Relaxed, and Maluma: Machine Learning Used to Discover Cross-Modal Connections Between Sensory Items and the Music They Spontaneously Evoke. In Proceedings of the 8th International Conference on Kansei Engineering and Emotion Research (19 Aug. 2020), 214--223. DOI: 10.1007/978-981-15-7801-4_22
    • (2019) A Multilayered Approach to Automatic Music Generation and Expressive Performance. In 2019 International Workshop on Multilayer Music Representation and Processing (MMRP) (Jan. 2019), 41--48. DOI: 10.1109/MMRP.2019.00016
    • (2018) Computational Models of Expressive Music Performance: A Comprehensive and Critical Review. Frontiers in Digital Humanities 5 (24 Oct. 2018). DOI: 10.3389/fdigh.2018.00025
    • (2018) The relationship between musical training and musical empathizing and systemizing traits. Musicae Scientiae 24, 1 (12 June 2018), 113--129. DOI: 10.1177/1029864918779636
    • (2017) Algorithms can Mimic Human Piano Performance: The Deep Blues of Music. Journal of New Music Research 46, 2 (5 Jan. 2017), 175--186. DOI: 10.1080/09298215.2016.1264976
    • (2017) An evaluation of linear and non-linear models of expressive dynamics in classical piano and symphonic music. Machine Learning 106, 6 (1 June 2017), 887--909. DOI: 10.1007/s10994-017-5631-y
    • (2015) CaRo 2.0. Advances in Human-Computer Interaction 2015 (1 Jan. 2015). DOI: 10.1155/2015/850474
