DOI: 10.1145/3678299.3678306

Research article · Open access

Comparing Paid and Volunteer Participants in Emotional Responses to Sound

Published: 18 September 2024

Abstract

The hedonic and affective testing of sound effects lacks established methodologies and empirical evidence on best practices for running experiments. We used a paired-comparison design contrasting raw (untreated) human voice emotes with three “robotized” versions of each voice file. We recruited volunteers through social media and compared their results with those of participants from a paid recruitment service, Prolific. Our results showed some statistically significant differences between the responses of the two groups, indicating that researchers should use caution when combining data from different sources of participants.
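The group comparison described above can be illustrated with a small sketch. This is not the authors' analysis; it is a hypothetical example, assuming each participant makes a binary choice in a paired comparison (raw vs. robotized voice), of how one might test whether the proportion preferring the raw voice differs between the volunteer and paid (Prolific) pools, using a two-proportion z-test with invented counts.

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: do two participant pools differ in how
    often they prefer one sound over the other in a paired comparison?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no group difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts (not from the paper): participants in each pool
# who preferred the raw voice over a robotized version.
z, p = two_proportion_z(success_a=70, n_a=100, success_b=55, n_b=100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these invented numbers the test would flag a significant difference between pools; the paper's point is that such differences argue against naively pooling volunteer and paid-participant data.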



Published In

AM '24: Proceedings of the 19th International Audio Mostly Conference: Explorations in Sonic Cultures
September 2024
565 pages
ISBN:9798400709685
DOI:10.1145/3678299
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. connotation
  2. emotion
  3. methodology
  4. methods
  5. remote
  6. user experience
  7. user testing
  8. volunteers

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Social Sciences and Humanities Research Council of Canada

Conference

AM '24

Acceptance Rates

Overall Acceptance Rate 177 of 275 submissions, 64%

