DOI: 10.1145/3652988.3673970
Research Article · Open Access

Effect of a Virtual Agent's Appearance and Voice on Uncanny Valley and Trust in Human-Agent Collaboration

Published: 26 December 2024

Abstract

Anthropomorphic agents are generally evaluated more positively and perceived as more trustworthy by human users than agents that are not humanlike. However, subtle mismatches between an agent's appearance and behavior can create a sense of uncanniness that disrupts trust during human-agent interaction. This study investigated the impact of a mismatch between an agent's appearance and voice on users' perception of the agent and on their level of trust during a collaborative decision-making task. In a 2×2 between-subjects design, participants performed an emotion recognition task while receiving recommendations from a virtual agent that had either a humanlike or a robotic appearance, paired with either a humanlike or a synthesized robotic voice (four conditions). Trust was measured both subjectively, using a questionnaire, and behaviorally, by evaluating participants' conformity to the agent's input in their final decisions. Results indicated that although the voice-appearance mismatch affected participants' perception of the agent's anthropomorphism, it did not influence their trusting behavior. We discuss these results in the context of task complexity and make recommendations for future research.



Published In

IVA '24: Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents, September 2024, 337 pages.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Anthropomorphism
    2. Appearance
    3. Human-agent collaboration
    4. Trust
    5. Uncanny Valley (UV)
    6. Virtual agents
    7. Voice

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Operational Program Zuid, the European Union, the Ministry of Economic Affairs, the Province of Noord-Brabant and the municipality of Tilburg

Conference

IVA '24: ACM International Conference on Intelligent Virtual Agents
September 16-19, 2024
Glasgow, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 53 of 196 submissions, 27%
