DOI: 10.1145/3613905.3650749
Work in Progress

The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI

Published: 11 May 2024

Abstract

Even though people imagine different embodiments when asked which AI they would like to work with, most studies investigate trust in AI systems without a specific physical appearance. This study aims to close this gap by combining factors that influence trust and analyzing their impact on the perceived trustworthiness, warmth, and competence of an embodied AI. We recruited 68 participants who observed three co-working scenes with an embodied AI that was presented as an expert or novice (expertise), as a human or an AI (humanness), and as congruent or slightly incongruent with the environment (congruence). Our results show that the expertise condition had the largest impact on trust, acceptance, and perceived warmth and competence. When controlling for perceived competence, the humanness of the AI and the congruence of its embodiment with the environment also influenced acceptance. These results indicate that, besides expertise and the perceived competence of the AI, other design variables are relevant for successful human-AI interaction, especially when the AI is embodied.
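The claim that humanness and congruence affect acceptance "when controlling for perceived competence" implies a covariate-adjusted model. The following is a minimal illustrative sketch in R of how such a control can be expressed; the simulated data, variable names, and effect sizes are assumptions made for illustration and do not reproduce the authors' actual analysis.

# Illustrative sketch only: simulated data, variable names, and effect sizes are
# assumptions, not the paper's analysis. It shows how an acceptance rating can be
# modelled with and without perceived competence as a covariate.
set.seed(1)
n <- 68 * 3  # 68 participants x 3 co-working scenes (repeated measures ignored here)

d <- data.frame(
  expertise  = factor(sample(c("expert", "novice"), n, replace = TRUE)),
  humanness  = factor(sample(c("human", "AI"), n, replace = TRUE)),
  congruence = factor(sample(c("congruent", "incongruent"), n, replace = TRUE)),
  competence = rnorm(n, mean = 4, sd = 1)  # assumed perceived-competence rating
)
d$acceptance <- 3 +
  0.8 * (d$expertise == "expert") +
  0.3 * (d$humanness == "human") +
  0.2 * (d$congruence == "congruent") +
  0.5 * scale(d$competence)[, 1] +
  rnorm(n, sd = 0.8)

# Model without the covariate ...
m0 <- lm(acceptance ~ expertise + humanness + congruence, data = d)
# ... and with perceived competence held constant ("controlled for")
m1 <- lm(acceptance ~ expertise + humanness + congruence + competence, data = d)

summary(m1)    # humanness/congruence coefficients, adjusted for competence
anova(m0, m1)  # does adding the covariate change the model fit?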

Supplemental Material

MP4 File - Video Preview
ZIP File - Task Examples
This supplementary material shows examples of the co-working tasks as participants saw them in the questionnaire (in the original language, German). We show every framing and possible embodiment for the "Configure" task, and the factory worker AI expert for the other two tasks.



Published In

CHI EA '24: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems
May 2024
4761 pages
ISBN: 9798400703317
DOI: 10.1145/3613905
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Congruence
  2. Framing
  3. Intelligent Agents
  4. Technology Acceptance
  5. Trust
  6. Visualization

Qualifiers

  • Work in progress
  • Research
  • Refereed limited

Funding Sources

  • German Federal Ministry of Labour and Social Affairs

Conference

CHI '24

Acceptance Rates

Overall Acceptance Rate 6,164 of 23,696 submissions, 26%


