DOI: 10.1145/3613905.3650825
Work in Progress

Practising Appropriate Trust in Human-Centred AI Design

Published: 11 May 2024

Abstract

Appropriate trust in Artificial Intelligence (AI) systems, that is, trust which aligns with system trustworthiness, has become an important area of research. However, there remains debate in the community about how to design for appropriate trust. This debate stems from the complex nature of trust in AI, which is difficult to understand and evaluate, as well as from the lack of holistic approaches to trust. In this paper, we aim to clarify some of this debate by operationalising appropriate trust within the context of the Human-Centred AI Design (HCD) process. To do so, we organised three workshops with a total of 13 participants from design and development backgrounds, carrying out design activities to stimulate discussion on appropriate trust in the HCD process. This paper aims to help researchers and practitioners understand appropriate trust in AI through a design lens by illustrating how it interacts with the HCD process.

Supplemental Material

MP4 File: Video Preview (with transcript)


Cited By

  • (2024) A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal on Responsible Computing 1, 4 (2024), 1–45. https://doi.org/10.1145/3696449. Online publication date: 21 Sep 2024.

    Published In

    CHI EA '24: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems
    May 2024
    4761 pages
    ISBN:9798400703317
    DOI:10.1145/3613905

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. AI design
    2. appropriate trust
    3. human-centered design

    Qualifiers

    • Work in progress
    • Research
    • Refereed limited

    Conference

    CHI '24

    Acceptance Rates

    Overall Acceptance Rate 6,164 of 23,696 submissions, 26%


    Article Metrics

• Downloads (last 12 months): 594
    • Downloads (last 6 weeks): 57
    Reflects downloads up to 20 Feb 2025.
