research-article
DOI: 10.1145/3639592.3639625

Exploration of Explainable AI for Trust Development on Human-AI Interaction

Published: 13 April 2024

Abstract

In recent years, Artificial Intelligence (AI) has radically transformed how we perform everyday tasks, reshaping many aspects of society in the process. As our reliance on AI systems grows, however, the need for calibrated trust becomes increasingly pressing. Explainable AI (XAI) was introduced to address this concern by providing human-level explanations, with the primary goal of offering the cognitive information that prompts informed trust decisions. Yet trust is a multidimensional construct, processed through more than explanations alone. To understand these dimensions within the context of XAI, this research aims to uncover the additional facets of trust at work in human-AI interaction. An exploratory survey confirmed that XAI serves a vital purpose in facilitating trust and that trust can also be processed through affective means. Furthermore, the presentation of information, beyond the depth of the explanation itself, was found to play a significant role in moderating trust formation during human-AI interactions.



Published In

AICCC '23: Proceedings of the 2023 6th Artificial Intelligence and Cloud Computing Conference
December 2023
280 pages
ISBN:9798400716225
DOI:10.1145/3639592

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Affective Design
  2. Explainable AI
  3. HAI
  4. Human-AI Interaction
  5. XAI
  6. Trust

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

AICCC 2023
