DOI: 10.1145/3544548.3580945 · CHI Conference Proceedings · Research Article

Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence

Published: 19 April 2023

Abstract

Biases in Artificial Intelligence (AI) systems and their outputs are an important issue that demands AI explainability. Despite the prevalence of AI applications, the general public is not necessarily equipped to understand how black-box algorithms work or how to deal with their biases. To inform designs for explainable AI (XAI), we conducted in-depth interviews with major stakeholders, both end-users (n = 24) and engineers (n = 15), to investigate how they made sense of AI applications and the associated biases in high- and low-stakes situations. We discuss users' perceptions of and attributions about AI biases, as well as their desired levels and types of explainability. We found that personal relevance and boundaries, together with the level of stakes, are two major dimensions for developing user trust, especially in biased situations, and for informing XAI designs.

Supplementary Material

MP4 File (3544548.3580945-talk-video.mp4)
Pre-recorded Video Presentation


Cited By

  • (2024) Charting Competence: A Holistic Scale for Measuring Proficiency in Artificial Intelligence Literacy. Journal of Educational Computing Research 62, 7, 1675–1704. DOI: 10.1177/07356331241261206. Published online 18 July 2024.
  • (2024) When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI Systems. In Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, 1–17. DOI: 10.1145/3686038.3686066. Published online 16 September 2024.
  • (2024) Outcome First or Overview First? Optimizing Patient-Oriented Framework for Evidence-Based Healthcare Treatment Selections with XAI Tools. In Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 248–254. DOI: 10.1145/3678884.3681859. Published online 11 November 2024.
  • (2024) Exploring How Users Attribute Responsibilities Across Different Stakeholders in Human-AI Interaction. In Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 202–208. DOI: 10.1145/3678884.3681852. Published online 11 November 2024.
  • (2024) VIME: Visual Interactive Model Explorer for Identifying Capabilities and Limitations of Machine Learning Models for Sequential Decision-Making. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1–21. DOI: 10.1145/3654777.3676323. Published online 13 October 2024.
  • (2024) Towards Explainability as a Functional Requirement: A Vision to Integrate the Legal, End-User, and ML Engineer Perspectives. In Proceedings of the 2nd International Workshop on Responsible AI Engineering, 16–19. DOI: 10.1145/3643691.3648590. Published online 16 April 2024.
  • (2024) exHAR. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 1, 1–30. DOI: 10.1145/3643500. Published online 6 March 2024.
  • (2024) Towards a Non-Ideal Methodological Framework for Responsible ML. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. DOI: 10.1145/3613904.3642501. Published online 11 May 2024.
  • (2024) I lose vs. I earn: Consumer perceived price fairness toward algorithmic (vs. human) price discrimination. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. DOI: 10.1145/3613904.3642280. Published online 11 May 2024.
  • (2024) Is this AI sexist? The effects of a biased AI's anthropomorphic appearance and explainability on users' bias perceptions and trust. International Journal of Information Management 76, 102775. DOI: 10.1016/j.ijinfomgt.2024.102775. Published online June 2024.

Index Terms

  1. Contextualizing User Perceptions about Biases for Human-Centered Explainable Artificial Intelligence

    Recommendations

    Comments

    Information & Contributors

    Information

    Published In

    CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
    April 2023, 14911 pages
    ISBN: 9781450394215
    DOI: 10.1145/3544548
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. AI bias
    2. Artificial Intelligence
    3. Explainability
    4. Explainable AI (XAI)
    5. Human-Centered Computing
    6. Human-Computer Interaction (HCI)
    7. Transparency

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • National Science and Technology Council of Taiwan

    Conference

    CHI '23

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


    Article Metrics

    • Downloads (last 12 months): 800
    • Downloads (last 6 weeks): 86

    Reflects downloads up to 20 Jan 2025.

