Extended Abstract
DOI: 10.1145/3640544.3645239

Human-Centered Evaluation of Explanations in AI-Assisted Decision-Making

Published: 5 April 2024

ABSTRACT

AI explanations are increasingly used to help people better utilize AI recommendations in AI-assisted decision-making. While numerous technical transparency approaches have been established, a human-centered perspective is needed to understand how human decision makers use and process AI explanations. In my thesis, I start with an empirical exploration of how AI explanations shape the way people understand and utilize AI decision aids. Next, I turn to the time-evolving nature of AI explanations, examining how changes in explanations caused by AI model updates affect human decision makers' perception and usage of AI models. Lastly, I construct computational human behavior models to gain a more quantitative understanding of human decision makers' cognitive interactions with AI explanations. I conclude with future work on carefully identifying user needs for explainable AI in an era when AI models are becoming more complex and human-AI collaboration scenarios are increasingly diversified.
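
The abstract does not specify the form of these computational behavior models, but a minimal sketch of what such a model can look like is a logistic regression that predicts whether a decision maker accepts an AI recommendation from per-trial signals. Everything below, including the feature names and the synthetic data, is an illustrative assumption, not the thesis's actual model or data.

```python
# Illustrative sketch only: a toy "behavior model" that predicts whether a
# decision maker accepts an AI recommendation. Every feature name and all
# data here are synthetic assumptions, not the thesis's actual model/data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500  # number of simulated decision trials

# Hypothetical per-trial signals a user study might log:
ai_confidence = rng.uniform(0.5, 1.0, n)   # confidence the AI reports
expl_agreement = rng.uniform(0.0, 1.0, n)  # how well the explanation matches the user's own reasoning
initial_match = rng.integers(0, 2, n)      # 1 if the user's initial judgment agreed with the AI

X = np.column_stack([ai_confidence, expl_agreement, initial_match])

# Synthetic ground truth: acceptance becomes more likely with higher stated
# confidence, more agreeable explanations, and prior human-AI agreement.
logit = -2.0 + 2.5 * ai_confidence + 1.5 * expl_agreement + 1.0 * initial_match
accepted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Fit the behavior model; coefficients act as per-signal reliance weights.
model = LogisticRegression().fit(X, accepted)
print("reliance weights:", model.coef_[0])
print("baseline (intercept):", model.intercept_[0])
```

Read this way, the fitted coefficients quantify how strongly each signal sways acceptance of the AI's recommendation, which illustrates the kind of quantitative account of human reliance such models aim for.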


Published in

IUI '24 Companion: Companion Proceedings of the 29th International Conference on Intelligent User Interfaces
March 2024, 182 pages
ISBN: 9798400705090
DOI: 10.1145/3640544
Copyright © 2024 Owner/Author
Publisher: Association for Computing Machinery, New York, NY, United States
