ABSTRACT
AI explanations have been increasingly used to help people better utilize AI recommendations in AI-assisted decision making. While numerous technical transparency approaches have been established, a human-centered perspective is needed to understand how human decision makers use and process AI explanations. In my thesis, I start with an empirical exploration of how AI explanations shape the way people understand and utilize AI decision aids. Next, I move to the time-evolving nature of AI explanations, exploring how explanation changes caused by AI model updates affect human decision makers' perception and usage of AI models. Lastly, I construct computational models of human behavior to gain a more quantitative understanding of human decision makers' cognitive interactions with AI explanations. I conclude with future work on carefully identifying user needs for explainable AI in an era when AI models are becoming more complex and human-AI collaboration scenarios are increasingly diversified.