ABSTRACT
Artificial Intelligence (AI) increasingly plays an integral role in shaping our day-to-day experiences. With the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models and to demands for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust in, and driving adoption of, AI systems in high-stakes domains such as lending and healthcare [1], which require reliability, safety, and fairness. It is also critical to automated transportation and to other industrial applications with significant socio-economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. In fact, the field of explainability in AI/ML is at an inflection point: there is tremendous need from the societal, regulatory, commercial, end-user, and model-developer perspectives, and practical, scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains such as:
• Search and Recommendation systems: Understanding how search and recommendation systems work, and how retrieval and ranking decisions are made in real time [7]. Example applications include explaining decisions made by an AI system for job recommendations, ranking of potential candidates for job posters, and content recommendations.
• Sales: Understanding sales predictions in terms of customer up-sell and churn.
• Fraud Detection: Examining and explaining AI systems that determine whether content or an event is fraudulent.
• Lending: How to understand and interpret lending decisions made by an AI system.
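To make the techniques surveyed above concrete, the following is a minimal sketch of one widely used model-agnostic explainability method, permutation feature importance. The data, model, and all function names are illustrative assumptions for this sketch, not artifacts from the tutorial's case studies:

```python
# Permutation feature importance: measure how much a black-box model's
# accuracy drops when each feature's values are shuffled, breaking that
# feature's relationship to the label. Larger drop = more important.
# Everything here (data, model) is a toy illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "lending-style" data: feature 0 drives the label, feature 1 is noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model_predict(X):
    """Stand-in for a trained black-box classifier."""
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def permutation_importance(predict, X, y, n_repeats=10):
    """Mean drop in accuracy when each feature is shuffled independently."""
    baseline = accuracy(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle column j in place
            drops.append(baseline - accuracy(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model_predict, X, y)
# Expect a large drop for feature 0 and roughly zero for feature 1.
```

Post-hoc, model-agnostic methods like this one need only query access to the model's predictions, which is why they recur across the search, sales, fraud, and lending settings above.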
We will focus on the sociotechnical dimensions, practical challenges, and lessons learned during the development and deployment of these systems, which should benefit researchers and practitioners interested in explainable AI. Finally, we will discuss open challenges and research directions for the community.
REFERENCES
[1] Ahmad, M. A.; Eckert, C.; and Teredesai, A. 2018. Interpretable machine learning in healthcare. In ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics.
[2] Bird, S.; Hutchinson, B.; Kenthapadi, K.; Kiciman, E.; and Mitchell, M. 2019. Fairness-aware machine learning: Practical challenges and lessons learned. In KDD Tutorial.
[3] Došilović, F. K.; Brčić, M.; and Hlupić, N. 2018. Explainable artificial intelligence: A survey. In IEEE International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[4] Gunning, D. 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
[5] Lakkaraju, H.; Kamar, E.; Caruana, R.; and Leskovec, J. 2017. Interpretable & explorable approximations of black box models. In Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT-ML).
[6] Lipton, Z. C. 2018. The mythos of model interpretability. Communications of the ACM 61(10).
[7] Qiu, D., and Qian, Y. 2019. Relevance debugging and explaining at LinkedIn. In OpML.
[8] Tan, S.; Caruana, R.; Hooker, G.; Koch, P.; and Gordo, A. 2018. Learning global additive explanations for neural nets using model distillation. arXiv preprint arXiv:1801.08640.