DOI: 10.1145/3351095.3375664

Explainable AI in industry: practical challenges and lessons learned (tutorial)

Published: 27 January 2020

Abstract

Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. With the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models, and to demand for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust in, and driving adoption of, AI systems in high-stakes domains such as lending and healthcare [1], which require reliability, safety, and fairness. It is also critical to automated transportation and other industrial applications with significant socio-economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. The field of explainability in AI/ML is at an inflection point: there is tremendous need from the societal, regulatory, commercial, end-user, and model-developer perspectives, and practical, scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains such as:
• Search and recommendation systems: Understanding how search and recommendation systems work, and how retrieval and ranking decisions are made in real time [7]. Example applications include explaining decisions made by an AI system for job recommendations, ranking of potential candidates for job posters, and content recommendations.
• Sales: Understanding sales predictions in terms of customer up-sell and churn.
• Fraud detection: Examining and explaining AI systems that determine whether a piece of content or an event is fraudulent.
• Lending: Understanding and interpreting lending decisions made by an AI system.
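To make the lending case concrete: explanations for credit decisions are often delivered as "reason codes", the features that most moved an applicant's score. The sketch below is a hypothetical illustration (the model, feature names, weights, and baseline are all invented, not the tutorial's actual system): for a linear scoring model, the score difference from a baseline applicant decomposes exactly into per-feature contributions, which can then be ranked by impact.

```python
# Hypothetical linear credit-scoring model with per-feature "reason code"
# explanations. All names and numbers here are illustrative inventions.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "credit_history_len": 0.3}
BASELINE = {"income": 0.5, "debt_ratio": 0.3, "credit_history_len": 0.5}

def score(applicant):
    """Linear score: weighted sum of normalized feature values."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Attribute the score difference from the baseline applicant to each
    feature; for a linear model this decomposition is exact (the
    contributions sum to score(applicant) - score(BASELINE))."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    # Sort by absolute impact, largest first, to surface top reason codes.
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 0.2, "debt_ratio": 0.8, "credit_history_len": 0.4}
print(f"score delta vs. baseline: {score(applicant) - score(BASELINE):+.2f}")
for feature, contrib in explain(applicant):
    print(f"{feature:>20}: {contrib:+.2f}")
```

For this toy applicant, the high debt ratio dominates the explanation. Real deployed systems face exactly the complications the tutorial discusses: nonlinear models need approximate attribution methods (e.g., SHAP-style values), and explanations must be faithful, stable, and intelligible to the affected applicant.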
We will focus on the sociotechnical dimensions, practical challenges, and lessons learned in developing and deploying these systems, which should benefit researchers and practitioners interested in explainable AI. Finally, we will discuss open challenges and research directions for the community.

References

[1] Ahmad, M. A.; Eckert, C.; and Teredesai, A. 2018. Interpretable machine learning in healthcare. In ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics.
[2] Bird, S.; Hutchinson, B.; Kenthapadi, K.; Kiciman, E.; and Mitchell, M. 2019. Fairness-aware machine learning: Practical challenges and lessons learned. KDD Tutorial.
[3] Došilović, F. K.; Brčić, M.; and Hlupić, N. 2018. Explainable artificial intelligence: A survey. In IEEE International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[4] Gunning, D. 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
[5] Lakkaraju, H.; Kamar, E.; Caruana, R.; and Leskovec, J. 2017. Interpretable & explorable approximations of black box models. In Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT-ML).
[6] Lipton, Z. C. 2018. The mythos of model interpretability. Communications of the ACM 61(10).
[7] Qiu, D., and Qian, Y. 2019. Relevance debugging and explaining at LinkedIn. In OpML.
[8] Tan, S.; Caruana, R.; Hooker, G.; Koch, P.; and Gordo, A. 2018. Learning global additive explanations for neural nets using model distillation. arXiv preprint arXiv:1801.08640.


Published In

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
January 2020, 895 pages
ISBN: 9781450369367
DOI: 10.1145/3351095

Publisher

Association for Computing Machinery, New York, NY, United States
