DOI: 10.1145/3366424.3383110
research-article

Explainable AI in Industry: Practical Challenges and Lessons Learned

Published: 20 April 2020

Abstract

Artificial intelligence plays an increasingly integral role in shaping our day-to-day experiences. With the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI have become far-reaching. The dominant role of AI models in these domains has led to growing concern about potential bias in these models, and to demands for model transparency and interpretability [11]. Model explainability is considered a prerequisite for building trust in, and adoption of, AI systems in high-stakes domains such as lending and healthcare [1], which require reliability, safety, and fairness. It is also critical for automated transportation and for other industrial applications with significant socio-economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI as a means to better trust and understand models at scale [14, 15, 25]. The challenges for the research community include: (i) defining model explainability; (ii) formulating explainability tasks for understanding model behavior, and developing solutions for these tasks; and (iii) designing measures for evaluating the performance of models on explainability tasks.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI [6] from societal, legal, enterprise, end-user, and model-developer perspectives, and present techniques and tools for providing explainability as part of AI/ML systems [13]. We will then focus on the real-world application of explainability techniques in industry, presenting the practical challenges and implications of using these techniques effectively, along with lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, hiring, lending, sales, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the WWW community.
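To give a concrete flavor of the model-agnostic techniques covered by such tools, the sketch below (our illustration, not code from the tutorial) estimates permutation feature importance in the spirit of Altmann et al. [2] using scikit-learn's permutation_importance; the dataset and model are hypothetical stand-ins.

    # Illustrative sketch only: permutation importance [2] via scikit-learn.
    # The dataset and model here are hypothetical stand-ins.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature on held-out data and record the resulting score
    # drop; larger drops mark features the model relies on more heavily.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.4f}")

Model-agnostic diagnostics of this kind are often a lightweight first step before heavier attribution methods such as SHAP [17] or Integrated Gradients [24].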

References

[1]
Muhammad Aurangzeb Ahmad, Carly Eckert, and Ankur Teredesai. 2018. Interpretable Machine Learning in Healthcare. In ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics.
[2]
André Altmann, Laura Toloşi, Oliver Sander, and Thomas Lengauer. 2010. Permutation importance: a corrected feature importance measure. Bioinformatics 26, 10 (May 2010), 1340–1347. https://doi.org/10.1093/bioinformatics/btq134
[3]
Been Kim, Rajiv Khanna, and Sanmi Koyejo. 2016. Examples are not Enough, Learn to Criticize! Criticism for Interpretability. In NeurIPS.
[4]
Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In KDD.
[5]
Anupam Datta, Shayak Sen, and Yair Zick. 2016. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In IEEE Symposium on Security and Privacy (SP).
[6]
Filip Karlo Došilović, Mario Brčić, and Nikica Hlupić. 2018. Explainable artificial intelligence: A survey. In IEEE International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
[7]
Nicholas Frosst and Geoffrey E. Hinton. 2017. Distilling a Neural Network Into a Soft Decision Tree. In International Workshop on Comprehensibility and Explanation in AI and ML.
[8]
Sahin Cem Geyik, Stuart Ambler, and Krishnaram Kenthapadi. 2019. Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search. In KDD.
[9]
Sahin Cem Geyik and Krishnaram Kenthapadi. 2018. Building Representative Talent Search at LinkedIn. LinkedIn Engineering Blog, October 2018. https://engineering.linkedin.com/blog/2018/10/building-representative-talent-search-at-linkedin
[10]
Rory Mc Grath, Luca Costabello, Chan Le Van, Paul Sweeney, Farbod Kamiab, Zhao Shen, and Freddy Lecue. 2018. Interpretable Credit Application Predictions With Counterfactual Explanations. arXiv preprint arXiv:1811.05245 (2018).
[11]
David Gunning. 2017. Explainable Artificial Intelligence (XAI). Defense Advanced Research Projects Agency (DARPA) (2017).
[12]
Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In ICML.
[13]
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Eric Horvitz. 2016. Discovering Unknown Unknowns of Predictive Models. In NeurIPS.
[14]
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. 2017. Interpretable & explorable approximations of black box models. In Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT-ML).
[15]
Zachary C Lipton. 2018. The mythos of model interpretability. Commun. ACM 61, 10 (2018).
[16]
Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. Accurate intelligible models with pairwise interactions. In KDD.
[17]
Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In NeurIPS.
[18]
Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65 (2017), 211–222.
[19]
Daniel Qiu and Yucheng Qian. 2019. Relevance Debugging and Explaining at LinkedIn. In OpML.
[20]
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Model-agnostic interpretability of machine learning. In ICML Workshop on Human Interpretability in Machine Learning (WHI).
[21]
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision model-agnostic explanations. In AAAI.
[22]
Lloyd S Shapley. 1953. A Value for n-Person Games. In Contributions to the Theory of Games II. 307–317.
[23]
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In ICML.
[24]
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. In ICML.
[25]
Sarah Tan, Rich Caruana, Giles Hooker, Paul Koch, and Albert Gordo. 2018. Learning Global Additive Explanations for Neural Nets Using Model Distillation. In NeurIPS Workshop on Machine Learning for Health (ML4H).


      Published In

WWW '20: Companion Proceedings of the Web Conference 2020
April 2020, 854 pages
ISBN: 9781450370240
DOI: 10.1145/3366424

Publisher

Association for Computing Machinery, New York, NY, United States

Conference

WWW '20: The Web Conference 2020, April 20–24, 2020, Taipei, Taiwan

Acceptance Rates

Overall acceptance rate: 1,899 of 8,196 submissions, 23%

