DOI: 10.1145/3351095.3375664

Explainable AI in industry: practical challenges and lessons learned: implications tutorial

Published: 27 January 2020

ABSTRACT

Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI have become far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models and a demand for model transparency and interpretability [2, 4]. Model explainability is considered a prerequisite for building trust in, and driving adoption of, AI systems in high-stakes domains such as lending and healthcare [1], which require reliability, safety, and fairness. It is also critical for automated transportation and other industrial applications with significant socio-economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.

As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale [5, 6, 8]. In fact, the field of explainability in AI/ML is at an inflection point: there is tremendous need from the societal, regulatory, commercial, end-user, and model-developer perspectives, and practical, scalable explainability approaches are rapidly becoming available. The challenges for the research community include: (i) achieving consensus on the right notion of model explainability, (ii) identifying and formalizing explainability tasks from the perspectives of various stakeholders, and (iii) designing measures for evaluating explainability techniques.

In this tutorial, we will first motivate the need for model interpretability and explainability in AI [3] from various perspectives. We will then provide a brief overview of several explainability techniques and tools. The rest of the tutorial will focus on the real-world application of explainability techniques in industry. We will present case studies spanning several domains such as:

• Search and recommendation systems: Understanding search and recommendation systems, and how retrieval and ranking decisions are made in real time [7]. Example applications include explaining decisions made by an AI system for job recommendations, ranking of potential candidates for job posters, and content recommendations.

• Sales: Understanding sales predictions in terms of customer up-sell and churn.

• Fraud detection: Examining and explaining AI systems that determine whether a piece of content or an event is fraudulent.

• Lending: Understanding and interpreting lending decisions made by an AI system.

We will focus on the sociotechnical dimensions, practical challenges, and lessons learned during the development and deployment of these systems, which will be beneficial for researchers and practitioners interested in explainable AI. Finally, we will discuss open challenges and research directions for the community.
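To make the kind of post-hoc technique surveyed above concrete, the sketch below illustrates a global surrogate explanation, in the spirit of the interpretable approximation and model-distillation approaches of [5, 8]: a black-box model is approximated by a shallow, human-readable decision tree trained on the black box's own predictions, and the surrogate's fidelity to the black box is measured on held-out data. This is a minimal illustration, not code from the tutorial; the dataset and model choices are assumptions made for the example.

```python
# Minimal sketch (illustrative, not from the tutorial): a global surrogate
# explanation in the spirit of [5, 8]. Dataset and model choices are
# assumptions made for this example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# 1. Train the "black box" model we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# 2. Distill it into an interpretable surrogate: a shallow decision tree
#    fit on the black box's predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how often the surrogate agrees with the black box on
#    held-out data (a common sanity check for surrogate explanations).
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# 4. The surrogate's rules serve as a global, human-readable explanation.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

High fidelity means the printed rules are a faithful global summary of the black box's behavior; low fidelity is itself informative, signaling that no simple rule set captures the model and that local explanation methods may be more appropriate.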

References

  1. Ahmad, M. A.; Eckert, C.; and Teredesai, A. 2018. Interpretable machine learning in healthcare. In ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics.
  2. Bird, S.; Hutchinson, B.; Kenthapadi, K.; Kiciman, E.; and Mitchell, M. 2019. Fairness-aware machine learning: Practical challenges and lessons learned. In KDD Tutorial.
  3. Došilović, F. K.; Brčić, M.; and Hlupić, N. 2018. Explainable artificial intelligence: A survey. In IEEE International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO).
  4. Gunning, D. 2017. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
  5. Lakkaraju, H.; Kamar, E.; Caruana, R.; and Leskovec, J. 2017. Interpretable & explorable approximations of black box models. In Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT-ML).
  6. Lipton, Z. C. 2018. The mythos of model interpretability. Communications of the ACM 61(10).
  7. Qiu, D., and Qian, Y. 2019. Relevance debugging and explaining at LinkedIn. In OpML.
  8. Tan, S.; Caruana, R.; Hooker, G.; Koch, P.; and Gordo, A. 2018. Learning global additive explanations for neural nets using model distillation. arXiv preprint arXiv:1801.08640.

Published in

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
January 2020, 895 pages
ISBN: 9781450369367
DOI: 10.1145/3351095

Copyright © 2020 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States
