DOI: 10.1145/3580305.3599182

AI Explainability 360 Toolkit for Time-Series and Industrial Use Cases

Published: 04 August 2023

ABSTRACT

With the growing adoption of AI, trust and explainability have become critical, attracting substantial research attention over the past decade and leading to the development of many popular AI explainability libraries such as AIX360, Alibi, and OmniXAI. Despite this, applying explainability techniques in practice often poses challenges such as a lack of consistency between explainers, semantically incorrect explanations, or poor scalability. Furthermore, one of the key modalities that has been less explored, from both the algorithmic and the practical point of view, is time series. Several application domains involve time series, including Industry 4.0, asset monitoring, supply chain, and finance, to name a few.

The AIX360 library (https://github.com/Trusted-AI/AIX360) has been incubated as a Linux Foundation AI & Data open-source project and has gained significant popularity: its public GitHub repository has over 1.3K stars, and it has been broadly adopted in academic and applied settings. Motivated by industrial applications, large-scale client projects, and deployments in software products in the areas of IoT, asset management, and supply chain, the AIX360 library has recently been expanded significantly to address the challenges above. AIX360 now supports the time-series modality, introducing time-series explainers such as TS-LIME, the TS-Saliency explainer, TS-ICE, and TS-SHAP. It also introduces improvements for generating model-agnostic, consistent, diverse, and scalable explanations, as well as new algorithms for tabular data.
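
To make the time-series additions concrete, the following is a minimal, self-contained sketch of the perturbation-and-surrogate idea behind a LIME-style time-series explainer such as TS-LIME: perturb a recent window of the series, query the black-box forecaster on the perturbed copies, and fit a locally weighted linear surrogate whose coefficients act as per-timestep importance scores. This sketch does not use the AIX360 API; the forecaster, function names, and parameters (simple_forecaster, explain_window, n_samples, noise) are hypothetical stand-ins.

# A LIME-style sketch for a univariate time-series forecaster: perturb the input
# window, query the black box, and fit a locally weighted linear surrogate.
# NOT the AIX360 API; all names below are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge

def simple_forecaster(window):
    """Stand-in black-box model: forecasts the next value as a weighted mean
    of the window, with more recent timesteps weighted higher."""
    weights = np.linspace(0.1, 1.0, num=window.shape[-1])
    return float(np.dot(window, weights) / weights.sum())

def explain_window(window, n_samples=500, noise=0.1, seed=0):
    """Per-timestep importance scores for the forecast on `window`, from a
    Ridge surrogate fit on noise-perturbed copies weighted by proximity."""
    rng = np.random.default_rng(seed)
    perturbations = rng.normal(0.0, noise, size=(n_samples, window.size))
    samples = window[None, :] + perturbations
    preds = np.array([simple_forecaster(s) for s in samples])
    distances = np.linalg.norm(perturbations, axis=1)
    proximity = np.exp(-(distances ** 2) / (2.0 * noise * window.size))
    surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=proximity)
    return surrogate.coef_  # one importance score per timestep in the window

history = np.sin(np.linspace(0.0, 6.0, 24))          # toy series window
importances = explain_window(history)
print("most influential timesteps:", np.argsort(np.abs(importances))[-3:])

In a real deployment one would replace simple_forecaster with the deployed model's predict function and rely on the library's explainers rather than a hand-rolled surrogate like this.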

In this hands-on tutorial, we provide an overview of the library, focusing on the latest additions: time-series explainers and use cases such as forecasting, time-series anomaly detection, and classification. Hands-on demonstrations are based on industrial use cases selected to illustrate practical challenges and how they are addressed. The audience will be able to evaluate different types of explanations, with a focus on practical aspects motivated by real deployments.
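
As a hedged illustration of how one might evaluate explanations in practice, the sketch below implements a simple perturbation-based faithfulness check: masking the timesteps an explainer ranks as most important should change the model's prediction more than masking randomly chosen timesteps. It reuses the hypothetical simple_forecaster, explain_window, and history from the sketch above; faithfulness_gap and its parameters are likewise assumptions, not part of AIX360 or the tutorial material.

# Perturbation-based faithfulness check: does masking the top-attributed timesteps
# move the prediction more than masking random ones? Reuses the hypothetical
# simple_forecaster / explain_window / history defined in the sketch above.
import numpy as np

def faithfulness_gap(window, importances, k=5, seed=0):
    """Prediction shift when the k most important timesteps are replaced by the
    window mean, minus the shift for k random timesteps (larger is better)."""
    rng = np.random.default_rng(seed)
    base = simple_forecaster(window)

    def masked_prediction(indices):
        masked = window.copy()
        masked[indices] = window.mean()   # neutral replacement value
        return simple_forecaster(masked)

    top_idx = np.argsort(np.abs(importances))[-k:]
    rand_idx = rng.choice(window.size, size=k, replace=False)
    return abs(base - masked_prediction(top_idx)) - abs(base - masked_prediction(rand_idx))

print(f"faithfulness gap: {faithfulness_gap(history, explain_window(history)):+.4f}")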


Published in
KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2023, 5996 pages
ISBN: 9798400701030
DOI: 10.1145/3580305

Copyright © 2023 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher
Association for Computing Machinery, New York, NY, United States

Publication History
• Published: 4 August 2023

Qualifiers
• abstract

Acceptance Rates
Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%
