ABSTRACT
With the growing adoption of AI, trust and explainability have become critical concerns, attracting substantial research attention over the past decade and leading to the development of popular AI explainability libraries such as AIX360, Alibi, and OmniXAI. Despite this progress, applying explainability techniques in practice often poses challenges such as inconsistency between explainers, semantically incorrect explanations, and limited scalability. Furthermore, time series is one of the key modalities that remains under-explored, both from the algorithmic and the practical point of view, even though many application domains involve time series, including Industry 4.0, asset monitoring, supply chain, and finance, to name a few.
The AIX360 library (https://github.com/Trusted-AI/AIX360) has been incubated by the Linux Foundation AI & Data open-source projects and has gained significant popularity: its public GitHub repository has over 1.3K stars, and the library has been broadly adopted in academic and applied settings. Motivated by industrial applications, large-scale client projects, and deployments in software products in areas such as IoT, asset management, and supply chain, AIX360 has recently been expanded significantly to address the above challenges. It now supports the time-series modality, introducing time-series explainers such as TS-LIME, TS Saliency, TS-ICE, and TS-SHAP. It also improves the generation of model-agnostic, consistent, diverse, and scalable explanations, and adds new algorithms for tabular data.
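To give a flavor of what a TS-LIME-style explanation computes, the following is a minimal, self-contained sketch of the underlying idea: perturb contiguous segments of an input window, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients attribute the prediction to segments. This is an illustration of the technique only, not the AIX360 API; the names `forecaster` and `ts_lime` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box forecaster: predicts the next value from the recent past.
# (Hypothetical stand-in for any model a TS-LIME-style explainer would probe.)
def forecaster(x):
    return 0.7 * x[..., -1] + 0.3 * x[..., -5:].mean(axis=-1)

# Minimal TS-LIME-style local surrogate (a sketch, not the AIX360 API):
# perturb contiguous segments, query the model, and fit a proximity-weighted
# linear model over the segment on/off masks.
def ts_lime(x, model, n_segments=8, n_samples=500):
    segments = np.array_split(np.arange(len(x)), n_segments)
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0] = 1                              # keep the unperturbed instance
    X = np.tile(x, (n_samples, 1))
    for i, m in enumerate(masks):
        for j, idx in enumerate(segments):
            if m[j] == 0:                     # "switch off" -> replace by mean
                X[i, idx] = x.mean()
    y = model(X)
    # exponential kernel: samples closer to the original weigh more
    w = np.sqrt(np.exp(-((1 - masks.mean(axis=1)) ** 2) / 0.25))
    A = np.hstack([masks, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coef[:-1]                          # per-segment attribution

x = np.sin(np.linspace(0.0, 6.0, 48)) + 0.1 * rng.standard_normal(48)
importance = ts_lime(x, forecaster)
# only the final segment feeds this forecaster, so it should dominate
print(int(np.argmax(np.abs(importance))))     # -> 7
```

Because the toy forecaster only reads the last few observations, the surrogate assigns essentially all attribution to the final segment; with a real model, the segment weights surface which parts of the window drive the forecast.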
In this hands-on tutorial, we provide an overview of the library with a focus on its latest additions: time-series explainers and use cases such as forecasting, time-series anomaly detection, and classification. Hands-on demonstrations are based on industrial use cases selected to illustrate practical challenges and how they are addressed. The audience will learn to evaluate different types of explanations, with a focus on practical aspects motivated by real deployments.
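For the forecasting use cases, an ICE-style analysis (in the spirit of TS-ICE) can be sketched by holding one input window fixed, sweeping a single time step over a grid of values, and recording how the forecast responds. Again this is illustrative only, not the library's API; `forecaster` and `ice_curve` are hypothetical names.

```python
import numpy as np

# Toy forecaster over a window of recent observations (hypothetical model).
def forecaster(x):
    return 0.5 * x[..., -1] + 0.25 * x[..., -2] + 0.25 * x[..., :-2].mean(axis=-1)

# ICE-style curve: fix one instance, sweep a single time step over a grid,
# and record the model's prediction at each grid value.
def ice_curve(x, model, step, grid):
    X = np.tile(x, (len(grid), 1))
    X[:, step] = grid
    return model(X)

x = np.linspace(0.0, 1.0, 10)            # one fixed input window
grid = np.linspace(-1.0, 1.0, 5)         # values to sweep the last step over
curve = ice_curve(x, forecaster, step=-1, grid=grid)
# this model is linear, so the ICE curve has constant slope 0.5
print(np.round(np.diff(curve) / np.diff(grid), 3))
```

For a nonlinear model, the shape of the curve (rather than a constant slope) reveals how sensitively the forecast depends on that time step, which is what the tutorial's ICE-style demonstrations visualize.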
Index Terms
- AI Explainability 360 Toolkit for Time-Series and Industrial Use Cases