
Towards Quantification of Explainability Algorithms

Published: 28 March 2022

Abstract

Despite the proliferation of use cases for Deep Neural Networks and their escalating accuracy, the fact that such networks are black-box models hampers our trust in them. Explainable Artificial Intelligence (XAI) algorithms work on building trust in such models by attempting to interpret the basis of their predictions. However, because little work has focused on developing quantitative approaches for such explanations, the explanations remain subjective in nature. Hence, current XAI models require endorsement from a domain expert to justify the explainability. In this paper, we propose an approach for obtaining quantitative explanations, generated with the LIME XAI algorithm, of three CNN architectures (namely InceptionV3, InceptionResNetV2 and NASNetLarge) by proposing four general desiderata: Consistency, Efficiency, Integrity and Preciseness. The obtained experimental results offer enough information to form a quantitative relationship between the explanations of the three CNN models.
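For context, the sketch below shows how a single LIME explanation for an ImageNet-pretrained InceptionV3 prediction can be produced, assuming TensorFlow/Keras and the `lime` Python package; the image path and sampling budget are illustrative assumptions, not the paper's settings. The paper's desiderata would then be computed over explanations of this kind for each of the three architectures.

```python
# Minimal sketch: one LIME explanation for an InceptionV3 prediction.
# "example.jpg" and num_samples=1000 are illustrative placeholders.
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.applications.InceptionV3(weights="imagenet")

def predict_fn(images):
    # LIME passes batches of perturbed images; preprocess and classify them.
    x = tf.keras.applications.inception_v3.preprocess_input(np.array(images))
    return model.predict(x)

# Load the input image at InceptionV3's expected resolution.
img = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(299, 299))
img = np.array(img).astype("double")

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img, predict_fn, top_labels=3, hide_color=0, num_samples=1000)

# Superpixel mask for the top predicted class; the highlighted regions
# constitute the explanation that would subsequently be quantified.
image, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
```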

Supplementary Material

p31-rokade-supplement (p31-rokade-supplement.pptx)
Presentation slides

References

[1]
David Alvarez-Melis and Tommi S. Jaakkola. 2018. Towards Robust Interpretability with Self-Explaining Neural Networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (Montréal, Canada) (NIPS’18). Curran Associates Inc., Red Hook, NY, USA, 7786–7795.
[2]
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. 248–255. https://doi.org/10.1109/CVPR.2009.5206848
[3]
A. Dhurandhar, Vijay Iyengar, Ronny Luss, and Karthikeyan Shanmugam. 2017. TIP: Typifying the Interpretability of Procedures. arXiv:1706.02952 (2017).
[4]
Sorelle A. Friedler, Chitradeep Dutta Roy, Carlos Scheidegger, and Dylan Slack. 2019. Assessing the Local Interpretability of Machine Learning Models. arXiv:1902.03501 (2019).
[5]
Andreas Holzinger, André Carrington, and Heimo Müller. 2020. Measuring the Quality of Explanations: The System Causability Scale (SCS). KI - Künstliche Intelligenz 34, 2 (01 Jun 2020), 193–198. https://doi.org/10.1007/s13218-020-00636-z
[6]
S. R. Islam, W. Eberle, and S. Ghafoor. 2020. Towards Quantification of Explainability in Explainable Artificial Intelligence Methods. arXiv:1911.10104 (2020).
[7]
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2018. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). arXiv:1711.11279 [stat.ML].
[8]
Scott M. Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS’17). Curran Associates Inc., Red Hook, NY, USA, 4768–4777.
[9]
W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. 2019. Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences 116, 44 (2019), 22071–22080. https://doi.org/10.1073/pnas.1900654116
[10]
Marco Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations. Association for Computational Linguistics, San Diego, California. https://doi.org/10.18653/v1/N16-3020
[11]
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In 2017 IEEE International Conference on Computer Vision (ICCV). 618–626. https://doi.org/10.1109/ICCV.2017.74
[12]
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning Important Features through Propagating Activation Differences. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (Sydney, NSW, Australia) (ICML’17). JMLR.org, 3145–3153.
[13]
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (Sydney, NSW, Australia) (ICML’17). JMLR.org, 3319–3328.
[14]
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A. Alemi. 2017. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (San Francisco, California, USA) (AAAI’17). AAAI Press, 4278–4284.
[15]
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, Los Alamitos, CA, USA, 2818–2826. https://doi.org/10.1109/CVPR.2016.308
[16]
Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2018. Learning Transferable Architectures for Scalable Image Recognition. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 8697–8710. https://doi.org/10.1109/CVPR.2018.00907

Published In

ICAAI '21: Proceedings of the 5th International Conference on Advances in Artificial Intelligence
November 2021
199 pages
ISBN:9781450390699
DOI:10.1145/3505711

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. CNN
  2. Explainability
  3. LIME
  4. Quantification
  5. XAI

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICAAI 2021
