
Generating Local Textual Explanations for CNNs: A Semantic Approach Based on Knowledge Graphs

  • Conference paper
  • First Online:
AIxIA 2021 – Advances in Artificial Intelligence (AIxIA 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13196)

Abstract

Explainable Artificial Intelligence (XAI) has recently become an active research field due to the need for transparency and accountability when deploying AI models for high-stakes decision making. In Computer Vision, although state-of-the-art Convolutional Neural Networks (CNNs) have achieved great performance, understanding their decision processes, especially when a mistake occurs, remains a known challenge. Current XAI methods for explaining CNNs mostly rely on visually highlighting the parts of the image that contributed most to the outcome. Although helpful, such visual clues do not provide a deeper understanding of the neural representation and need to be interpreted by humans. This limits scalability and may add bias to the explainability process, in particular when the outcome is not the expected one. In this paper, we propose a method that provides textual explanations for CNNs in image classification tasks. The explanations generated by our approach can be easily understood by humans, which makes our method more scalable and less dependent on human interpretation. In addition, our approach offers the opportunity to link neural representations with knowledge. In the proposed approach we extend our notion of co-activation graph to include input data, and we use this graph to connect neural representations from trained CNNs with external knowledge. We then use link prediction algorithms to predict semantic attributes of unseen input data, and we use the results of these predictions to generate factual and counterfactual textual explanations of classification mistakes. Preliminary results show that, when the link prediction accuracy is high, our method can generate good factual and counterfactual textual explanations that do not need human interpretation. Although a more extensive evaluation is still ongoing, this indicates the potential of our approach in combining neural representations and knowledge graphs to generate explanations for mistakes in semantic terms.
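The following is a minimal illustrative sketch, in Python with NetworkX, of the pipeline outlined in the abstract; it is not the authors' implementation. It assumes per-filter activations have already been extracted from a trained CNN, uses a toy class-to-attribute knowledge graph, and substitutes personalised PageRank from the image node for the paper's link prediction step. All function names, thresholds, and data structures shown here (e.g. build_coactivation_graph, top_filters) are illustrative assumptions.

import networkx as nx
import numpy as np


def build_coactivation_graph(activations_per_image, coact_threshold=0.5, top_filters=10):
    """Build a co-activation graph over CNN filters and extend it with image nodes.

    activations_per_image: dict image_id -> 1-D array of mean activations per filter.
    """
    g = nx.Graph()
    acts = np.stack(list(activations_per_image.values()))   # shape (n_images, n_filters)
    corr = np.corrcoef(acts.T)                               # filter-to-filter co-activation
    n_filters = corr.shape[0]
    for i in range(n_filters):
        for j in range(i + 1, n_filters):
            if corr[i, j] >= coact_threshold:
                g.add_edge(("filter", i), ("filter", j), weight=float(corr[i, j]))
    # Extend the graph with input data: each image is linked to its strongest filters.
    for img_id, a in activations_per_image.items():
        for f in np.argsort(a)[-top_filters:]:
            g.add_edge(("image", img_id), ("filter", int(f)), weight=float(a[f]))
    return g


def add_external_knowledge(g, class_attributes, image_labels):
    """Attach knowledge-graph edges: classes to semantic attributes, training images to classes."""
    for cls, attrs in class_attributes.items():
        for attr in attrs:
            g.add_edge(("class", cls), ("attribute", attr), weight=1.0)
    for img_id, cls in image_labels.items():
        g.add_edge(("image", img_id), ("class", cls), weight=1.0)


def predict_attributes(g, img_id, top_k=5):
    """Rank candidate semantic attributes for an unseen image.

    Personalised PageRank from the image node is used here as a simple stand-in
    for the link prediction algorithms mentioned in the abstract.
    """
    pr = nx.pagerank(g, personalization={("image", img_id): 1.0}, weight="weight")
    scores = {node[1]: score for node, score in pr.items() if node[0] == "attribute"}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


def explain_mistake(predicted_attrs, predicted_class, true_class, class_attributes):
    """Turn predicted attributes into a factual and a counterfactual sentence."""
    factual = set(predicted_attrs) & set(class_attributes[predicted_class])
    counterfactual = set(class_attributes[true_class]) - set(predicted_attrs)
    return (f"The image was classified as {predicted_class} because it appears to have "
            f"{', '.join(sorted(factual)) or 'none of its typical attributes'}. "
            f"It could have been classified as {true_class} if it also had "
            f"{', '.join(sorted(counterfactual)) or 'no additional attributes'}.")

In this sketch the unseen test image is assumed to be added to the graph through its filter activations only (no class edge), so the attributes ranked by predict_attributes can be compared with those of the predicted and the expected class to produce the factual and counterfactual sentences.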


Notes

  1. https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

  2. https://neo4j.com


Author information


Corresponding author

Correspondence to Vitor A. C. Horta.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Horta, V.A.C., Mileo, A. (2022). Generating Local Textual Explanations for CNNs: A Semantic Approach Based on Knowledge Graphs. In: Bandini, S., Gasparini, F., Mascardi, V., Palmonari, M., Vizzari, G. (eds) AIxIA 2021 – Advances in Artificial Intelligence. AIxIA 2021. Lecture Notes in Computer Science (LNAI), vol 13196. Springer, Cham. https://doi.org/10.1007/978-3-031-08421-8_37


  • DOI: https://doi.org/10.1007/978-3-031-08421-8_37

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08420-1

  • Online ISBN: 978-3-031-08421-8

  • eBook Packages: Computer Science, Computer Science (R0)
