Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI

  • Conference paper
  • In: Explainable Artificial Intelligence (xAI 2024)

Abstract

We introduce a novel metric for measuring semantic continuity in Explainable AI methods and machine learning models. We posit that for models to be truly interpretable and trustworthy, similar inputs should yield similar explanations, reflecting a consistent semantic understanding. Leveraging XAI techniques, we assess semantic continuity in the task of image recognition and conduct experiments to observe how incremental changes in input affect the explanations provided by different XAI methods. Through this approach, we aim to evaluate the models’ capability to generalize and abstract semantic concepts accurately, and to assess how faithfully different XAI methods capture model behaviour. This paper contributes to the broader discourse on AI interpretability by proposing a quantitative measure of semantic continuity for XAI methods, offering insights into the internal reasoning of models and explainers, and promoting more reliable and transparent AI systems.
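To make the proposed check concrete, the sketch below illustrates one way such a continuity measure could be computed: explanations are generated for a sequence of incrementally perturbed inputs, and consecutive explanations are compared with Spearman rank correlation. This is a minimal illustration under assumed names (`continuity_curve`, `explain`, and the toy explainer are hypothetical), not the paper's actual implementation; in practice `explain` would wrap a saliency-style XAI method such as LIME, RISE, or Grad-CAM applied to a trained model.

```python
# Minimal illustrative sketch, not the paper's implementation.
import numpy as np
from scipy.stats import spearmanr


def continuity_curve(inputs, explain):
    """Similarity between explanations of consecutive inputs.

    `inputs` is a sequence of incrementally changing inputs;
    `explain` maps an input to an attribution map of the same shape
    (a hypothetical hook for any saliency-style XAI method).
    """
    maps = [np.asarray(explain(x)).ravel() for x in inputs]
    curve = []
    for a, b in zip(maps, maps[1:]):
        rho, _ = spearmanr(a, b)  # rank correlation of attribution values
        curve.append(float(rho))
    return curve


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((32, 32))
    # Stand-in for incremental semantic changes to a single image.
    inputs = [base + 0.02 * i * rng.standard_normal((32, 32))
              for i in range(8)]
    # Toy explainer, used only to make the sketch self-contained.
    explain = lambda x: x - x.mean()
    print(continuity_curve(inputs, explain))
```

Under this reading, a curve that stays close to 1 indicates explanations that evolve smoothly with the input, while sharp drops point to discontinuities in the model, the explainer, or both.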



Author information

Corresponding author

Correspondence to Niki van Stein.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Huang, Q. et al. (2024). Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2153. Springer, Cham. https://doi.org/10.1007/978-3-031-63787-2_16

  • DOI: https://doi.org/10.1007/978-3-031-63787-2_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63786-5

  • Online ISBN: 978-3-031-63787-2

  • eBook Packages: Computer Science, Computer Science (R0)
