Graph Edits for Counterfactual Explanations: A Comparative Study

  • Conference paper
Explainable Artificial Intelligence (xAI 2024)

Abstract

Counterfactuals have been established as a popular explainability technique that leverages a set of minimal edits to alter the prediction of a classifier. When considering conceptual counterfactuals on images, the requested edits should correspond to salient concepts present in the input data. At the same time, conceptual distances are defined over knowledge graphs, ensuring the optimality of conceptual edits. In this work, we extend previous endeavors on graph edits as counterfactual explanations by conducting a comparative study encompassing both supervised and unsupervised Graph Neural Network (GNN) approaches. To this end, we pose the following significant research questions: should we represent input data as graphs, and if so, which is the optimal GNN approach, in terms of performance and time efficiency, for generating minimal and meaningful counterfactual explanations for black-box image classifiers?
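To make the notion of a minimal conceptual edit concrete, the sketch below approximates the cheapest set of substitutions, insertions, and deletions that turns one concept set into another via an optimal one-to-one assignment, in the spirit of assignment-based graph edit distance approximations. All concept names, distance values, and the `approx_edit_cost` helper are illustrative assumptions, not taken from the paper; graph edges are ignored for brevity.

```python
from itertools import permutations

# Toy concept distances, a stand-in for a WordNet-style hierarchy metric
# (names and values are illustrative, not from the paper).
CONCEPT_DIST = {("dog", "wolf"): 0.2, ("dog", "cat"): 0.4, ("cat", "wolf"): 0.5}

def concept_dist(a, b):
    """Symmetric distance between two concepts; unknown pairs cost 1.0."""
    if a == b:
        return 0.0
    return CONCEPT_DIST.get((a, b), CONCEPT_DIST.get((b, a), 1.0))

def approx_edit_cost(src, dst, indel=1.0):
    """Cheapest way to turn concept set `src` into `dst` using an
    optimal assignment over substitutions, deletions, and insertions."""
    n, m = len(src), len(dst)
    size = n + m  # pad with 'epsilon' slots for insertions/deletions
    big = float("inf")
    cost = [[0.0] * size for _ in range(size)]  # eps->eps corner stays 0
    for i in range(n):
        for j in range(size):
            if j < m:
                cost[i][j] = concept_dist(src[i], dst[j])  # substitution
            else:
                cost[i][j] = indel if j == m + i else big  # deletion
    for i in range(n, size):
        for j in range(m):
            cost[i][j] = indel if i == n + j else big      # insertion
    # Brute-force the optimal assignment (fine for tiny toy sets).
    return min(sum(cost[i][p[i]] for i in range(size))
               for p in permutations(range(size)))
```

With the toy distances above, turning `["dog", "cat"]` into `["wolf", "cat"]` costs 0.2 (substitute dog with the nearby concept wolf, keep cat), while deleting or inserting an unmatched concept costs the full `indel` penalty.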

A. Dimitriou and N. Chaidos—Contributed equally.



Acknowledgments

This research work is co-funded by the European Union's Horizon Europe Research and Innovation programme under Grant Agreement No. 101119714 (dAIry 4.0).

Author information

Correspondence to Angeliki Dimitriou or Nikolaos Chaidos.

Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Dimitriou, A., Chaidos, N., Lymperaiou, M., Stamou, G. (2024). Graph Edits for Counterfactual Explanations: A Comparative Study. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2154. Springer, Cham. https://doi.org/10.1007/978-3-031-63797-1_6

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-63797-1_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63796-4

  • Online ISBN: 978-3-031-63797-1

  • eBook Packages: Computer Science; Computer Science (R0)
