DOI: 10.1145/3539597.3570421
Research Article
Public Access

Towards Faithful and Consistent Explanations for Graph Neural Networks

Published: 27 February 2023

Abstract

Uncovering the rationales behind predictions of graph neural networks (GNNs) has received increasing attention in recent years. Instance-level GNN explanation aims to discover the critical input elements, such as nodes or edges, that the target GNN relies on for making a prediction. Although various algorithms have been proposed, most of them formalize this task as searching for the minimal subgraph that preserves the original prediction. An inductive bias is deep-rooted in this framework, however: multiple subgraphs can yield the same or similar outputs as the original graph. Consequently, these methods risk producing spurious explanations and fail to provide consistent ones, and applying them to explain weakly-performing GNNs further amplifies these issues. To address this problem, we theoretically examine the predictions of GNNs from a causal perspective. We identify two typical causes of spurious explanations: the confounding effect of latent variables such as distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a simple yet effective countermeasure: aligning embeddings. Concretely, to account for potential shifts in the high-dimensional embedding space, we design a distribution-aware alignment algorithm based on anchors. The new objective is easy to compute and can be incorporated into existing techniques with little or no effort. Theoretical analysis shows that it in effect optimizes a more faithful explanation objective by design, which further justifies the proposed approach.
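
To make the alignment idea concrete, below is a minimal, hypothetical sketch in PyTorch (the function name alignment_loss, the tensor shapes, and the choice of Euclidean distances to anchors are illustrative assumptions, not the paper's exact formulation). It illustrates the anchor-based intuition: locate both the original graph's embedding and the candidate explanation's embedding relative to a set of anchor embeddings drawn from the training data, and penalize mismatches between the two distance profiles so the explanation stays in the same region of the embedding space as the input it explains.

    import torch
    import torch.nn.functional as F

    def alignment_loss(h_sub: torch.Tensor, h_orig: torch.Tensor,
                       anchors: torch.Tensor) -> torch.Tensor:
        """Anchor-based alignment term (illustrative sketch, not the paper's exact objective).

        h_sub:   GNN embedding of the candidate explanation subgraph, shape (d,)
        h_orig:  GNN embedding of the original input graph, shape (d,)
        anchors: embeddings of reference graphs from the training set, shape (k, d)
        """
        # Distance of each embedding to every anchor; the anchors situate both
        # embeddings relative to the training distribution ("distribution-aware").
        d_sub = torch.cdist(h_sub.unsqueeze(0), anchors).squeeze(0)    # (k,)
        d_orig = torch.cdist(h_orig.unsqueeze(0), anchors).squeeze(0)  # (k,)
        # Penalize mismatch between the two distance profiles.
        return F.mse_loss(d_sub, d_orig)

    # Illustrative usage: add the term to an existing explainer's objective,
    # e.g. a GNNExplainer-style edge-mask optimization (lambda_align is a
    # hypothetical trade-off weight):
    #   loss = prediction_preserving_loss + lambda_align * alignment_loss(h_sub, h_orig, anchors)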



    Published In

    WSDM '23: Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining
    February 2023
    1345 pages
    ISBN:9781450394079
    DOI:10.1145/3539597
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 27 February 2023

    Author Tags

    1. explainability
    2. graph neural networks

    Qualifiers

    • Research-article

    Conference

    WSDM '23

    Acceptance Rates

    Overall Acceptance Rate 498 of 2,863 submissions, 17%

    Cited By

    • (2025) Can Graph Neural Networks be Adequately Explained? A Survey. ACM Computing Surveys 57(5), 1-36. DOI: 10.1145/3711122. Online publication date: 24-Jan-2025
    • (2025) Counterfactual Learning on Graphs: A Survey. Machine Intelligence Research 22(1), 17-59. DOI: 10.1007/s11633-024-1519-z. Online publication date: 24-Jan-2025
    • (2024) EiG-Search. Proceedings of the 41st International Conference on Machine Learning, 33069-33088. DOI: 10.5555/3692070.3693412. Online publication date: 21-Jul-2024
    • (2024) Interpretable Imitation Learning with Dynamic Causal Relations. Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 967-975. DOI: 10.1145/3616855.3635827. Online publication date: 4-Mar-2024
    • (2024) Disambiguated Node Classification with Graph Neural Networks. Proceedings of the ACM Web Conference 2024, 914-923. DOI: 10.1145/3589334.3645637. Online publication date: 13-May-2024
    • (2024) Towards Inductive and Efficient Explanations for Graph Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 46(8), 5245-5259. DOI: 10.1109/TPAMI.2024.3362584. Online publication date: Aug-2024
    • (2024) Towards explaining graph neural networks via preserving prediction ranking and structural dependency. Information Processing and Management 61(2). DOI: 10.1016/j.ipm.2023.103571. Online publication date: 12-Apr-2024
    • (2023) T-SaS: Toward Shift-aware Dynamic Adaptation for Streaming Data. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 4244-4248. DOI: 10.1145/3583780.3615267. Online publication date: 21-Oct-2023
    • (2023) Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 3513-3524. DOI: 10.1145/3580305.3599506. Online publication date: 6-Aug-2023
