Abstract
Graph Neural Networks (GNNs) achieve outstanding performance in many graph-based tasks. As the model family becomes more and more popular, explanation techniques are desired to tackle its black-box nature. While the mainstream of existing methods studies instance-level explanations, we propose Glocal-Explainer to generate model-level explanations, which consumes local information of substructures in the input graph to pursue global explainability. Specifically, we investigate the faithfulness and generality of each explanation candidate. In the literature, fidelity and infidelity are widely used to measure faithfulness, yet the two metrics may not align with each other and have not yet been incorporated together in any explanation technique. Generality, which measures how many instances share the same explanation structure, has not yet been explored due to the computational cost of frequent subgraph mining. We introduce an adapted subgraph mining technique that measures generality as well as faithfulness during explanation candidate generation. Furthermore, we formally define the glocal explanation generation problem and map it to the classic weighted set cover problem. A greedy algorithm is employed to find the solution. Experiments on both synthetic and real-world datasets show that our method produces meaningful and trustworthy explanations with decent quantitative evaluation results.
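The greedy solution to the weighted set cover formulation mentioned in the abstract can be sketched as follows. This is a minimal illustration of the classic greedy heuristic [Chvátal, 1979], not the paper's implementation: the function name, the candidate representation (covered instance set plus a cost weight), and the cost values are all assumptions for demonstration.

```python
def greedy_weighted_set_cover(universe, candidates):
    """Greedy heuristic for weighted set cover.

    universe: set of instance ids to be explained.
    candidates: dict mapping candidate id -> (covered_set, weight),
        where covered_set is the set of instances the explanation
        candidate can explain and weight is its cost (e.g. derived
        from faithfulness scores; the weighting here is illustrative).
    Repeatedly picks the candidate with the smallest cost per newly
    covered instance until the universe is covered, or no remaining
    candidate adds coverage.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best, best_ratio = None, float("inf")
        for cid, (covered, weight) in candidates.items():
            gain = len(covered & uncovered)
            if gain == 0:
                continue
            ratio = weight / gain  # cost per newly covered instance
            if ratio < best_ratio:
                best, best_ratio = cid, ratio
        if best is None:  # remaining instances cannot be covered
            break
        chosen.append(best)
        uncovered -= candidates[best][0]
    return chosen
```

For example, with instances {1..5} and candidates A = ({1, 2, 3}, cost 3), B = ({3, 4}, cost 1), C = ({4, 5}, cost 2), the greedy pass first selects B (cost 0.5 per instance), then A, then C.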
Notes
- 1. In this work, we explicitly focus on explaining the topology structure of the input graph, since feature explanation in GNNs is analogous to that in non-graph neural networks, which has been widely studied [25].
- 2.
- 3. For ease of demonstration, assume for the moment that the set of instances one candidate can explain is known.
- 4. This caters to the plotting convention of the skyline problem, so that the dominating set is located in the upper-right corner.
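The dominance relation behind the skyline convention in note 4 can be sketched as follows (a minimal illustration in the spirit of the skyline operator [Börzsönyi et al., 2001]; the function names and the tuple representation of points are assumptions, not the paper's code):

```python
def dominates(q, p):
    """q dominates p if q is at least as large in every dimension
    and strictly larger in at least one (dominating points thus lie
    toward the upper-right corner of the plot)."""
    return all(a >= b for a, b in zip(q, p)) and any(a > b for a, b in zip(q, p))


def skyline(points):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For instance, among the 2-D points (1, 3), (2, 2), (3, 1), and (1, 1), the first three form the skyline, while (1, 1) is dominated by (1, 3).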
References
Börzsönyi, S., Kossmann, D., Stocker, K.: The skyline operator. In: ICDE, pp. 421–430. IEEE Computer Society (2001)
Byrne, R.M.: Counterfactuals in explainable artificial intelligence (XAI): evidence from human reasoning. In: IJCAI, pp. 6276–6282. ijcai.org (2019)
Chvátal, V.: A greedy heuristic for the set-covering problem. Math. Oper. Res. 4(3), 233–235 (1979)
Cohen, M., Gudes, E.: Diagonally subgraphs pattern mining. In: DMKD, pp. 51–58. ACM (2004)
Debnath, A.K., Lopez de Compadre, R.L., Debnath, G., Shusterman, A.J., Hansch, C.: Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J. Med. Chem. 34(2), 786–797 (1991)
Huan, Z., Quanming, Y., Weiwei, T.: Search to aggregate neighborhood for graph neural network. In: ICDE, pp. 552–563. IEEE (2021)
Jang, E., Gu, S., Poole, B.: Categorical reparameterization with gumbel-softmax. In: ICLR. OpenReview.net (2017)
Jiang, C., Coenen, F., Zito, M.: A survey of frequent subgraph mining algorithms. Knowl. Eng. Rev. 28(1), 75–105 (2013)
Kersting, K., Kriege, N.M., Morris, C., Mutzel, P., Neumann, M.: Benchmark data sets for graph kernels (2016). http://graphkernels.cs.tu-dortmund.de
Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: ICLR. OpenReview.net (2017)
Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: ICML, vol. 70, pp. 1885–1894. PMLR (2017)
Li, Z., et al.: Hierarchical bipartite graph neural networks: towards large-scale e-commerce applications. In: ICDE, pp. 1677–1688. IEEE (2020)
Liang, J., Bai, B., Cao, Y., Bai, K., Wang, F.: Adversarial infidelity learning for model interpretation. In: SIGKDD, ACM (2020)
Liu, B., Zhao, P., Zhuang, F., Xian, X., Liu, Y., Sheng, V.S.: Knowledge-aware hypergraph neural network for recommender systems. In: Jensen, C.S., et al. (eds.) DASFAA 2021. LNCS, vol. 12683, pp. 132–147. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73200-4_9
Lucic, A., ter Hoeve, M., Tolomei, G., de Rijke, M., Silvestri, F.: CF-GNNExplainer: counterfactual explanations for graph neural networks. arXiv preprint arXiv:2102.03322 (2021)
Luo, D., et al.: Parameterized explainer for graph neural network. In: NIPS (2020)
Numeroso, D., Bacciu, D.: Explaining deep graph networks with molecular counterfactuals. CoRR abs/2011.05134 (2020)
Pope, P.E., Kolouri, S., Rostami, M., Martin, C.E., Hoffmann, H.: Explainability methods for graph convolutional neural networks. In: CVPR. IEEE Computer Society (2019)
Saigo, H., Nowozin, S., Kadowaki, T., Kudo, T., Tsuda, K.: gBoost: a mathematical programming approach to graph classification and regression. Mach. Learn. 75(1), 69–89 (2009)
Sanchez-Lengeling, B., et al.: Evaluating attribution for graph neural networks. In: NIPS (2020)
Schlichtkrull, M.S., Cao, N.D., Titov, I.: Interpreting graph neural networks for NLP with differentiable edge masking. CoRR abs/2010.00577 (2020)
Schnake, T., et al.: Higher-order explanations of graph neural networks via relevant walks. arXiv preprint arXiv:2006.03589 (2020)
Velickovic, P., et al.: Graph attention networks. In: ICLR. OpenReview.net (2018)
Vu, M.N., Thai, M.T.: PGM-explainer: probabilistic graphical model explanations for graph neural networks. In: NIPS (2020)
Xiao-Hui, L., et al.: A survey of data-driven and knowledge-aware explainable AI. TKDE (2020)
Xu, K., Hu, W., Leskovec, J., Jegelka, S.: How powerful are graph neural networks? In: ICLR. OpenReview.net (2019)
Yan, X., Han, J.: gSpan: graph-based substructure pattern mining. In: ICDM, pp. 721–724. IEEE Computer Society (2002)
Yan, X., Han, J.: CloseGraph: mining closed frequent graph patterns. In: SIGKDD, pp. 286–295. ACM (2003)
Ying, Z., Bourgeois, D., You, J., Zitnik, M., Leskovec, J.: GNNExplainer: generating explanations for graph neural networks. In: NIPS (2019)
Yuan, H., Tang, J., Hu, X., Ji, S.: XGNN: towards model-level explanations of graph neural networks. In: SIGKDD, pp. 430–438. ACM (2020)
Yuan, H., Yu, H., Gui, S., Ji, S.: Explainability in graph neural networks: a taxonomic survey. CoRR abs/2012.15445 (2020)
Yuan, H., Yu, H., Wang, J., Li, K., Ji, S.: On explainability of graph neural networks via subgraph explorations. In: ICML. PMLR (2020)
Zhang, J., Liang, S., Deng, Z., Shao, J.: Spatial-temporal attention network for temporal knowledge graph completion. In: Jensen, C.S., et al. (eds.) DASFAA 2021. LNCS, vol. 12681, pp. 207–223. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-73194-6_15
Acknowledgment
This work is partially supported by National Key Research and Development Program of China Grant No. 2018AAA0101100, the Hong Kong RGC GRF Project 16209519, CRF Project C6030-18G, C1031-18G and C5026-18G, AOE Project AoE/E-603/18, RIF Project R6020-19, Theme-based project TRS T41-603/20R, China NSFC No. 61729201, Guangdong Basic and Applied Basic Research Foundation 2019B151530001, Hong Kong ITC ITF grants ITS/044/18FX and ITS/470/18FX, Microsoft Research Asia Collaborative Research Grant, HKUST-NAVER/LINE AI Lab, Didi-HKUST joint research lab, HKUST-Webank joint research lab grants.
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Lv, G., Chen, L., Cao, C.C. (2022). On Glocal Explainability of Graph Neural Networks. In: Bhattacharya, A., et al. Database Systems for Advanced Applications. DASFAA 2022. Lecture Notes in Computer Science, vol 13245. Springer, Cham. https://doi.org/10.1007/978-3-031-00123-9_52
Print ISBN: 978-3-031-00122-2
Online ISBN: 978-3-031-00123-9