ABSTRACT
In recent years, graph neural networks (GNNs) have achieved encouraging performance on graph data generated in non-Euclidean spaces. GNNs learn node features by aggregating and combining neighbor information, and have been applied to many graph tasks. However, their complex deep learning structure is still regarded as a black box, which makes it difficult for them to earn the full trust of humans. This lack of interpretability greatly limits the application of graph neural networks. We therefore propose an interpretable method, called GANExplainer, to explain GNNs at the model level. Our method implicitly generates the characteristic subgraph of a graph class without relying on specific input examples, and uses this subgraph as the model-level explanation. GANExplainer adopts the generative adversarial framework to train a generator and a discriminator simultaneously. More importantly, when constructing the discriminator, the corresponding graph rules are incorporated to ensure the validity of the generated characteristic subgraphs. We carried out experiments on a synthetic dataset and a chemical-molecule dataset, and evaluated our method as a model-level explainer from three aspects: accuracy, fidelity, and sparsity.
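The adversarial setup described above can be illustrated with a minimal sketch. This is not the authors' implementation: `generate_subgraph`, `discriminate`, and the degree-based readout are hypothetical stand-ins, and the sketch only shows the standard GAN value E[log D(x)] + E[log(1 − D(G(z)))] evaluated on toy adjacency matrices, with the generator constrained to produce symmetric, self-loop-free graphs (a simple analogue of the "graph rules" in the discriminator).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate_subgraph(z, W):
    """Hypothetical generator: map a noise vector z to a soft adjacency matrix."""
    scores = W @ z                        # one score per node
    adj = sigmoid(np.outer(scores, scores))
    np.fill_diagonal(adj, 0.0)            # graph rule: no self-loops
    return (adj + adj.T) / 2.0            # graph rule: undirected (symmetric)

def discriminate(adj, v):
    """Hypothetical discriminator: linear readout over node degrees, in (0, 1)."""
    return sigmoid(float(v @ adj.sum(axis=1)))

def gan_value(real_graphs, fake_graphs, v, eps=1e-9):
    """Standard GAN value: E[log D(real)] + E[log(1 - D(fake))]."""
    real_term = np.mean([np.log(discriminate(a, v) + eps) for a in real_graphs])
    fake_term = np.mean([np.log(1.0 - discriminate(a, v) + eps) for a in fake_graphs])
    return real_term + fake_term

# Toy run: 4-node graphs, 3-dimensional noise.
n_nodes, z_dim = 4, 3
W = rng.normal(size=(n_nodes, z_dim))
v = rng.normal(size=n_nodes)
real = [rng.integers(0, 2, size=(n_nodes, n_nodes)).astype(float) for _ in range(5)]
real = [np.triu(a, 1) + np.triu(a, 1).T for a in real]  # symmetric, no self-loops
fake = [generate_subgraph(rng.normal(size=z_dim), W) for _ in range(5)]
value = gan_value(real, fake, v)
```

In training, the discriminator would ascend this value while the generator descends it; here both networks are frozen random maps, so the sketch only demonstrates how graph-validity constraints and the adversarial objective fit together.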