DOI: 10.1145/3581807.3581850
Research article

GANExplainer: Explainability Method for Graph Neural Network with Generative Adversarial Nets

Published: 22 May 2023

ABSTRACT

In recent years, graph neural networks (GNNs) have achieved encouraging performance on graph data generated in non-Euclidean spaces. GNNs learn node representations by aggregating and combining neighbor information and have been applied to many graph tasks. However, these complex deep learning architectures are still regarded as black boxes that are difficult for humans to fully trust, and this lack of interpretability greatly limits the application of graph neural networks. We therefore propose an explanation method, called GANExplainer, that explains GNNs at the model level. Without relying on specific input examples, our method implicitly generates characteristic subgraphs that serve as the explanation of how the model interprets the data. GANExplainer builds on the generative adversarial framework, training the generator and the discriminator simultaneously. Moreover, when constructing the discriminator, corresponding graph rules are incorporated to ensure the validity of the generated characteristic subgraphs. We conducted experiments on a synthetic dataset and a chemical-molecule dataset and evaluated our model-level explainer from three aspects: accuracy, fidelity, and sparsity.
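For intuition only, the sketch below shows how one adversarial training step of a GAN-based, model-level GNN explainer of this kind might look. It assumes a pretrained, frozen target GNN, a generator that emits a soft adjacency matrix and node features, a discriminator that scores graphs, and a differentiable stand-in for the paper's graph rules; the names (`generator`, `discriminator`, `target_gnn`, `graph_rule_penalty`) and the class-guidance term are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): one training step of a
# GAN-based model-level GNN explainer.  Assumed components: `target_gnn` is
# the pretrained, frozen classifier being explained; `generator` maps noise
# to a soft adjacency matrix and node-feature matrix; `discriminator` scores
# (adjacency, features) pairs; `graph_rule_penalty` is a differentiable
# stand-in for the graph rules described in the abstract.
import torch
import torch.nn.functional as F

def graph_rule_penalty(adj, max_degree=4.0):
    # Hypothetical rule: penalise node degrees above a maximum, as a
    # differentiable proxy for structural validity constraints.
    deg = adj.sum(dim=-1)
    return F.relu(deg - max_degree).mean()

def train_step(generator, discriminator, target_gnn,
               real_adj, real_feat, g_opt, d_opt,
               target_class, noise_dim=32, rule_weight=1.0):
    """One adversarial step; real_adj / real_feat hold a batch of real graphs."""
    device = real_adj.device
    z = torch.randn(real_adj.size(0), noise_dim, device=device)

    # Discriminator step: tell real graphs apart from generated ones.
    fake_adj, fake_feat = generator(z)
    d_real = discriminator(real_adj, real_feat)
    d_fake = discriminator(fake_adj.detach(), fake_feat.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator, respect the graph rules, and
    # push the frozen target GNN towards the class being explained.
    fake_adj, fake_feat = generator(z)
    d_fake = discriminator(fake_adj, fake_feat)
    logits = target_gnn(fake_adj, fake_feat)   # gradients flow through the generated graph only
    cls = torch.full((logits.size(0),), target_class, dtype=torch.long, device=device)
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) +
              F.cross_entropy(logits, cls) +
              rule_weight * graph_rule_penalty(fake_adj))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

In this reading, the adversarial loss keeps the generated subgraphs close to the distribution of real graphs, the rule penalty enforces structural validity, and the class term steers the generator towards graphs the target model associates with a prediction of interest.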


Published in

ICCPR '22: Proceedings of the 2022 11th International Conference on Computing and Pattern Recognition
November 2022, 683 pages
ISBN: 9781450397056
DOI: 10.1145/3581807

Copyright © 2022 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States
