Abstract:
Numerous explainability techniques have been developed to reveal the prediction principles of Graph Neural Networks (GNNs) across diverse domains. However, many existing approaches, particularly those focused on model-level explanations, suffer from a tunnel vision problem, yielding suboptimal results and limiting users' comprehensive understanding of GNNs. Furthermore, these methods typically require hyperparameters to shape the explanations, introducing unintended human biases. In response, we present GAXG, a global and self-adaptive optimal graph topology generation framework for explaining GNNs' prediction principles at the model level. GAXG addresses the challenges of tunnel vision and hyperparameter reliance by integrating a strategically tailored Monte Carlo Tree Search (MCTS) algorithm. Notably, the tailored MCTS algorithm incorporates an Edge Mask Learning and Simulated Annealing-based subgraph screening strategy during the expansion phase, mitigating the otherwise prohibitive running time of the search and improving the quality of the generated explanatory graph topologies. Experimental results underscore GAXG's effectiveness in discovering global explanations for GNNs, outperforming leading explainers on most evaluation metrics.
Published in: IEEE Transactions on Network Science and Engineering ( Volume: 11, Issue: 6, Nov.-Dec. 2024)