Exploiting Neighbor Effect: Conv-Agnostic GNN Framework for Graphs With Heterophily


Abstract:

Due to the homophily assumption of graph convolutional networks (GCNs), a common consensus in graph node classification is that graph neural networks (GNNs) perform well on homophilic graphs but may fail on heterophilic graphs with many interclass edges. However, the conventional interclass-edge perspective and the related homophily-ratio metrics cannot fully explain GNN performance on some heterophilic datasets, which implies that not all interclass edges are harmful to GNNs. In this work, we propose a new metric based on the von Neumann entropy to reexamine the heterophily problem of GNNs and investigate the feature aggregation of interclass edges from the perspective of the identifiability of the entire neighborhood. Moreover, we propose a simple yet effective Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophilic datasets by learning the neighbor effect for each node. Specifically, we first decouple the feature of each node into a discriminative feature for the downstream task and an aggregation feature for graph convolution (GC). We then propose a shared mixer module that adaptively evaluates the neighbor effect of each node when incorporating neighbor information. The proposed framework can be regarded as a plug-in component and is compatible with most GNNs. Experimental results on nine well-known benchmark datasets indicate that our framework significantly improves performance, especially on heterophilic graphs. The average performance gains over the graph isomorphism network (GIN), the graph attention network (GAT), and GCN are 9.81%, 25.81%, and 20.61%, respectively. Extensive ablation studies and robustness analyses further verify the effectiveness, robustness, and interpretability of our framework. Code is available at https://github.com/JC-202/CAGNN.
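The abstract describes the mechanism only at a high level: decouple each node's representation into a task (discriminative) part and an aggregation part, run any graph convolution on the aggregation part, and let a learned per-node gate (the "mixer") decide how much neighbor information to absorb. The following is a minimal PyTorch sketch of that idea, assuming a plain mean aggregator stands in for an arbitrary GNN convolution; all names here (CAGNNLayer, mixer_gate, mean_aggregate) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


def mean_aggregate(x, adj):
    """Plain mean aggregation over neighbors; any GNN convolution could replace it."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj @ x / deg


class CAGNNLayer(nn.Module):
    """Hypothetical conv-agnostic layer: decouple, aggregate, then gate the mix."""

    def __init__(self, dim):
        super().__init__()
        self.to_disc = nn.Linear(dim, dim)   # discriminative feature for the task
        self.to_aggr = nn.Linear(dim, dim)   # aggregation feature for graph convolution
        self.mixer_gate = nn.Sequential(     # shared mixer: per-node neighbor effect
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1), nn.Sigmoid()
        )

    def forward(self, x, adj):
        h_disc = self.to_disc(x)
        h_aggr = mean_aggregate(self.to_aggr(x), adj)
        gate = self.mixer_gate(torch.cat([h_disc, h_aggr], dim=-1))
        # Nodes whose neighborhoods are uninformative can learn a gate near zero,
        # so harmful interclass edges need not contaminate the task feature.
        return h_disc + gate * h_aggr


if __name__ == "__main__":
    n, d = 5, 8
    x = torch.randn(n, d)                      # node features
    adj = (torch.rand(n, n) > 0.5).float()     # toy dense adjacency matrix
    out = CAGNNLayer(d)(x, adj)
    print(out.shape)                           # torch.Size([5, 8])

Because the gate is applied outside the aggregation step, the same wrapper can be reused around GCN-, GAT-, or GIN-style convolutions, which is what makes the framework a plug-in component in the sense described above.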
Published in: IEEE Transactions on Neural Networks and Learning Systems ( Volume: 35, Issue: 10, October 2024)
Page(s): 13383 - 13396
Date of Publication: 17 May 2023

PubMed ID: 37195851
