Abstract:
Graph neural networks (GNNs) have achieved promising results on node classification tasks. However, due to the memorization effect, GNNs easily overfit in the presence of label noise, and this problem remains open. In this paper, we propose CLaSS to bridge this important gap: a first-of-its-kind sample-selection-based framework for training noise-resistant GNNs. Specifically, CLaSS performs graph contrastive learning to improve the robustness of GNNs and to effectively leverage unlabeled nodes. To mitigate confirmation bias, two GNNs are trained through co-teaching, selecting true-labeled nodes and filtering out noisy nodes for each other. CLaSS also performs label refinement on the selected clean nodes and label guessing on the selected noisy ones, aided by node clustering. Extensive experiments on four real-world datasets demonstrate significant advantages of CLaSS over five state-of-the-art approaches.
Published in: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024