Research Article · Public Access
DOI: 10.1145/3447548.3467364

NRGNN: Learning a Label Noise Resistant Graph Neural Network on Sparsely and Noisily Labeled Graphs

Published: 14 August 2021 Publication History

Abstract

Graph Neural Networks (GNNs) have achieved promising results on semi-supervised learning tasks on graphs such as node classification. Despite this success, many real-world graphs are sparsely and noisily labeled, which can significantly degrade the performance of GNNs because noisy label information propagates to unlabeled nodes through the graph structure. It is therefore important to develop a label noise-resistant GNN for semi-supervised node classification. Although extensive studies have been conducted on learning neural networks with noisy labels, they mostly focus on independent and identically distributed data and assume that a large number of noisy labels are available, so they are not directly applicable to GNNs. Thus, we investigate the novel problem of learning a robust GNN with noisy and limited labels. To alleviate the negative effects of label noise, we propose to link unlabeled nodes with labeled nodes of high feature similarity, bringing in more clean label information. Furthermore, this strategy yields accurate pseudo labels that provide additional supervision and further reduce the effects of label noise. Our theoretical and empirical analyses verify the effectiveness of these two strategies under mild conditions. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed method in learning a robust GNN with noisy and limited labels.
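The core idea in the abstract, linking unlabeled nodes to labeled nodes of high feature similarity so that clean label information can reach more of the graph, can be illustrated with a minimal sketch. The snippet below uses cosine similarity with a fixed threshold as a stand-in; the actual NRGNN method learns these links with an edge predictor, so the function name, threshold, and similarity measure here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def augment_edges(features, labeled_idx, unlabeled_idx, threshold=0.8):
    """Propose edges from each unlabeled node to labeled nodes whose
    features are highly similar (cosine similarity >= threshold).
    Hypothetical sketch; NRGNN instead learns links via an edge predictor."""
    # L2-normalize rows so dot products become cosine similarities
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)
    # Similarity of every unlabeled node to every labeled node
    sims = normed[unlabeled_idx] @ normed[labeled_idx].T
    new_edges = []
    for i, u in enumerate(unlabeled_idx):
        for j, v in enumerate(labeled_idx):
            if sims[i, j] >= threshold:
                new_edges.append((u, v))
    return new_edges
```

Adding such edges lets a message-passing GNN pull (hopefully clean) label signal from feature-similar labeled neighbors, which is the intuition the abstract appeals to under the homophily assumption.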

Supplementary Material

MP4 File (KDD21-2706.mp4)
This is a presentation video for: NRGNN: Learning a Label Noise-Resistant Graph Neural Network on Sparsely and Noisily Labeled Graphs.




Information

Published In

KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining
August 2021, 4259 pages
ISBN: 9781450383325
DOI: 10.1145/3447548

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 14 August 2021

      Author Tags

      1. graph neural network
      2. noisy labels
      3. robustness

Conference

KDD '21
Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%

Cited By

  • Motif-aware curriculum learning for node classification. Neural Networks 184 (2025), 107089. DOI: 10.1016/j.neunet.2024.107089. Online publication date: Apr 2025.
  • Rethinking the impact of noisy labels in graph classification: A utility and privacy perspective. Neural Networks 182 (2025), 106919. DOI: 10.1016/j.neunet.2024.106919. Online publication date: Feb 2025.
  • Enhancing robustness in implicit feedback recommender systems with subgraph contrastive learning. Information Processing & Management 62, 1 (2025), 103962. DOI: 10.1016/j.ipm.2024.103962. Online publication date: Jan 2025.
  • Soft-GNN: towards robust graph neural networks via self-adaptive data utilization. Frontiers of Computer Science 19, 4 (2025). DOI: 10.1007/s11704-024-3575-5. Online publication date: 1 Apr 2025.
  • Counterfactual Learning on Graphs: A Survey. Machine Intelligence Research 22, 1 (2025), 17-59. DOI: 10.1007/s11633-024-1519-z. Online publication date: 24 Jan 2025.
  • Mitigating label noise on graphs via topological sample selection. In Proceedings of the 41st International Conference on Machine Learning (2024), 53944-53972. DOI: 10.5555/3692070.3694283. Online publication date: 21 Jul 2024.
  • 3D-FuM. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (2024), 8635-8639. DOI: 10.24963/ijcai.2024/997. Online publication date: 3 Aug 2024.
  • Robust heterophilic graph learning against label noise for anomaly detection. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (2024), 2451-2459. DOI: 10.24963/ijcai.2024/271. Online publication date: 3 Aug 2024.
  • SPORT: A Subgraph Perspective on Graph Classification with Label Noise. ACM Transactions on Knowledge Discovery from Data 18, 9 (2024), 1-20. DOI: 10.1145/3687468. Online publication date: 28 Aug 2024.
  • Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs. ACM SIGKDD Explorations Newsletter 25, 2 (2024), 42-61. DOI: 10.1145/3655103.3655110. Online publication date: 28 Mar 2024.
