
Training Graph Convolutional Neural Network Against Label Noise

  • Conference paper
  • Published in: Neural Information Processing (ICONIP 2021)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13110)

Abstract

For the node classification task, graph convolutional neural networks (GCNs) have achieved competitive performance on graph-structured data. Under the semi-supervised setting, only a small portion of nodes are labeled for training. Many existing works rest on the idealized assumption that all class labels used for training are completely accurate. In practice, however, noise is inevitably introduced during labeling and can degrade model performance. Yet few works focus on how to deal with noisy labels on graph data, and techniques against label noise in the image domain cannot be applied to graphs directly. In this paper, we propose a framework, called super-nodes assisted label correction and dynamic graph adjustment based GCN (SuLD-GCN), which aims to reduce the negative impact of noise via label correction, yielding higher-quality labels. We introduce super-nodes to construct a new graph, which connects nodes sharing a class label more strongly. During training iterations, we select nodes predicted with high confidence and correct their labels, while adjusting the graph structure dynamically. Experiments on public datasets demonstrate the effectiveness of the proposed method, which yields a significant improvement over state-of-the-art baselines.
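To make the described procedure concrete, below is a minimal, self-contained NumPy sketch of the loop the abstract outlines: augment the graph with one super-node per class wired to same-labeled training nodes, run a GCN-style propagation, then correct the labels of training nodes predicted with high confidence and rebuild the graph. All names and constants here (add_super_nodes, CONF_THRESHOLD, the untrained two-layer forward pass) are hypothetical stand-ins for illustration only, not the paper's actual architecture, objective, or selection criterion.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2}, as in the vanilla GCN."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def add_super_nodes(A, labels, train_mask, n_classes):
    """Append one super-node per class, connected to the training nodes that
    currently carry that class label (hypothetical construction)."""
    n = A.shape[0]
    A_new = np.zeros((n + n_classes, n + n_classes))
    A_new[:n, :n] = A
    for c in range(n_classes):
        members = np.where(train_mask & (labels == c))[0]
        A_new[n + c, members] = 1.0
        A_new[members, n + c] = 1.0
    return A_new

def propagate(A_norm, X, W1, W2):
    """Two-layer GCN-style forward pass: softmax(A_norm relu(A_norm X W1) W2)."""
    H = np.maximum(A_norm @ X @ W1, 0.0)
    Z = A_norm @ H @ W2
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

CONF_THRESHOLD = 0.9  # hypothetical; the paper tunes its own selection criterion

def train_with_label_correction(A, X, noisy_labels, train_mask, n_classes, n_rounds=5):
    labels = noisy_labels.copy()
    rng = np.random.default_rng(0)
    for _ in range(n_rounds):
        # 1. Rebuild the graph with super-nodes reflecting the current labels
        #    (the "dynamic graph adjustment" step).
        A_sup = add_super_nodes(A, labels, train_mask, n_classes)
        A_norm = normalize_adj(A_sup)
        X_sup = np.vstack([X, np.zeros((n_classes, X.shape[1]))])
        # Gradient-based training of W1/W2 is omitted; random weights stand in
        # here purely to keep the sketch short and runnable.
        W1 = rng.normal(scale=0.1, size=(X.shape[1], 16))
        W2 = rng.normal(scale=0.1, size=(16, n_classes))
        probs = propagate(A_norm, X_sup, W1, W2)[: A.shape[0]]
        # 2. Correct the labels of training nodes the model is highly confident on.
        conf = probs.max(axis=1)
        pred = probs.argmax(axis=1)
        flip = train_mask & (conf > CONF_THRESHOLD) & (pred != labels)
        labels[flip] = pred[flip]
    return labels
```

In the actual framework the GCN would of course be optimized against the (corrected) labels in each round; the point of the sketch is the control flow of alternating label correction with graph reconstruction.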



Acknowledgements

This work is sponsored by the National Natural Science Foundation of China (Grant No. 61571266), Beijing Municipal Natural Science Foundation (No. L192026), and Tsinghua-Foshan Innovation Special Fund (TFISF) (No. 2020THFS0111).

Author information

Correspondence to Ji Wu.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhuo, Y., Zhou, X., Wu, J. (2021). Training Graph Convolutional Neural Network Against Label Noise. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol 13110. Springer, Cham. https://doi.org/10.1007/978-3-030-92238-2_56

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-92238-2_56

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92237-5

  • Online ISBN: 978-3-030-92238-2

  • eBook Packages: Computer Science, Computer Science (R0)
