
Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations. This limits the application of GNN models in real-world scenarios. Such vulnerability can be attributed to a model's excessive reliance on incomplete data views (e.g., graph convolutional networks (GCNs) rely heavily on graph structures to make predictions). Integrating information from multiple views can effectively address this problem; typical views of a graph are the node feature view and the graph structure view. In this paper, we propose C2oG, which combines these two views to train sub-models and fuses their knowledge through co-training. Because the views are largely orthogonal, sub-models trained in the feature view tend to remain robust against perturbations that target sub-models in the structure view. C2oG allows the sub-models to correct one another and thus enhances the robustness of their ensemble. In our evaluations, C2oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
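The two-view co-training idea described in the abstract can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration in PyTorch, not the authors' C2oG implementation: a feature-view MLP and a structure-view GCN are trained on the labeled nodes, their confidences are calibrated with temperature scaling on a validation split, and each sub-model then passes its most confident pseudo-labels to the other view's training set before another round and a final ensemble. All class names, the confidence threshold, and the number of rounds are illustrative assumptions.

# Minimal sketch of two-view co-training with temperature calibration for node
# classification. Illustrative only; not the paper's C2oG implementation.
import torch
import torch.nn.functional as F


class FeatureMLP(torch.nn.Module):
    """Feature-view sub-model: ignores the graph structure."""
    def __init__(self, in_dim, hid_dim, n_cls):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_cls)

    def forward(self, x, adj=None):
        return self.lin2(F.relu(self.lin1(x)))


class StructureGCN(torch.nn.Module):
    """Structure-view sub-model: a two-layer GCN over a dense normalized adjacency."""
    def __init__(self, in_dim, hid_dim, n_cls):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_cls)

    def forward(self, x, adj):
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)


def normalize_adj(adj):
    """Symmetric normalization D^{-1/2}(A+I)D^{-1/2} commonly used by GCNs."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(1).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]


def fit(model, x, adj, y, train_mask, epochs=200, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x, adj)[train_mask], y[train_mask])
        loss.backward()
        opt.step()


def temperature(model, x, adj, y, val_mask):
    """Calibrate confidences by fitting a single temperature on a validation split."""
    logits = model(x, adj).detach()
    t = torch.ones(1, requires_grad=True)
    opt = torch.optim.LBFGS([t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits[val_mask] / t, y[val_mask])
        loss.backward()
        return loss

    opt.step(closure)
    return t.detach().clamp(min=1e-2)


def co_train(x, adj, y, train_mask, val_mask, n_cls, rounds=3, threshold=0.9):
    adj = normalize_adj(adj)
    mlp = FeatureMLP(x.size(1), 64, n_cls)
    gcn = StructureGCN(x.size(1), 64, n_cls)
    pseudo_y, mask_mlp, mask_gcn = y.clone(), train_mask.clone(), train_mask.clone()
    for _ in range(rounds):
        fit(mlp, x, adj, pseudo_y, mask_mlp)
        fit(gcn, x, adj, pseudo_y, mask_gcn)
        # Each view hands its most confident, calibrated pseudo-labels to the other view.
        for src, dst_mask in ((mlp, mask_gcn), (gcn, mask_mlp)):
            t = temperature(src, x, adj, y, val_mask)
            prob = F.softmax(src(x, adj).detach() / t, dim=1)
            conf, pred = prob.max(1)
            new = (conf > threshold) & ~dst_mask  # confident and not yet labeled for dst
            pseudo_y[new] = pred[new]
            dst_mask[new] = True
    # Ensemble prediction: average the two views' class probabilities.
    return (F.softmax(mlp(x, adj), 1) + F.softmax(gcn(x, adj), 1)).argmax(1)

A call such as co_train(x, adj, y, train_mask, val_mask, n_cls) would return ensemble predictions for all nodes. In practice, the confidence threshold and the calibration step are the knobs that control how aggressively (possibly noisy) pseudo-labels propagate between the two views.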



Author information


Corresponding author

Correspondence to Hui-Jun Wu.

Supplementary Information

ESM 1 (PDF 100 kB)


About this article


Cite this article

Wu, XG., Wu, HJ., Zhou, X. et al. Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training. J. Comput. Sci. Technol. 37, 1161–1175 (2022). https://doi.org/10.1007/s11390-022-2129-2

