Black-Box Graph Backdoor Defense

  • Conference paper in Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14491)

Abstract

Recently, graph neural networks (GNNs) have been shown to be vulnerable to backdoor attacks, in which the model's test-time predictions are manipulated by poisoning the training dataset with trigger-embedded malicious samples. Existing defenses against GNN backdoors are impractical because they require access to the model parameters and the training samples. To address this issue, we present BloGBaD, a Black-box GNN Backdoor Defense strategy that eliminates the backdoor without access to model parameters or the training dataset. Specifically, BloGBaD involves two primary phases: 1) test-sample filtration, which identifies toxic graph nodes via a Gaussian mixture model and purifies their trigger features through clustering and filtration; and 2) model fine-tuning, which restores the model to a backdoor-free state using a loss function with a penalty regularization on poisoned features. We demonstrate the effectiveness of our method through extensive experiments on various datasets and attack algorithms under black-box conditions.
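
To make the two phases concrete, here is a minimal sketch in Python using scikit-learn and PyTorch. Everything beyond the abstract's description is an assumption, not the authors' implementation: the minority-cluster heuristic for flagging toxic nodes, the mean-substitution purification, the uniform-prediction KL penalty, and the PyG-style `model(x, edge_index)` forward signature are all illustrative choices.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.mixture import GaussianMixture


def filter_test_sample(node_features: np.ndarray, n_components: int = 2):
    """Phase 1 (assumed form): fit a Gaussian mixture over node features,
    treat the minority component as suspected trigger-carrying nodes, and
    purify them by substituting the clean-cluster mean feature vector."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    labels = gmm.fit_predict(node_features)
    counts = np.bincount(labels, minlength=n_components)
    toxic_label = int(np.argmin(counts))  # heuristic: poisoned nodes are the minority
    toxic_mask = labels == toxic_label
    purified = node_features.copy()
    purified[toxic_mask] = node_features[~toxic_mask].mean(axis=0)
    return purified, toxic_mask


def fine_tune_step(model, optimizer, x, edge_index, y, toxic_mask,
                   penalty_weight: float = 1.0):
    """Phase 2 (assumed form): one fine-tuning step whose loss adds a penalty
    pushing predictions on suspected-poisoned nodes toward uniform, so the
    model unlearns any backdoor-driven confidence on trigger features."""
    model.train()
    optimizer.zero_grad()
    logits = model(x, edge_index)  # PyG-style forward pass (assumption)
    task_loss = F.cross_entropy(logits, y)
    mask = torch.as_tensor(toxic_mask, dtype=torch.bool, device=logits.device)
    toxic_logits = logits[mask]
    if toxic_logits.numel() > 0:
        uniform = torch.full_like(toxic_logits, 1.0 / toxic_logits.size(1))
        penalty = F.kl_div(F.log_softmax(toxic_logits, dim=1), uniform,
                           reduction="batchmean")
    else:
        penalty = logits.new_zeros(())
    loss = task_loss + penalty_weight * penalty
    loss.backward()
    optimizer.step()
    return float(loss)
```

In this reading, phase 1 runs on incoming test graphs before inference, while phase 2 repeatedly applies `fine_tune_step` on held-out labeled data until the attack success rate collapses; the actual paper may differ in both the clustering criterion and the penalty term.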

Acknowledgement

This research is supported by the National Natural Science Foundation of China (No. 62202303, 62202302, U20B2048, and U2003206), the Shanghai Sailing Program (No. 21YF1421700), the Action Plan of Science and Technology Innovation of the Science and Technology Commission of Shanghai Municipality (No. 22511101202), and JSPS KAKENHI (No. JP22K17884).

Author information

Correspondence to Gaolei Li.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Yang, X., Li, G., Tao, X., Zhang, C., Li, J. (2024). Black-Box Graph Backdoor Defense. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14491. Springer, Singapore. https://doi.org/10.1007/978-981-97-0808-6_10

  • DOI: https://doi.org/10.1007/978-981-97-0808-6_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0807-9

  • Online ISBN: 978-981-97-0808-6

  • eBook Packages: Computer Science, Computer Science (R0)
