
Graph Unlearning Using Knowledge Distillation

  • Conference paper
  • Information and Communications Security (ICICS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14252)

Abstract

With the growing popularity of graph-structured data and the promulgation of data privacy protection laws, machine unlearning for Graph Convolutional Networks (GCNs) has attracted increasing attention. However, machine unlearning in GCN scenarios faces multiple challenges: many unlearning algorithms require large computational resources and storage space, or cannot be applied to graph-structured data at all. In this paper, we design a novel, lightweight unlearning method that uses knowledge distillation to solve the class unlearning problem in GCN scenarios. Unlike other methods that apply knowledge distillation to unlearn Euclidean data, we use a single retrained deep Graph Convolutional Network via Initial residual and Identity mapping (GCNII) model as the teacher network and a shallow GCN model as the student network. During training, the teacher network transfers knowledge of the retained set to the student network, enabling the student network to forget one or more categories of information. Compared with the baseline methods, Graph Unlearning using Knowledge Distillation (GUKD) achieves state-of-the-art model performance and unlearning quality on five real-world datasets. Specifically, our method outperforms all baseline methods by 33.77% on average in the multi-class experiments on the Citeseer dataset.
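The following is a minimal, illustrative sketch of the teacher-student distillation objective described above, not the authors' implementation. It is written in PyTorch; the loss weighting, temperature, the toy two-layer GCN student, and the randomly generated tensor standing in for the retrained GCNII teacher's logits are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): a retrained "teacher" transfers
# knowledge of the retained classes to a shallow "student" GCN, so nodes of
# the forgotten class never contribute to the student's training signal.
import torch
import torch.nn.functional as F

class ShallowGCN(torch.nn.Module):
    """Toy two-layer GCN operating on a dense normalized adjacency a_hat."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, a_hat, x):
        h = F.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)  # per-node logits

def distillation_loss(student_logits, teacher_logits, labels, retain_mask,
                      temp=2.0, alpha=0.5):
    """Soft-label KD (teacher -> student) plus hard-label cross-entropy,
    restricted to nodes belonging to the retained classes."""
    s, t = student_logits[retain_mask], teacher_logits[retain_mask]
    kd = F.kl_div(F.log_softmax(s / temp, dim=1),
                  F.softmax(t / temp, dim=1),
                  reduction="batchmean") * temp ** 2
    ce = F.cross_entropy(s, labels[retain_mask])
    return alpha * kd + (1 - alpha) * ce

# Toy usage: in GUKD the teacher logits would come from a GCNII model
# retrained on the retained set; random tensors stand in here.
n, d, c = 100, 16, 7
x = torch.randn(n, d)
a_hat = torch.eye(n)                 # placeholder normalized adjacency
labels = torch.randint(0, c, (n,))
retain_mask = labels != 0            # forget class 0
teacher_logits = torch.randn(n, c)   # stand-in for the retrained teacher
student = ShallowGCN(d, 32, c)
opt = torch.optim.Adam(student.parameters(), lr=0.01)
for _ in range(5):
    opt.zero_grad()
    loss = distillation_loss(student(a_hat, x), teacher_logits, labels, retain_mask)
    loss.backward()
    opt.step()
```

Because the cross-entropy and distillation terms are computed only on the retained nodes, the student never receives supervision about the forgotten class, which is the intuition behind class unlearning via a retrained teacher.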



Author information

Corresponding author: Ximeng Liu.

A Appendix

To evaluate the effectiveness of GUKD, we use five publicly available node-classification datasets: Cora, Citeseer, Pubmed, CS, and Reddit. Cora, Citeseer, and Pubmed are citation networks in which nodes represent papers or scientific publications and edges represent citation relationships. CS is a co-authorship graph in which nodes represent authors and an edge connects two authors who have written a paper together; the node label denotes the author's most active research field. Reddit is a social network dataset in which a node represents a post in a community and an edge connects two posts that the same user has commented on; the label indicates the community (subreddit) a post belongs to. Detailed statistics of the datasets are summarized in Table 4.

Table 4. Dataset statistics
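As a practical complement to Table 4, the sketch below shows one way these five datasets can be loaded programmatically. It assumes PyTorch Geometric; the paper does not state which data-loading tooling was actually used, and the root directory is illustrative.

```python
# Hedged sketch: loading the five node-classification datasets named above
# with PyTorch Geometric and printing the statistics a table like Table 4
# typically reports. Note that the Reddit dataset is large to download.
from torch_geometric.datasets import Planetoid, Coauthor, Reddit

def load_all(root="data"):
    datasets = {
        "Cora":     Planetoid(root, name="Cora"),
        "Citeseer": Planetoid(root, name="CiteSeer"),
        "Pubmed":   Planetoid(root, name="PubMed"),
        "CS":       Coauthor(root, name="CS"),
        "Reddit":   Reddit(f"{root}/Reddit"),
    }
    for name, ds in datasets.items():
        g = ds[0]  # each of these datasets holds a single graph
        print(f"{name}: {g.num_nodes} nodes, {g.num_edges} edges, "
              f"{ds.num_features} features, {ds.num_classes} classes")
    return datasets

if __name__ == "__main__":
    load_all()
```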


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zheng, W., Liu, X., Wang, Y., Lin, X. (2023). Graph Unlearning Using Knowledge Distillation. In: Wang, D., Yung, M., Liu, Z., Chen, X. (eds) Information and Communications Security. ICICS 2023. Lecture Notes in Computer Science, vol 14252. Springer, Singapore. https://doi.org/10.1007/978-981-99-7356-9_29

  • DOI: https://doi.org/10.1007/978-981-99-7356-9_29

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7355-2

  • Online ISBN: 978-981-99-7356-9
