
Unlearning Vision Transformers Without Retaining Data via Low-Rank Decompositions

  • Conference paper
  • Pattern Recognition (ICPR 2024)

Abstract

Data protection regulations such as the GDPR and the California Consumer Privacy Act have sparked growing interest in removing sensitive information from pre-trained models without retraining from scratch, while maintaining predictive performance on the remaining data. Recent work on machine unlearning for deep neural networks either places constraints on the training procedure or is limited to small-scale architectures, with poor adaptability to real-world requirements. In this paper, we develop an approach that deletes the information associated with a class from a pre-trained model by injecting a trainable low-rank decomposition into the network parameters, without requiring access to the original training set. Our approach greatly reduces the number of parameters to train, as well as time and memory requirements. This enables painless application to real-life settings where the entire training set is unavailable, and compliance with the requirement of time-bound deletion. We conduct experiments on various Vision Transformer architectures for class forgetting. Extensive empirical analyses demonstrate that our proposed method is efficient, safe to apply, and effective at removing learned information while maintaining accuracy.
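The core mechanism the abstract describes, injecting a trainable low-rank decomposition alongside frozen pre-trained weights, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the dimensions, rank, and zero-initialization of one factor (a common LoRA-style choice so the adapted layer initially matches the original model) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 768, 768, 8  # illustrative ViT-Base-like projection sizes

# Frozen pre-trained weight (stand-in for a projection inside a ViT block)
W = rng.standard_normal((d_out, d_in)) * 0.02

# Trainable low-rank factors; B is zero-initialized so that, at injection
# time, the adapted layer reproduces the original model exactly.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def forward(x):
    # Effective weight is W + B @ A; during unlearning only A and B
    # would be updated, while W stays frozen.
    return x @ (W + B @ A).T

x = rng.standard_normal((4, d_in))
assert np.allclose(forward(x), x @ W.T)  # identical to the frozen model at init

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%} of full fine-tuning)")
```

The parameter count of the two factors grows linearly in the rank rather than quadratically in the layer width, which is what makes the time and memory savings claimed in the abstract possible.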



Notes

  1. The classes we consider are: kite, mud turtle, triceratops, scorpion, peacock, goose, jellyfish, snail, flamingo, beagle.


Acknowledgments

This work has been conducted under a research grant co-funded by Leonardo S.p.A. and supported by the EU Horizon project “ELIAS - European Lighthouse of AI for Sustainability” (No. 101120237).

Author information

Corresponding author

Correspondence to Samuele Poppi.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Poppi, S., Sarto, S., Cornia, M., Baraldi, L., Cucchiara, R. (2025). Unlearning Vision Transformers Without Retaining Data via Low-Rank Decompositions. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15303. Springer, Cham. https://doi.org/10.1007/978-3-031-78122-3_10

  • DOI: https://doi.org/10.1007/978-3-031-78122-3_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-78121-6

  • Online ISBN: 978-3-031-78122-3

  • eBook Packages: Computer Science (R0)
