FedScale: A Federated Unlearning Method Mimicking Human Forgetting Processes

  • Conference paper
  • First Online:
Wireless Artificial Intelligent Computing Systems and Applications (WASA 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14997)

Abstract

The heightened demand for privacy protection, together with legal mandates granting users the right to be forgotten, underscores the growing importance of endowing models with the ability to forget specific training samples. Unlearning is akin to the reverse of model training: it removes the influence of particular training data from a machine learning model. In federated learning, unlearning is notably more demanding because data is dispersed across many client endpoints, and privacy and security constraints render many conventional machine unlearning methods inapplicable. This paper draws inspiration from forgetting processes in the human brain, where memory attrition is not solely a passive process. We therefore dichotomize federated unlearning into two components, active forgetting and passive forgetting, and modify the updates of each component to achieve efficient unlearning. We propose FedScale, which sacrifices only a fraction of storage space to enable rapid federated unlearning without retraining the federated model when a user's data must be forgotten. Experimental evaluations on four datasets demonstrate the effectiveness of FedScale's unlearning, and we further show its resilience against backdoor attacks and membership inference attacks.
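The abstract does not spell out FedScale's update rules, but the storage-for-speed idea it describes, keeping a modest amount of extra state on the server so that a client's influence can be removed without retraining, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the authors' method: the FedAvg-style aggregation, the class and method names (UnlearnableFedAvgServer, aggregate_round, unlearn_client), and the roll-back-and-rescale rule are placeholders.

```python
# Hypothetical sketch of the storage-for-speed tradeoff mentioned in the abstract:
# the server logs each client's per-round update (the storage cost) so that, when
# a client must be forgotten, its logged contributions can be rolled back and each
# round's aggregate rescaled over the remaining participants, instead of retraining
# from scratch. This is NOT the FedScale algorithm itself.
import numpy as np


class UnlearnableFedAvgServer:
    def __init__(self, model_dim):
        self.global_model = np.zeros(model_dim)
        self.update_log = []  # extra storage: one dict of client deltas per round

    def aggregate_round(self, client_updates):
        """client_updates: dict mapping client_id -> model delta (np.ndarray)."""
        self.update_log.append(dict(client_updates))       # remember who sent what
        avg_delta = sum(client_updates.values()) / len(client_updates)
        self.global_model += avg_delta                      # plain FedAvg step

    def unlearn_client(self, client_id):
        """Remove the target client's logged contributions and rescale each
        affected round's aggregate over the remaining participants."""
        for round_updates in self.update_log:
            if client_id not in round_updates:
                continue
            n = len(round_updates)
            old_avg = sum(round_updates.values()) / n
            remaining = [u for cid, u in round_updates.items() if cid != client_id]
            new_avg = sum(remaining) / len(remaining) if remaining else 0.0
            # Replace the old round aggregate with the recomputed one.
            self.global_model += new_avg - old_avg
            del round_updates[client_id]
```

In a real system the rolled-back model would typically be calibrated with a few additional rounds on the remaining clients, since later updates in the log were computed against a model that still reflected the forgotten client's data; the sketch only illustrates why logging per-round updates lets the server avoid full retraining.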


Acknowledgements

This work is supported by the National Key R&D Program of China (No.2021YFB3100700), the National Natural Science Foundation of China (No. U22B2029, 62272228, U20A20176, 62071222), Shenzhen Science and Technology Program (Grant No. JCYJ20210324134408023) and the Natural Science Foundation of Jiangsu Province (No. BK20220075).

Author information


Correspondence to Huiwen Wu or Liming Fang.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Huang, W., Wu, H., Fang, L., Zhou, L. (2025). FedScale: A Federated Unlearning Method Mimicking Human Forgetting Processes. In: Cai, Z., Takabi, D., Guo, S., Zou, Y. (eds) Wireless Artificial Intelligent Computing Systems and Applications. WASA 2024. Lecture Notes in Computer Science, vol 14997. Springer, Cham. https://doi.org/10.1007/978-3-031-71464-1_37

  • DOI: https://doi.org/10.1007/978-3-031-71464-1_37

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-71463-4

  • Online ISBN: 978-3-031-71464-1

  • eBook Packages: Computer Science; Computer Science (R0)
