DOI: 10.1145/3589334.3645462
research-article

Incentive and Dynamic Client Selection for Federated Unlearning

Published: 13 May 2024

Abstract

With the development of AI-Generated Content (AIGC), data is becoming increasingly important, and so is the right to be forgotten, defined in the General Data Protection Regulation (GDPR), which permits data owners to remove their information from AIGC models. To protect this right in the distributed setting of federated learning, federated unlearning eliminates historical model updates and unlearns the global model, mitigating the data effects of targeted clients that intend to withdraw from training tasks. To reduce the risk of centralized failures, a distributed and collaborative hierarchical federated framework can be integrated into the unlearning process, wherein each cluster supports multiple AIGC tasks. However, two issues remain unexplored in current federated unlearning solutions: 1) incentivizing the remaining clients, i.e., those that do not withdraw from the task, to join the unlearning process, which demands additional resources yet offers notably fewer benefits than federated learning, particularly when restoring the original performance via alternative unlearning processes; and 2) designing mechanisms that dynamically select remaining clients with unbalanced data so that unlearning need not start from scratch. To address these challenges, we propose a two-level incentive and unlearning mechanism. At the lower level, we utilize evolutionary game theory to model the dynamic participation process, aiming to attract remaining clients to participate in retraining tasks. At the upper level, we integrate deep reinforcement learning into federated unlearning to dynamically select remaining clients for the unlearning process, mitigating the bias introduced by the unbalanced data distribution among clients. Experimental results demonstrate that the proposed mechanisms outperform comparative methods, enhancing utilities and improving accuracy.
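The lower-level participation dynamics described above can be illustrated with a minimal replicator-dynamics sketch. This is not the paper's model: the payoff structure (an equal split of a task reward R among participants minus a fixed retraining cost c) and all parameter values are hypothetical placeholders chosen only to show how an evolutionary game drives the share of remaining clients that choose to retrain toward an equilibrium.

```python
def replicator_step(x, R=10.0, c=2.0, dt=0.01):
    """One Euler step of the replicator equation for x, the fraction of
    remaining clients choosing to join the unlearning (retraining) task.

    Illustrative payoffs (not from the paper): participants split a task
    reward R in proportion to the participating share and pay a fixed
    retraining cost c; abstaining clients earn nothing.
    """
    share = max(x, 1e-6)                   # guard against division by zero
    f_join = R / share - c                 # payoff for joining the retraining
    f_stay = 0.0                           # payoff for abstaining
    f_avg = x * f_join + (1 - x) * f_stay  # population-average payoff
    # Replicator dynamics: strategies beating the average gain followers.
    return x + dt * x * (f_join - f_avg)

x = 0.1  # initial fraction of clients willing to retrain
for _ in range(5000):
    x = replicator_step(x)
# x approaches the evolutionary equilibrium where joining no longer
# outperforms the population average (here, full participation).
```

With these toy parameters the joining payoff dominates at every population share, so the dynamics converge to full participation; a pricing mechanism would tune R against the clients' retraining costs to reach a desired equilibrium.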

Supplemental Material

MP4 File
Supplemental video


Cited By

  • (2024) A Survey on Federated Unlearning: Challenges, Methods, and Future Directions. ACM Computing Surveys, 57(1):1-38. DOI: 10.1145/3679014. Online publication date: 19 Jul 2024.


    Published In

    WWW '24: Proceedings of the ACM Web Conference 2024
    May 2024
    4826 pages
    ISBN:9798400701719
    DOI:10.1145/3589334


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. deep reinforcement learning
    2. dynamic retraining
    3. federated unlearning


    Conference

    WWW '24
    Sponsor:
    WWW '24: The ACM Web Conference 2024
    May 13 - 17, 2024
    Singapore, Singapore

    Acceptance Rates

    Overall Acceptance Rate 1,899 of 8,196 submissions, 23%

