
Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning


Abstract:

Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is unsatisfactory because they overlook the fact that GLAs incur heterogeneous risks of privacy leakage (RoPL) for gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework that achieves satisfactory privacy protection against GLAs while ensuring good performance in terms of model accuracy and convergence speed. Specifically, a leakage risk-aware privacy decomposition mechanism provides adaptive privacy protection to different communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of a GLA breaking privacy from gradients in different communication rounds and clients, respectively, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
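
To give a rough sense of the "dynamic clipping with noise decay" idea mentioned in the abstract, the sketch below shows a generic DP-style local update step, not the Adp-PPFL algorithm itself: each per-example gradient is clipped to an L2 threshold, the clipped gradients are averaged, and Gaussian noise with a geometrically decaying standard deviation is added before the parameter update. All names and hyperparameters (clip_threshold, base_noise_std, decay_rate, lr) are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation): one noisy
# local training step with per-example gradient clipping and noise whose
# scale decays with the local step index.
import numpy as np

def dp_local_update(params, per_example_grads, step, clip_threshold=1.0,
                    base_noise_std=1.0, decay_rate=0.99, lr=0.01):
    """Clip per-example gradients, average them, add decayed Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_threshold.
        clipped.append(g * min(1.0, clip_threshold / (norm + 1e-12)))
    avg_grad = np.mean(clipped, axis=0)

    # Noise standard deviation decays geometrically over local steps, so later
    # steps inject less noise (one possible reading of "noise decay").
    noise_std = base_noise_std * (decay_rate ** step)
    noisy_grad = avg_grad + np.random.normal(
        0.0, noise_std * clip_threshold, size=avg_grad.shape)

    return params - lr * noisy_grad
```

In the paper's framework, the per-round and per-client noise scale would additionally depend on the quantified RoPL-based privacy budget allocation, which this sketch does not model.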
Published in: IEEE/ACM Transactions on Networking (Volume: 32, Issue: 2, April 2024)
Page(s): 1407 - 1422
Date of Publication: 26 September 2023
