Abstract:
Due to the differing privacy and local model quality requirements of each participant, federated learning (FL) is vulnerable to membership inference attacks. To address this issue, we propose a risk-aware reinforcement learning (RL)-based personalized differentially private FL framework. The framework uses local model accuracy and privacy loss as constraints to satisfy each user's personalized requirements. By designing a multi-agent RL scheme, it optimizes the perturbation policy, including the perturbation mechanism and its parameters (such as the privacy budget and probabilistic relaxation). The goal of each participant is to improve global accuracy while reducing privacy loss, attack success rate, and short-term risk. First, the framework designs a two-level hierarchical policy selection module that chooses the perturbation policy, accelerating learning. Second, it designs a punishment function to evaluate short-term risk and an R-network to estimate long-term risk, which guarantees safe exploration. Third, it formulates an improved Boltzmann policy distribution that increases the impact of risk, thereby avoiding risky policies that may cause severe privacy leakage or local task failure. We also analyze the convergence performance and provide privacy analyses for both the Gaussian and Laplace mechanisms. Experimental results on the MNIST dataset demonstrate the effectiveness of our framework compared with benchmarks.
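To make the described mechanism concrete, the sketch below illustrates a risk-weighted Boltzmann selection over candidate perturbation policies, followed by standard Gaussian/Laplace DP noise calibration. It is a minimal toy sketch, not the authors' implementation: the candidate policy list, the Q-value and risk estimates (stand-ins for the paper's value and R-networks), and the `beta`/`temperature` parameters are all hypothetical; only the Laplace (scale = sensitivity/ε) and Gaussian (σ = sensitivity·√(2 ln(1.25/δ))/ε) calibrations are the standard DP formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate perturbation policies: (mechanism, privacy budget
# eps, probabilistic relaxation delta). Illustrative values only.
POLICIES = [
    ("gaussian", 0.5, 1e-5),
    ("gaussian", 1.0, 1e-5),
    ("laplace", 0.5, 0.0),
    ("laplace", 1.0, 0.0),
]

def perturb(update, mechanism, eps, delta, sensitivity=1.0):
    """Add calibrated noise to a local model update (standard DP calibrations)."""
    if mechanism == "laplace":
        # Laplace mechanism: scale = sensitivity / eps yields eps-DP.
        return update + rng.laplace(0.0, sensitivity / eps, size=update.shape)
    # Gaussian mechanism: sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps
    # yields (eps, delta)-DP.
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return update + rng.normal(0.0, sigma, size=update.shape)

def risk_aware_boltzmann(q_values, risk_values, beta=1.0, temperature=0.5):
    """Softmax over risk-penalized Q-values; beta scales the impact of risk so
    high-risk policies (severe leakage / task failure) receive low mass."""
    scores = (q_values - beta * risk_values) / temperature
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

# Toy per-policy estimates standing in for the value network and R-network.
q = np.array([0.6, 0.8, 0.5, 0.7])   # expected return (accuracy vs. privacy loss)
r = np.array([0.1, 0.9, 0.2, 0.4])   # estimated long-term risk

probs = risk_aware_boltzmann(q, r)
choice = rng.choice(len(POLICIES), p=probs)
mech, eps, delta = POLICIES[choice]
noisy_update = perturb(np.zeros(10), mech, eps, delta)
print(f"selected policy: {mech}, eps={eps}, probs={np.round(probs, 3)}")
```

Note how policy 1 (highest Q but highest risk) is down-weighted by the risk penalty; increasing `beta` makes the selection more conservative, which is the intuition behind the improved Boltzmann distribution described above.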
Published in: IEEE Transactions on Information Forensics and Security (Volume: 20)