Abstract:
Federated learning (FL) provides a learning framework in which participants do not share local raw data, but individual privacy is still at risk of disclosure through attacks on the trained models. Due to its strong privacy guarantee, differential privacy (DP) is widely applied to FL to avoid privacy leakage. Traditional private learning adds noise directly to the gradients; the noise accumulated on the model parameters over successive rounds severely impairs learning effectiveness. To solve this problem, we introduce the idea of differentially private continuous data release (DPCR) into FL and propose an FL framework based on DPCR (FL-DPCR). Meanwhile, our proposed Equivalent Aggregation Theorem demonstrates that DPCR effectively reduces the overall error added to the model parameters and improves FL's accuracy. To improve FL-DPCR's learning effectiveness, we introduce the Matrix Mechanism to construct a release strategy and design a binary-indexed-tree (BIT) based DPCR model for the Gaussian mechanism (BCRG). By solving a complex nonlinear programming problem with negative exponents, BCRG achieves optimal release accuracy efficiently. In addition, we exploit the residual privacy budget to boost the accuracy further and propose an advanced version of BCRG (ABCRG). Our experiments show that, compared to traditional FL with DP, our methods improve accuracy, with gains ranging from 3.4% on FMNIST to 65.7% on PAMAP2.
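To make the core idea concrete, below is a minimal sketch of tree-based differentially private continuous release with Gaussian noise, in the spirit of the classic binary mechanism that BIT-based DPCR builds on. It is not the paper's BCRG algorithm: the function name, noise calibration, and interfaces are illustrative assumptions. Each of the T per-round values contributes to O(log T) tree nodes, each perturbed once, so the t-th released prefix sum aggregates only O(log t) Gaussians instead of t, which is the error reduction DPCR brings to FL.

```python
import numpy as np


def tree_gaussian_release(updates, sigma, rng=None):
    """Illustrative binary-mechanism continuous release (not the paper's BCRG).

    updates: (T, d) array of per-round values (e.g., clipped gradient sums).
    sigma:   per-node Gaussian noise scale; calibrating it to (epsilon, delta)
             must account for each value touching O(log T) nodes (assumption:
             that calibration is done by the caller).
    Returns a (T, d) array of noisy prefix sums.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, d = updates.shape
    levels = T.bit_length()
    alpha = np.zeros((levels + 1, d))   # exact partial sums, one per tree level
    noisy = np.zeros((levels + 1, d))   # their noise-perturbed counterparts
    out = np.empty((T, d))
    for t in range(1, T + 1):
        i = (t & -t).bit_length() - 1   # level of the lowest set bit of t
        # Close the node at level i: it absorbs all lower-level partial sums.
        alpha[i] = alpha[:i].sum(axis=0) + updates[t - 1]
        alpha[:i] = 0.0
        noisy[:i] = 0.0
        noisy[i] = alpha[i] + rng.normal(0.0, sigma, size=d)
        # Prefix sum at step t = sum of noisy nodes at the set-bit levels of t.
        lv = [j for j in range(levels + 1) if (t >> j) & 1]
        out[t - 1] = noisy[lv].sum(axis=0)
    return out
```

In a DP-FL loop, the aggregator would feed each round's aggregated (clipped) update into such a release structure and apply the returned noisy prefix sums to the global model, rather than adding fresh noise to every round's gradient independently.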
Published in: IEEE Transactions on Dependable and Secure Computing (Volume: 21, Issue: 5, Sept.-Oct. 2024)