
Efficient Defenses Against Output Poisoning Attacks on Local Differential Privacy


Abstract:

Local differential privacy (LDP) is a promising technique for realizing privacy-preserving data aggregation without a trusted aggregator. An LDP protocol typically requires each user to locally perturb his raw data and submit the perturbed data to the aggregator. Because the perturbation is performed entirely on the user side, LDP is vulnerable to output poisoning attacks: malicious users can skip the perturbation and submit carefully crafted data to the aggregator, distorting the aggregation results. Existing verifiable LDP protocols, which can verify the perturbation process and prevent output poisoning attacks, usually incur significant computation and communication costs due to their use of zero-knowledge proofs. In this paper, we analyze the attacks on two classic LDP protocols for frequency estimation, namely generalized randomized response (GRR) and optimized unary encoding (OUE), and propose two verifiable LDP protocols. The proposed protocols are based on an interactive framework in which the user and the aggregator complete the perturbation together. By providing additional information that reveals nothing about the raw data but aids verification, the user can convince the aggregator that he is incapable of launching an output poisoning attack. Simulation results demonstrate that the proposed protocols have good defensive performance and outperform existing approaches in efficiency.
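
For readers unfamiliar with the two protocols, the following sketch illustrates the standard GRR and OUE perturbation steps and how an output poisoning attacker simply bypasses them. This is a minimal illustration in Python; the function names and parameters are our own, not taken from the paper.

import math
import random

def grr_perturb(value, domain, epsilon):
    # Generalized randomized response (GRR): report the true value with
    # probability p = e^eps / (e^eps + d - 1); otherwise report a
    # uniformly random other value from the domain.
    d = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + d - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def oue_perturb(value_index, d, epsilon):
    # Optimized unary encoding (OUE): one-hot encode the value, keep the
    # 1-bit with probability 1/2, and flip each 0-bit to 1 with
    # probability q = 1 / (e^eps + 1).
    q = 1.0 / (math.exp(epsilon) + 1.0)
    return [int(random.random() < (0.5 if i == value_index else q))
            for i in range(d)]

def poisoned_report(target):
    # Output poisoning: a malicious user skips the perturbation entirely
    # and submits a crafted report (a fixed target value for GRR, or a
    # crafted bit vector for OUE), inflating the target's estimated
    # frequency at the aggregator.
    return target

In the undefended protocols above, the aggregator cannot tell an honestly perturbed report from a crafted one, since both are syntactically valid outputs. The interactive framework proposed in the paper closes this gap by having the user and the aggregator complete the perturbation together, so that a user can no longer unilaterally choose the submitted output.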
Pages: 5506–5521
Date of Publication: 15 August 2023

