
Robust and Verifiable Privacy Federated Learning


Impact Statement:
Society is becoming increasingly dependent on data, so individuals and enterprises are placing a high value on their data and demanding stronger privacy protection. However, new privacy-preserving technologies bring new security challenges, such as ensuring that uploaded data are honest and that aggregation results are genuinely computed from users' data. We are the first to propose an approach that addresses these challenges by protecting users' privacy while maintaining the robustness and integrity of their data during aggregation. This approach can be useful in areas such as healthcare, finance, and IoT, where honest data and privacy protection are crucial.

Abstract:

Federated learning (FL) safeguards user privacy by uploading gradients instead of raw data. However, inference attacks can reconstruct raw data from the gradients users upload in FL. To mitigate this issue, researchers have combined privacy-preserving computation techniques with FL. However, these techniques may ensure neither the Byzantine robustness of the aggregation nor the integrity of the aggregated results. Most current robust privacy-preserving FL methods assess differences between gradients and a benchmark only in direction, allowing adversaries to poison the aggregation through the gradient magnitude. Furthermore, these methods cannot ensure the integrity of the aggregation results. To overcome these challenges, this study proposes a novel algorithm, robust and verifiable privacy federated learning (RVPFL), which more effectively defends against adversarial poisoning attacks by measuring both the direction and the magnitude of gradients in the ciphertext state. The proposed algorithm also guarantees the integrity of the server's aggregation results.
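The paper itself operates on encrypted gradients, but the core robustness idea described in the abstract, filtering by gradient direction and bounding gradient magnitude against a benchmark, can be illustrated in plaintext. The sketch below is a hypothetical, simplified illustration (the function name `robust_aggregate`, the cosine threshold, and norm clipping to the benchmark's norm are assumptions for illustration, not RVPFL's actual ciphertext protocol):

```python
import math

def cosine(u, v):
    # Cosine similarity between two gradient vectors; 0.0 if either is zero.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def robust_aggregate(gradients, benchmark, cos_threshold=0.0):
    """Direction check: discard gradients whose cosine similarity with the
    benchmark gradient is at or below the threshold (poisoned direction).
    Magnitude check: clip each accepted gradient to the benchmark's norm,
    so an attacker cannot dominate the average with a scaled-up gradient."""
    bench_norm = math.sqrt(sum(b * b for b in benchmark))
    accepted = []
    for g in gradients:
        if cosine(g, benchmark) <= cos_threshold:
            continue  # direction-based filtering
        g_norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, bench_norm / g_norm) if g_norm else 0.0
        accepted.append([x * scale for x in g])  # magnitude clipping
    if not accepted:
        return benchmark[:]  # fall back to the benchmark if all are rejected
    n = len(accepted)
    return [sum(col) / n for col in zip(*accepted)]
```

With benchmark `[1.0, 0.0]`, a sign-flipped gradient such as `[-1.0, 0.0]` is rejected by the direction check, while a magnitude-poisoned gradient such as `[10.0, 0.0]` passes the direction check but is scaled down to the benchmark's norm before averaging, which is exactly the gap (magnitude-based poisoning) that direction-only defenses leave open.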
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 4, April 2024)
Page(s): 1895 - 1908
Date of Publication: 28 August 2023
Electronic ISSN: 2691-4581

