
BVDFed: Byzantine-resilient and verifiable aggregation for differentially private federated learning

  • Research Article
  • Published:
Frontiers of Computer Science

Abstract

Federated Learning (FL) has emerged as a powerful technology for collaborative training between multiple clients and a server while maintaining the data privacy of clients. To further enhance privacy in FL, Differentially Private Federated Learning (DPFL) has gradually become one of the most effective approaches. Because DPFL operates in a distributed setting, potential malicious adversaries may manipulate some clients and the aggregation server to inject malicious parameters and disrupt the learned model. However, existing aggregation protocols for DPFL address either corrupted clients (Byzantines) or a corrupted server; owing to the complicated threat model, they cannot eliminate the effects of corrupted clients and a corrupted server when both exist simultaneously. In this paper, we elaborate on this adversarial threat model and propose BVDFed, to the best of our knowledge the first Byzantine-resilient and Verifiable aggregation for Differentially private FEDerated learning. Specifically, we propose the Differentially Private Federated Averaging algorithm (DPFA) as the primary workflow of BVDFed; it is more lightweight and more easily portable than traditional DPFL algorithms. We then introduce the Loss Score to indicate the trustworthiness of disguised gradients in DPFL. Based on the Loss Score, we propose an aggregation rule, DPLoss, that eliminates faulty gradients from Byzantine clients during server aggregation while preserving the privacy of clients' data. Additionally, we design a secure verification scheme, DPVeri, compatible with DPFA and DPLoss, that supports honest clients in verifying the integrity of received aggregated results; DPVeri also resists collusion attacks by up to t participants. Theoretical analysis and experimental results demonstrate that our aggregation is feasible and effective in practice.
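The abstract describes DPFA, the Loss Score, and DPLoss only at a high level. The Python sketch below illustrates the general pattern those components suggest, under explicit assumptions: clients disguise gradients with L2 clipping plus Gaussian noise (the standard DPFL recipe), and the server scores each disguised gradient by the validation loss of a tentative model update, discarding the worst-scoring updates before averaging. The clipping bound C, noise scale sigma, the linear validation model, and the helper names (dp_disguise, loss_score, dploss_aggregate) are illustrative choices, not the paper's exact algorithms.

```python
import numpy as np

def dp_disguise(grad, clip_norm, sigma, rng):
    """Disguise a client gradient: clip to L2 norm `clip_norm`, then add
    Gaussian noise calibrated to the clipping bound (standard DPFL recipe)."""
    factor = min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad * factor + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def loss_score(model, grad, lr, X_val, y_val):
    """Loss Score of a disguised gradient: validation loss of the model
    tentatively updated with it (a linear least-squares model, for brevity)."""
    candidate = model - lr * grad
    return float(np.mean((X_val @ candidate - y_val) ** 2))

def dploss_aggregate(model, grads, lr, X_val, y_val, n_byz):
    """DPLoss-style rule: drop the `n_byz` gradients with the highest
    (least trustworthy) Loss Scores, then average the rest."""
    scores = [loss_score(model, g, lr, X_val, y_val) for g in grads]
    keep = np.argsort(scores)[: len(grads) - n_byz]
    return np.mean([grads[i] for i in keep], axis=0)

# Toy round: four honest clients plus one Byzantine client that flips and
# scales its gradient. Every update is disguised before upload.
rng = np.random.default_rng(0)
d, lr, C, sigma = 5, 0.1, 1.0, 0.05
model, true_w = np.zeros(d), np.ones(d)
X_val = rng.normal(size=(64, d))
y_val = X_val @ true_w

honest_grad = 2 * X_val.T @ (X_val @ model - y_val) / len(X_val)
grads = [dp_disguise(honest_grad + rng.normal(0.0, 0.01, d), C, sigma, rng)
         for _ in range(4)]
grads.append(dp_disguise(-10 * honest_grad, C, sigma, rng))  # Byzantine

model -= lr * dploss_aggregate(model, grads, lr, X_val, y_val, n_byz=1)
```

In this toy round the flipped gradient yields the highest Loss Score and is excluded before averaging, while the server only ever sees clipped, noised updates.

DPVeri is likewise only characterized in the abstract: honest clients can verify the integrity of the aggregated result, and collusion of up to t participants is tolerated. One generic way such verification can be realized, shown purely as a hedged illustration and not as the paper's construction, is with additively homomorphic Pedersen-style commitments: each client publishes a commitment to its quantized update, and anyone can check that the product of those commitments matches a commitment to the aggregate announced by the server. The group parameters below are toy values with no cryptographic strength.

```python
# Toy Pedersen-style verification of an announced aggregate (illustration
# only; in a real scheme the blinding values would be secret-shared so that
# no coalition of at most t participants learns an individual update).
P = 2**127 - 1          # Mersenne prime modulus, for demonstration only
G, H = 3, 7             # two fixed generators (assumed independent)

def commit(value, blind):
    """Pedersen-style commitment C = g^value * h^blind mod p."""
    return pow(G, value % (P - 1), P) * pow(H, blind % (P - 1), P) % P

updates = [5, -2, 9]     # quantized client updates
blinds = [11, 23, 42]    # per-client blinding values
commitments = [commit(v, r) for v, r in zip(updates, blinds)]

# The server announces the aggregate and the aggregate blinding.
agg, agg_blind = sum(updates), sum(blinds)

# Any client recomputes the product of commitments and compares.
product = 1
for c in commitments:
    product = product * c % P
assert product == commit(agg, agg_blind), "aggregate failed verification"
```

Because the commitments are binding, a server that announces a tampered aggregate cannot produce a blinding value that passes the final check.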



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 62072466, 62102430, 62102429, 62102422, U1811462), Natural Science Foundation of Hunan Province, China (No. 2021JJ40688) and Science Research Plan Program by NUDT (No. ZK22-50).

Author information


Corresponding author

Correspondence to Shaojing Fu.

Ethics declarations

Competing interests: The authors declare that they have no competing interests or financial conflicts to disclose.

Additional information

Xinwen Gao is currently working toward the master's degree in cyberspace security at the College of Computer, National University of Defense Technology, China. His research interests include federated learning and applied cryptography.

Shaojing Fu received his PhD degree in applied cryptography from the National University of Defense Technology, China in 2010, and studied at the University of Tokyo, Japan for one year during his PhD. He is currently a professor at the College of Computer, National University of Defense Technology, China. His research interests include cryptography theory and application, cloud storage security, and blockchain.

Lin Liu received his PhD degree in computer science and technology from the National University of Defense Technology, China in 2020. He is currently an associate professor at the College of Computer, National University of Defense Technology, China. His research interests include applied cryptography and privacy-preserving machine learning.

Yuchuan Luo received his PhD degree in computer science from the National University of Defense Technology, China in 2019. He is currently a lecturer at the College of Computer, National University of Defense Technology, China. His research interests include applied cryptography, data security, and adversarial machine learning.



About this article


Cite this article

Gao, X., Fu, S., Liu, L. et al. BVDFed: Byzantine-resilient and verifiable aggregation for differentially private federated learning. Front. Comput. Sci. 18, 185810 (2024). https://doi.org/10.1007/s11704-023-3142-5


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s11704-023-3142-5
