Lightweight Byzantine-Robust and Privacy-Preserving Federated Learning

  • Conference paper
Euro-Par 2024: Parallel Processing (Euro-Par 2024)

Abstract

Federated learning (FL) is a distributed machine learning approach that reduces data transfer by aggregating gradients from multiple users. However, this process raises concerns about user privacy, leading to the emergence of privacy-preserving FL. Unfortunately, this development poses new Byzantine-robustness challenges, as poisoning attacks become difficult to detect. Existing Byzantine-robust algorithms operate primarily on plaintext, and, crucially, current Byzantine-robust privacy-preserving FL methods fail to simultaneously defend against adaptive attacks. In response, we propose a lightweight, Byzantine-robust, and privacy-preserving federated learning framework (LRFL) that employs shuffle functions and encryption masks to ensure privacy. In addition, we comprehensively measure the similarity of both the direction and the magnitude of each gradient vector to ensure Byzantine robustness. To the best of our knowledge, LRFL is the first Byzantine-robust privacy-preserving FL scheme capable of identifying malicious users based on gradient angles and magnitudes. Moreover, the theoretical complexity of LRFL is \(\mathcal {O}(dN + dN\log N)\), comparable to that of Byzantine-robust FL with user number N and gradient dimension d. Experimental results demonstrate that LRFL achieves accuracy similar to state-of-the-art methods under multiple attack scenarios.
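The direction-and-magnitude scoring idea mentioned in the abstract can be sketched in plaintext as follows. This is a minimal illustrative Python sketch, not LRFL's actual protocol (which additionally involves shuffling and encryption masks): here each user's gradient is scored by its cosine similarity to a reference direction, assumed for illustration to be the coordinate-wise median, combined with a norm-ratio term for magnitude, and only the top-scoring gradients are averaged. The function name `robust_aggregate` and the `keep_frac` parameter are assumptions for this sketch.

```python
import numpy as np

def robust_aggregate(grads, keep_frac=0.5):
    """Illustrative Byzantine-robust aggregation: score each user's
    gradient by direction (cosine similarity to the coordinate-wise
    median) and magnitude (norm ratio), then average the top scorers."""
    G = np.asarray(grads, dtype=float)           # shape (N, d)
    ref = np.median(G, axis=0)                   # robust reference gradient
    ref_norm = np.linalg.norm(ref) + 1e-12
    norms = np.linalg.norm(G, axis=1) + 1e-12
    cos = (G @ ref) / (norms * ref_norm)         # direction similarity in [-1, 1]
    mag = np.minimum(norms, ref_norm) / np.maximum(norms, ref_norm)  # magnitude similarity in (0, 1]
    score = cos * mag                            # combined trust score
    k = max(1, int(keep_frac * len(G)))
    keep = np.argsort(score)[-k:]                # indices of most trustworthy users
    return G[keep].mean(axis=0)
```

With three honest users near the true gradient and one Byzantine user submitting a large opposite-direction vector, the Byzantine gradient receives a negative score and is excluded from the average.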

This work is supported by the Major Research Plan of Hubei Province under Grant/Award No. 2023BAA027 and by the Science, Technology and Innovation Commission of Shenzhen Municipality of China under Grant No. JCYJ20210324120002006.



Author information

Corresponding author: Yongquan Cui.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lu, Z. et al. (2024). Lightweight Byzantine-Robust and Privacy-Preserving Federated Learning. In: Carretero, J., Shende, S., Garcia-Blas, J., Brandic, I., Olcoz, K., Schreiber, M. (eds) Euro-Par 2024: Parallel Processing. Euro-Par 2024. Lecture Notes in Computer Science, vol 14802. Springer, Cham. https://doi.org/10.1007/978-3-031-69766-1_19

  • DOI: https://doi.org/10.1007/978-3-031-69766-1_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-69765-4

  • Online ISBN: 978-3-031-69766-1

  • eBook Packages: Computer Science, Computer Science (R0)
