Private Parameter Aggregation for Federated Learning

Abstract

Federated learning enables multiple distributed participants (potentially on different datacenters or clouds) to collaborate and train machine/deep learning models by sharing parameters or gradients. However, sharing gradients, instead of centralizing data, may not be as private as one would expect: reverse-engineering attacks on plaintext gradients have been demonstrated to be practically feasible. The problem is made more insidious by the fact that participants or aggregators may reverse engineer model parameters while following the protocol faithfully (the so-called honest-but-curious trust model). Existing solutions for differentially private federated learning, while promising, lead to less accurate models and require nontrivial hyperparameter tuning. In this chapter, we (1) describe various trust models in federated learning and their challenges, (2) explore the use of secure multi-party computation techniques in federated learning, (3) explore how additive homomorphic encryption can be used efficiently for federated learning, (4) compare these techniques with alternatives such as the addition of differentially private noise and the use of specialized hardware, and (5) illustrate these techniques through real-world examples.
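
To make the additively homomorphic approach concrete, the following is a minimal sketch of private gradient aggregation using the Paillier cryptosystem. It assumes the third-party phe (python-paillier) package; the gradient values, the split of roles between key holder, participants, and aggregator, and all names are illustrative rather than the exact protocol described in this chapter.

```python
# Minimal sketch of private gradient aggregation with additively
# homomorphic (Paillier) encryption, using the third-party `phe`
# package (pip install phe). All values and party roles below are
# illustrative, not the chapter's exact protocol.
from phe import paillier

# A key holder that is NOT the aggregator generates the keypair and
# distributes only the public key to the training participants.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each participant encrypts its local gradient vector element-wise
# before sending it to the aggregator.
local_gradients = [
    [0.12, -0.40, 0.05],  # participant 1
    [0.08, -0.35, 0.10],  # participant 2
    [0.15, -0.42, 0.02],  # participant 3
]
encrypted_updates = [
    [public_key.encrypt(g) for g in grads] for grads in local_gradients
]

# The aggregator adds ciphertexts component-wise. Additive
# homomorphism lets it compute the encrypted sum without ever seeing
# a plaintext gradient.
num_params = len(encrypted_updates[0])
encrypted_sum = [
    sum((update[i] for update in encrypted_updates), public_key.encrypt(0))
    for i in range(num_params)
]

# Only the key holder can decrypt, and it learns only the aggregate
# (here, the mean update), never an individual participant's gradient.
mean_update = [
    private_key.decrypt(c) / len(local_gradients) for c in encrypted_sum
]
print(mean_update)  # ~[0.1167, -0.3900, 0.0567]
```

In a real deployment, ciphertext size and encryption cost dominate, so schemes of this kind typically quantize gradients to fixed point and pack many parameters per ciphertext. The trust model also hinges on keeping the private key away from the aggregator: a party holding both roles could decrypt individual updates.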

Author information

Correspondence to K. R. Jayaram.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Jayaram, K.R., Verma, A. (2022). Private Parameter Aggregation for Federated Learning. In: Ludwig, H., Baracaldo, N. (eds) Federated Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-96896-0_14

  • DOI: https://doi.org/10.1007/978-3-030-96896-0_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-96895-3

  • Online ISBN: 978-3-030-96896-0

  • eBook Packages: Computer Science, Computer Science (R0)
