
Cryptography-Inspired Federated Learning for Generative Adversarial Networks and Meta Learning

  • Conference paper in Advanced Data Mining and Applications (ADMA 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14177)

Abstract

Federated learning (FL) aims to derive a “better” global model without direct access to individuals’ training data. Traditionally, this is done by aggregating individual gradients perturbed with differentially private (DP) noise. We study an FL variant as a new point in the privacy-performance space: cryptographic aggregation is performed over local models instead of gradients, and each contributor then locally trains their model using a DP version of Adam on the “feedback” (e.g., fake samples from a generative adversarial network, GAN) derived from the securely aggregated global model. Intuitively, this achieves the best of both worlds: more “expressive” models, rather than just gradients, are processed in the encrypted domain without DP’s shortcomings, while heavyweight cryptography is minimized (needed only in the first step rather than throughout the entire process). Practically, we showcase this new FL variant over GANs and meta-learning, for securing new data and new tasks.
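To make the round structure concrete, below is a minimal Python sketch of the workflow the abstract describes, under assumed names and a toy one-layer generator. All functions and parameters are hypothetical: the cryptographic aggregation is abstracted to a plain average (a real deployment would use secure aggregation or homomorphic encryption over the model parameters), and the local DP training step is a stand-in (a DP-Adam sketch appears after the Notes below).

```python
import numpy as np

def secure_aggregate(local_models):
    # Placeholder for cryptographic aggregation over *model parameters*
    # (e.g., additive secret sharing or homomorphic encryption). To keep the
    # sketch self-contained, we simply average in the clear.
    return tuple(np.mean(np.stack(params), axis=0) for params in zip(*local_models))

def derive_feedback(global_model, n_samples=64, rng=np.random.default_rng(0)):
    # Placeholder "feedback": e.g., fake samples from the aggregated generator.
    w, b = global_model
    z = rng.normal(size=(n_samples, w.shape[0]))
    return np.tanh(z @ w + b)   # toy one-layer "generator" forward pass

def local_dp_update(model, feedback, lr=1e-3):
    # Stand-in for local training with a DP optimizer (per-example clipping
    # plus Gaussian noise); see the DP-Adam sketch after the Notes. The
    # feedback would drive a real local loss; here we only nudge the weights.
    w, b = model
    return (w - lr * np.sign(w), b)

# One communication round over three clients holding toy one-layer generators.
clients = [(np.random.randn(8, 4), np.zeros(4)) for _ in range(3)]
global_model = secure_aggregate(clients)        # step 1: aggregate local models
feedback = derive_feedback(global_model)        # step 2: feedback from global model
clients = [local_dp_update(m, feedback) for m in clients]  # step 3: local DP training
```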

S. S. M. Chow is supported in part by the General Research Funds (CUHK 14210621 and 14209918), University Grants Committee, Hong Kong. Wei Song is supported by the Fundamental Research Funds for the Central Universities (N2316010).

Notes

  1. It can be used to evaluate ciphertext multiplications.

  2. Although it has been wrapped up in some privacy libraries, the research literature lacks a self-contained description; a minimal sketch follows these notes.

  3. It is just an ingredient of GAN, not for any privacy purpose.

  4. We omit this straightforward proof due to the page limit.
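As noted above, a DP version of Adam is often wrapped inside privacy libraries without a self-contained description. The following is a minimal, illustrative sketch in the DP-SGD style (per-example gradient clipping, Gaussian noise on the averaged clipped gradient, then standard Adam moment updates). It is an assumption about the general technique, not this paper's exact algorithm; all names and default values are hypothetical.

```python
import numpy as np

def dp_adam_step(theta, per_example_grads, state, *, clip_norm=1.0,
                 noise_multiplier=1.0, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    rng = np.random.default_rng()
    # 1) Clip each example's gradient to bound its sensitivity.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    # 2) Average and add calibrated Gaussian noise (the DP part).
    n = len(clipped)
    g_bar = np.mean(clipped, axis=0)
    g_noisy = g_bar + rng.normal(scale=noise_multiplier * clip_norm / n,
                                 size=g_bar.shape)
    # 3) Standard Adam update driven by the noisy gradient.
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * g_noisy
    v = beta2 * v + (1 - beta2) * g_noisy ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Usage: start from state = (np.zeros_like(theta), np.zeros_like(theta), 0)
# and call dp_adam_step once per minibatch of per-example gradients.
```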

Author information

Correspondence to Sherman S. M. Chow.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zheng, Y., et al. (2023). Cryptography-Inspired Federated Learning for Generative Adversarial Networks and Meta Learning. In: Yang, X., et al. (eds.) Advanced Data Mining and Applications. ADMA 2023. Lecture Notes in Computer Science (LNAI), vol. 14177. Springer, Cham. https://doi.org/10.1007/978-3-031-46664-9_27

  • DOI: https://doi.org/10.1007/978-3-031-46664-9_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46663-2

  • Online ISBN: 978-3-031-46664-9
