Privacy-Preserving Stochastic Gradient Descent with Multiple Distributed Trainers

  • Conference paper

Network and System Security (NSS 2017)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 10394)

Abstract

Assume there are L local datasets distributed among L owners (also called trainers hereafter). The problem is as follows: the owners wish to apply a machine learning method over the combined dataset to obtain the best possible learning output, but do not want to publicly share their local datasets due to privacy concerns. In this paper we design a system solving this problem, using stochastic gradient descent (SGD) as the machine learning method, since SGD is at the heart of recent deep learning techniques. Our system differs from existing work in the following features: (1) we do not share the gradients in SGD but instead share the weight parameters; and (2) we use symmetric encryption to protect the weight parameters against an honest-but-curious server used as a common place for storage. We are therefore able to avoid leaking information about the local data to the server, while the efficiency of our system remains reasonable compared to running the original SGD over the combined dataset. Finally, we experiment on a real dataset to verify the practicality of our system.
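To make the workflow concrete, the following is a minimal, hypothetical sketch of the setting the abstract describes: trainers take turns running local SGD, then upload their weight parameters to a storage server in symmetrically encrypted form, so the server only ever handles ciphertext. This is an illustration, not the paper's actual protocol: the SHA-256-based XOR keystream below is a stand-in for a real symmetric cipher (the paper's setting would call for authenticated encryption, cf. [6]), and all names, the toy logistic-regression model, and the data are invented for the example.

```python
import hashlib
import json
import math
import os
import struct

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """SHA-256 counter-mode keystream. Placeholder only:
    a real system would use a vetted cipher such as AES-GCM."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

def local_sgd(weights, data, lr=0.1):
    """One pass of SGD for toy logistic regression over a local dataset."""
    for x, y in data:
        z = sum(w * xi for w, xi in zip(weights, x))
        p = 1.0 / (1.0 + math.exp(-z))
        weights = [w + lr * (y - p) * xi for w, xi in zip(weights, x)]
    return weights

key = os.urandom(32)      # symmetric key shared among trainers, unknown to the server
server_storage = None     # honest-but-curious server: stores only ciphertext

weights = [0.0, 0.0]
local_datasets = [
    [([1.0, 0.0], 1), ([0.0, 1.0], 0)],   # trainer 1's private data
    [([1.0, 1.0], 1), ([0.0, 0.0], 0)],   # trainer 2's private data
]

# Trainers take turns: download + decrypt weights, train locally,
# encrypt + upload the updated weights. Gradients are never shared.
for local_data in local_datasets:
    if server_storage is not None:
        weights = json.loads(decrypt(key, server_storage))
    weights = local_sgd(weights, local_data)
    server_storage = encrypt(key, json.dumps(weights).encode())

final = json.loads(decrypt(key, server_storage))
```

The point of the sketch is the data flow: each trainer's raw data never leaves its machine, and the server sees only encrypted weight parameters, so it learns nothing about the local datasets beyond ciphertext sizes and timing.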


Notes

  1. Some documents, such as [3], exclude the bias nodes and use a separate variable b.

References

  1. Anaconda cryptography package. https://anaconda.org/pypi/cryptography

  2. Deep learning documentation. http://deeplearning.net/tutorial/mlp.html

  3. Stanford Deep Learning Tutorial. http://deeplearning.stanford.edu

  4. The MNIST dataset. http://yann.lecun.com/exdb/mnist/

  5. Abadi, M., Chu, A., Goodfellow, I.J., McMahan, H.B., Mironov, I., Talwar, K., Zhang, L.: Deep learning with differential privacy. In: Weippl, E.R., Katzenbeisser, S., Kruegel, C., Myers, A.C., Halevi, S. (eds.) Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318. ACM (2016)

  6. Bellare, M., Namprempre, C.: Authenticated encryption: relations among notions and analysis of the generic composition paradigm. J. Cryptol. 21(4), 469–491 (2008)

  7. Gilad-Bachrach, R., Dowlin, N., Laine, K., Lauter, K.E., Naehrig, M., Wernsing, J.: CryptoNets: applying neural networks to encrypted data with high throughput and accuracy. In: Balcan, M., Weinberger, K.Q. (eds.) Proceedings of the 33rd International Conference on Machine Learning, ICML 2016. JMLR Workshop and Conference Proceedings, New York City, NY, USA, 19–24 June 2016, vol. 48, pp. 201–210. JMLR.org (2016)

  8. Goldreich, O.: Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press, New York (2004)

  9. Hitaj, B., Ateniese, G., Pérez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. CoRR, abs/1702.07464 (2017)

  10. Phong, L.T., Aono, Y., Hayashi, T., Wang, L., Moriai, S.: Privacy-preserving deep learning: revisited and enhanced. In: Batten, L., Kim, D., Zhang, X., Li, G. (eds.) ATIS 2017. CCIS, vol. 719, pp. 1–11. Springer, Singapore (2017). doi:10.1007/978-981-10-5421-1_9

  11. Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Ray, I., Li, N., Kruegel, C. (eds.) Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321. ACM (2015)


Acknowledgement

This work is partially supported by JST CREST #JPMJCR168A.

Author information

Corresponding author

Correspondence to Le Trieu Phong.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Phong, L.T. (2017). Privacy-Preserving Stochastic Gradient Descent with Multiple Distributed Trainers. In: Yan, Z., Molva, R., Mazurczyk, W., Kantola, R. (eds) Network and System Security. NSS 2017. Lecture Notes in Computer Science(), vol 10394. Springer, Cham. https://doi.org/10.1007/978-3-319-64701-2_38

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-64701-2_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-64700-5

  • Online ISBN: 978-3-319-64701-2

  • eBook Packages: Computer Science, Computer Science (R0)
