
PPD-DL: Privacy-Preserving Decentralized Deep Learning

  • Conference paper
  • Artificial Intelligence and Security (ICAIS 2019)

Abstract

Privacy is a fundamental challenge when collecting the massive training data required for deep learning. Decentralized neural network training enables clients to collaboratively learn a shared prediction model without centrally storing their training data, thereby protecting each client's sensitive dataset. However, the distributed training process, which iteratively averages client-provided model updates, reveals each client's individual contribution (which can be used to infer the client's private information) to the server that maintains the global model. To address this privacy concern, we design a privacy-preserving decentralized deep learning scheme, which we term PPD-DL. PPD-DL employs two non-colluding cloud servers: one securely computes clients' local updates based on homomorphic encryption, and the other maintains the global model without learning the details of any individual contribution. During training and communication, PPD-DL ensures that no additional information is leaked to the honest-but-curious servers or to an adversary.

This work was funded by the National Natural Science Foundation of China under Grant (No. 61472097).
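To make the two-server workflow described in the abstract more concrete, the sketch below (our illustration, not the paper's exact construction) shows clients encrypting their local updates under additively homomorphic Paillier encryption, an aggregation server ("Server A") summing them without ever holding the decryption key, and a second, non-colluding server ("Server B") decrypting only the aggregate before folding it into the global model. The toy key size, the fixed-point scale, and the exact split of duties between the two servers are assumptions made for this demo.

```python
# Toy sketch of the two-server idea (an illustration, not the paper's exact
# protocol): clients encrypt local updates under additively homomorphic
# (Paillier) encryption; "Server A" only multiplies ciphertexts (= adds the
# hidden plaintexts) and "Server B" decrypts nothing but the aggregate.
# Toy primes, SCALE, and the server names are assumptions for the demo.
import random
from math import gcd

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse(a, m):
    g, x, _ = egcd(a % m, m)
    assert g == 1
    return x % m

# --- Minimal textbook Paillier (key far too small to be secure) -------------
P, Q = 1789, 1861
N, N2 = P * Q, (P * Q) ** 2
LAM = (P - 1) * (Q - 1)
MU = inverse(LAM, N)

def encrypt(m):
    r = random.randrange(1, N)
    while gcd(r, N) != 1:                              # fresh randomness per ciphertext
        r = random.randrange(1, N)
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2    # (1+n)^m * r^n mod n^2

def decrypt(c):
    m = (pow(c, LAM, N2) - 1) // N * MU % N            # L(c^lambda) * mu mod n
    return m - N if m > N // 2 else m                  # map back to signed values

SCALE = 1000  # fixed-point encoding of floating-point gradients

def enc_update(update):
    """Client: encrypt one flattened model update."""
    return [encrypt(int(round(g * SCALE)) % N) for g in update]

def aggregate(encrypted_updates):
    """Server A: multiply ciphertexts coordinate-wise, i.e. add plaintexts."""
    total = [1] * len(encrypted_updates[0])
    for upd in encrypted_updates:
        total = [(t * c) % N2 for t, c in zip(total, upd)]
    return total

def apply_aggregate(global_model, agg, n_clients, lr=1.0):
    """Server B: decrypt only the aggregate, average it, update the model."""
    avg = [decrypt(c) / (SCALE * n_clients) for c in agg]
    return [w + lr * d for w, d in zip(global_model, avg)]

if __name__ == "__main__":
    client_updates = [[0.12, -0.05], [0.08, 0.01], [0.10, -0.02]]
    encrypted = [enc_update(u) for u in client_updates]   # done on each client
    summed = aggregate(encrypted)                          # Server A: ciphertexts only
    print(apply_aggregate([0.0, 0.0], summed, len(client_updates)))  # ~[0.10, -0.02]
```

Because the servers are assumed not to collude, neither learns an individual contribution: Server A holds only ciphertexts, and Server B only ever decrypts the sum.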



Author information

Corresponding author: Chunguang Ma


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Song, L., Ma, C., Wu, P., Zhang, Y. (2019). PPD-DL: Privacy-Preserving Decentralized Deep Learning. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science, vol 11632. Springer, Cham. https://doi.org/10.1007/978-3-030-24274-9_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-24274-9_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-24273-2

  • Online ISBN: 978-3-030-24274-9

  • eBook Packages: Computer Science, Computer Science (R0)
