Abstract
Privacy is a fundamental challenge in collecting the massive training data that deep learning requires. Decentralized neural network training enables clients to collaboratively learn a shared prediction model without centrally storing their sensitive training data. However, the distributed training process, which iteratively averages client-provided model updates, reveals each client's individual contribution to the server that maintains the global model, and these contributions can be used to infer clients' private information. To address this privacy concern, we design a privacy-preserving decentralized deep learning scheme, which we term PPD-DL. PPD-DL employs two non-colluding cloud servers: one securely computes clients' local updates based on homomorphic encryption, while the other maintains the global model without learning the details of any individual contribution. During training and communication, PPD-DL ensures that no additional information is leaked to the honest-but-curious servers or to an adversary.
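The core mechanism the abstract describes, one server aggregating encrypted client updates while only the other holds the decryption key, can be sketched with an additively homomorphic scheme such as Paillier. The following is a minimal illustrative sketch, not the paper's actual protocol: the toy key size, the fixed-point scale, and the two-client setup are assumptions for demonstration only.

```python
import math
import random

# Demo-size primes (Mersenne primes); real deployments use >= 2048-bit keys.
P, Q = 2147483647, 2305843009213693951
N = P * Q
N2 = N * N
G = N + 1
LAM = (P - 1) * (Q - 1) // math.gcd(P - 1, Q - 1)  # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                               # valid since g = n + 1

def encrypt(m: int) -> int:
    """Paillier encryption of m in Z_n (negative values wrap mod n)."""
    r = random.randrange(2, N)  # random blinding factor
    return (pow(G, m % N, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Decrypt and decode the result as a signed integer."""
    m = ((pow(c, LAM, N2) - 1) // N) * MU % N
    return m - N if m > N // 2 else m  # upper half of Z_n encodes negatives

def he_add(c1: int, c2: int) -> int:
    """Homomorphic addition: the product of ciphertexts encrypts the sum."""
    return (c1 * c2) % N2

SCALE = 10**6  # fixed-point scale for real-valued gradients

# Two clients encrypt their local gradient vectors.
grads_a = [0.5, -1.25, 3.0]
grads_b = [-0.5, 2.25, 1.0]
enc_a = [encrypt(round(g * SCALE)) for g in grads_a]
enc_b = [encrypt(round(g * SCALE)) for g in grads_b]

# The aggregation server combines ciphertexts without seeing any plaintext.
enc_sum = [he_add(a, b) for a, b in zip(enc_a, enc_b)]

# The key-holding server decrypts only the aggregate (here, a 2-client average),
# never an individual client's update.
avg = [decrypt(c) / SCALE / 2 for c in enc_sum]
print(avg)  # -> [0.0, 0.5, 2.0]
```

Because the aggregation server sees only ciphertexts and the key-holding server sees only the already-summed update, neither server alone learns an individual client's contribution, which is the separation of roles the two-server design relies on.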
This work was funded by the National Natural Science Foundation of China under Grant No. 61472097.
© 2019 Springer Nature Switzerland AG
Cite this paper
Song, L., Ma, C., Wu, P., Zhang, Y. (2019). PPD-DL: Privacy-Preserving Decentralized Deep Learning. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science, vol 11632. Springer, Cham. https://doi.org/10.1007/978-3-030-24274-9_24
Print ISBN: 978-3-030-24273-2
Online ISBN: 978-3-030-24274-9