Abstract
We build a privacy-preserving deep learning system in which many learning participants perform neural-network-based deep learning over the combined dataset of all participants, without actually revealing the participants’ local data to a curious server. To that end, we revisit the previous work by Shokri and Shmatikov (ACM CCS 2015) and point out that local data information may actually be leaked to an honest-but-curious server. We then fix that problem by building an enhanced system with the following properties: (1) no information is leaked to the server; and (2) accuracy is kept intact, compared to that of an ordinary deep learning system trained over the same combined dataset. Our system makes use of additively homomorphic encryption, and we show that this use of encryption adds little overhead to the ordinary deep learning system.
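The core idea in the abstract can be illustrated with a toy sketch: with an additively homomorphic scheme, the server can combine participants' encrypted gradients (here, multiplying ciphertexts adds plaintexts) and never sees any individual gradient. This is not the paper's implementation; as a stand-in we use a minimal Paillier instance with hypothetical, insecurely small demo primes and integer-encoded gradient values.

```python
# Toy sketch (NOT the paper's scheme): additively homomorphic encryption
# lets an honest-but-curious server aggregate encrypted gradients without
# learning any participant's plaintext gradient.
import random

# Demo primes only -- far too small for any real security.
p, q = 1_000_003, 1_000_033
n = p * q
n2 = n * n
g = n + 1                       # standard Paillier generator choice g = n+1
lam = (p - 1) * (q - 1)         # phi(n); works in place of lcm for g = n+1
mu = pow(lam, -1, n)            # precomputed decryption constant

def encrypt(m):
    """Paillier encryption: c = g^m * r^n mod n^2, fresh random r."""
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n."""
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

def he_add(c1, c2):
    """Homomorphic addition: multiplying ciphertexts adds plaintexts."""
    return (c1 * c2) % n2

# Hypothetical integer-encoded gradients from three participants.
grads = [12345, 67890, 424242]
agg = encrypt(0)
for gi in grads:
    agg = he_add(agg, encrypt(gi))   # the server only sees ciphertexts

print(decrypt(agg))                  # -> 504477, i.e. sum(grads)
```

Only the key holder can run `decrypt`; the aggregating server performs `he_add` on ciphertexts alone, which is the property the enhanced system relies on.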
Notes
- 1.
Size of 140,106 gradients, each of 32 bits; the size in megabytes is computed via \(\frac{140106\times 32}{8\times 10^6} \approx 0.56\).
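As a quick check of the footnote's arithmetic, the following snippet converts 140,106 gradients of 32 bits each into megabytes (taking 1 MB = \(10^6\) bytes, as the formula does):

```python
# Verify the footnote: total gradient payload in megabytes.
n_gradients = 140106
bits_each = 32
size_mb = n_gradients * bits_each / (8 * 10**6)  # bits -> bytes -> MB
print(round(size_mb, 2))  # -> 0.56
```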
References
Stanford Deep Learning Tutorial. http://deeplearning.stanford.edu
The MNIST dataset. http://yann.lecun.com/exdb/mnist/
Aono, Y., Boyen, X., Phong, L.T., Wang, L.: Key-Private Proxy Re-encryption under LWE. In: Paul, G., Vaudenay, S. (eds.) INDOCRYPT 2013. LNCS, vol. 8250, pp. 1–18. Springer, Cham (2013). doi:10.1007/978-3-319-03515-4_1
Aono, Y., Hayashi, T., Phong, L.T., Wang, L.: Efficient key-rotatable and security-updatable homomorphic encryption. In: Fifth ACM International Workshop on Security in Cloud Computing (SCC), 2017, pp. 35–42. ACM (2017)
Chillotti, I., Gama, N., Georgieva, M., Izabachène, M.: Faster fully homomorphic encryption: bootstrapping in less than 0.1 seconds. In: Cheon, J.H., Takagi, T. (eds.) ASIACRYPT 2016. LNCS, vol. 10031, pp. 3–33. Springer, Heidelberg (2016). doi:10.1007/978-3-662-53887-6_1
Dean, J., Corrado, G., Monga, R., Chen, K., Devin, M., Le, Q.V., Mao, M.Z., Ranzato, M., Senior, A.W., Tucker, P.A., Yang, K., Ng, A.Y.: Large scale distributed deep networks. In: NIPS 2012, pp. 1232–1240 (2012)
Goldreich, O.: The Foundations of Cryptography - Volume 2, Basic Applications. Cambridge University Press, Cambridge (2004)
Hitaj, B., Ateniese, G., Pérez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. CoRR abs/1702.07464 (2017)
Lindner, R., Peikert, C.: Better key sizes (and Attacks) for LWE-based encryption. In: Kiayias, A. (ed.) CT-RSA 2011. LNCS, vol. 6558, pp. 319–339. Springer, Heidelberg (2011). doi:10.1007/978-3-642-19074-2_21
Liu, M., Nguyen, P.Q.: Solving BDD by enumeration: an update. In: Dawson, E. (ed.) CT-RSA 2013. LNCS, vol. 7779, pp. 293–309. Springer, Heidelberg (2013). doi:10.1007/978-3-642-36095-4_19
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning. In: NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011 (2011)
Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: ACM CCS 2015, pp. 1310–1321. ACM (2015)
Acknowledgement
This work is partially supported by JST CREST #JPMJCR168A.
Copyright information
© 2017 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Phong, L.T., Aono, Y., Hayashi, T., Wang, L., Moriai, S. (2017). Privacy-Preserving Deep Learning: Revisited and Enhanced. In: Batten, L., Kim, D., Zhang, X., Li, G. (eds) Applications and Techniques in Information Security. ATIS 2017. Communications in Computer and Information Science, vol 719. Springer, Singapore. https://doi.org/10.1007/978-981-10-5421-1_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-5420-4
Online ISBN: 978-981-10-5421-1