Abstract
Deep learning has been applied in a wide variety of domains and has achieved remarkable results. Training a deep learning model generally requires a large amount of data, which may contain sensitive information; this creates a risk of privacy leakage during both model training and model deployment. As a privacy definition with a strict mathematical guarantee, differential privacy has attracted great attention and has been widely studied in recent years. However, applying differential privacy to deep learning still faces the major challenge of limiting the impact on model accuracy. In this paper, we first analyze the privacy threats in deep learning from the perspective of privacy attacks, including membership inference attacks and reconstruction attacks, and introduce the basic theory needed to apply differential privacy to deep learning. Second, to summarize how existing works apply differential privacy to deep learning, we divide perturbation mechanisms into four categories according to the stage at which noise is injected: input perturbation, parameter perturbation, objective function perturbation, and output perturbation. Finally, we summarize the challenges and future research directions for deep learning with differential privacy.
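Of the four categories above, parameter perturbation is the most widely used in practice: each example's gradient is clipped to bound its influence (fixing the L2 sensitivity) and Gaussian noise is added before the update, as in DP-SGD (Abadi et al., 2016). The sketch below illustrates that one step in plain NumPy; the function name, defaults, and the omission of privacy accounting are illustrative assumptions, not a full implementation.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative sketch of gradient (parameter) perturbation.

    Clipping bounds each example's L2 contribution by `clip_norm`, so the
    Gaussian mechanism with std `noise_multiplier * clip_norm` gives a
    differential privacy guarantee per step; the total budget over many
    steps would be tracked separately by a composition theorem, which this
    sketch deliberately omits.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping threshold.
    clipped = grad / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: a gradient of norm 5 is clipped to norm 1 before noising.
g = np.array([3.0, 4.0])
noisy = privatize_gradient(g, clip_norm=1.0, noise_multiplier=1.1,
                           rng=np.random.default_rng(0))
```

In a real training loop this step is applied per example (or per microbatch) before averaging, and the noise scale is chosen jointly with the number of steps to meet a target (ε, δ) budget.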
This work was supported in part by the National Natural Science Foundation of China under Grant 61672106, in part by the Natural Science Foundation of Beijing, China under Grant L192023, and in part by the Scientific Research Fund of Beijing Information Science and Technology University under Grant 5029923412.
© 2022 Springer Nature Singapore Pte Ltd.
Cite this paper
Zhang, Y., Cai, Y., Zhang, M., Li, X., Fan, Y. (2022). A Survey on Privacy-Preserving Deep Learning with Differential Privacy. In: Tian, Y., Ma, T., Khan, M.K., Sheng, V.S., Pan, Z. (eds) Big Data and Security. ICBDS 2021. Communications in Computer and Information Science, vol 1563. Springer, Singapore. https://doi.org/10.1007/978-981-19-0852-1_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-0851-4
Online ISBN: 978-981-19-0852-1
eBook Packages: Computer Science (R0)