Abstract
Differential privacy is a standard mathematical framework for quantifying the degree to which the privacy of individuals in a statistical dataset is preserved. We derive an optimal \((\epsilon ,\delta )\)–differentially private noise-adding mechanism for real-valued data matrices used to train machine learning models. The aim is to protect a machine learning algorithm from an adversary who seeks to gain information about the training data from the algorithm's output; protection is achieved by perturbing the values of the training data samples with noise. The fundamental trade-off between privacy and utility is addressed by a novel three-step approach: (1) sufficient conditions on the probability density function of the noise for \((\epsilon ,\delta )\)–differential privacy of a machine learning algorithm are derived; (2) the noise distribution that, for a given entropy level, minimizes the expected noise magnitude is derived; (3) using the entropy level as the design parameter, the optimal entropy level and the corresponding probability density function of the noise are derived.
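As a point of reference for the noise-adding setting described above, the sketch below applies the classical Laplace mechanism to a real-valued data matrix. This is the standard \(\epsilon\)-differentially private baseline, not the optimal \((\epsilon ,\delta )\) noise distribution derived in the paper; the function name `laplace_perturb` and the toy matrix are illustrative assumptions.

```python
import math
import random

def laplace_perturb(data, sensitivity, epsilon, seed=None):
    """Perturb each entry of a real-valued data matrix (list of lists)
    with i.i.d. zero-mean Laplace noise of scale sensitivity/epsilon."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon

    def laplace_sample():
        # Inverse-CDF sampling: U ~ Uniform(-0.5, 0.5),
        # X = -scale * sgn(U) * ln(1 - 2|U|) is Laplace(0, scale).
        u = rng.random() - 0.5
        sgn = 1.0 if u >= 0 else -1.0
        return -scale * sgn * math.log(1.0 - 2.0 * abs(u))

    return [[x + laplace_sample() for x in row] for row in data]

# Example: privatize a small 2x2 training-data matrix.
data = [[0.1, 0.5], [0.9, 0.3]]
private_data = laplace_perturb(data, sensitivity=1.0, epsilon=0.5, seed=0)
```

Smaller values of `epsilon` give stronger privacy at the cost of larger expected noise magnitude, which is precisely the privacy–utility trade-off the paper's mechanism optimizes.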
The research reported in this paper has been partly supported by EU Horizon 2020 Grant 826278 “Serums” and the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry for Digital and Economic Affairs, and the Province of Upper Austria in the frame of the COMET center SCCH.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Kumar, M., Rossbory, M., Moser, B.A., Freudenthaler, B. (2019). Deriving an Optimal Noise Adding Mechanism for Privacy-Preserving Machine Learning. In: Anderst-Kotsis, G., et al. Database and Expert Systems Applications. DEXA 2019. Communications in Computer and Information Science, vol 1062. Springer, Cham. https://doi.org/10.1007/978-3-030-27684-3_15