
An Improved Privacy-Preserving Stochastic Gradient Descent Algorithm

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12486)

Abstract

Deep learning techniques based on neural networks have achieved remarkable success in many fields of Artificial Intelligence. However, model training requires large-scale datasets, which are often crowd-sourced, and the trained model parameters can encode private information, creating a risk of privacy leakage. With the growing trend of sharing pre-trained models, the risk of recovering training data through membership inference attacks and model inversion attacks is further heightened. To tackle this problem, we propose an improved Differentially Private Stochastic Gradient Descent algorithm that uses a simulated annealing algorithm and a denoising mechanism to optimize the allocation of the privacy loss and improve model accuracy. We also analyze in detail the privacy cost of random-shuffle batch processing within the framework of Subsampled Rényi Differential Privacy. Our experiments show that, compared with existing methods, we can train deep neural networks with non-convex objective functions more efficiently under moderate privacy budgets.
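
To make the setting concrete, the following is a minimal sketch of one differentially private SGD step of the kind the abstract builds on (per-example gradient clipping plus calibrated Gaussian noise). The annealing-style schedule for the noise multiplier is an illustrative assumption, not the paper's exact allocation method, and the names dp_sgd_step and annealed_sigma are hypothetical.

```python
import numpy as np

def dp_sgd_step(per_example_grads, C=1.0, sigma=1.1, rng=None):
    """One DP-SGD update: clip each example's gradient to L2 norm C,
    sum, add Gaussian noise with std sigma*C, and average."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)
    noise = rng.normal(0.0, sigma * C, size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

def annealed_sigma(sigma0, step, decay=0.98, sigma_min=0.7):
    """Hypothetical annealing-style schedule: gradually lower the noise
    multiplier (i.e., spend more privacy budget) as training proceeds."""
    return max(sigma_min, sigma0 * decay**step)

# Usage: a noisy average gradient for a batch of 32 ten-dimensional gradients.
g = np.random.default_rng(1).normal(size=(32, 10))
update = dp_sgd_step(g, C=1.0, sigma=annealed_sigma(1.1, step=5))
```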



Acknowledgements

This work was supported by the Beijing Municipal Natural Science Foundation (Grant No. 4202035), the Fundamental Research Funds for the Central Universities (Grant No. YWF-20-BJ-J-1040), the National Key R&D Program of China (Grant No. 2016QY04W0802), and the National Natural Science Foundation of China (Grant Nos. 61602025, U1636211, and 61170189).

Author information

Correspondence to Yanqing Yao.

Appendix. Proof of Theorem 1

Theorem 1. Suppose that a mechanism \(\mathcal{M}\) consists of a sequence of k adaptive mechanisms \(\mathcal{M}_1,\dots,\mathcal{M}_k\), where each \(\mathcal{M}_i: \prod_{j=1}^{i-1}\mathcal{R}_j\times D\rightarrow \mathcal{R}_i\) satisfies \((\alpha,\epsilon_i)\)-RDP \((1\le i\le k)\). Let \(\mathbb{D}_1,\mathbb{D}_2,\dots,\mathbb{D}_k\) be the result of a randomized partitioning of the input domain \(\mathbb{D}\). Then the mechanism \(\mathcal{M}(D)=(\mathcal{M}_1(D\cap \mathbb{D}_1),\dots,\mathcal{M}_k(D\cap \mathbb{D}_k))\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} (\alpha,\epsilon)\text{-RDP}, & \text{if } \epsilon_i=\epsilon\ \ \forall i \\ (\alpha,\max_i \epsilon_i)\text{-RDP}, & \text{if } \epsilon_i\ne \epsilon_j \text{ for some } i,j \end{array}\right.} \end{aligned}$$
(12)
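
Read operationally, the theorem says that parallel composition over a random partition costs only the worst per-partition budget at each order \(\alpha\). A minimal accounting sketch (the helper name is hypothetical, not from the paper):

```python
def parallel_rdp_epsilon(eps_list):
    """RDP parameter at a fixed order alpha for
    M(D) = (M_1(D ∩ D_1), ..., M_k(D ∩ D_k)) under Theorem 1:
    the bound is the worst per-partition epsilon, which reduces
    to the common epsilon when all partitions use the same one."""
    return max(eps_list)

# Example: equal budgets give epsilon itself; unequal budgets give the max.
assert parallel_rdp_epsilon([0.5, 0.5, 0.5]) == 0.5
assert parallel_rdp_epsilon([0.3, 0.7, 0.5]) == 0.7
```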

Proof

Let D and \(D'\) be two neighboring datasets. Without loss of generality, assume that D contains one more element \(d_e\) than \(D'\). Let \(D_i=D\cap \mathbb{D}_i\) and \(D_i'= D'\cap \mathbb{D}_i\). Then there exists j such that \(D_j\) contains one more element than \(D_j'\), and \(D_i= D_i'\) for any \(i\ne j\). Consider any sequence of outcomes \(o=(o_1,\dots,o_k)\) of \(\mathcal{M}_1(D_1),\dots,\mathcal{M}_k(D_k)\).

Because only \(D_j\) differs from \(D_j'\), for any \(i\ne j\) we have \(Pr[\mathcal{M}_i(D_i) = o_i\mid \mathcal{M}_{i-1}(D_{i-1}) = o_{i-1},\dots,\mathcal{M}_1(D_1) = o_1]\) equal to \(Pr[\mathcal{M}_i(D_i') = o_i\mid \mathcal{M}_{i-1}(D_{i-1}') = o_{i-1},\dots,\mathcal{M}_1(D_1') = o_1]\).

Then, we have

$$\begin{aligned} Z_j^{(o)} &\triangleq \frac{Pr[\mathcal{M}(D)=o]}{Pr[\mathcal{M}(D')=o]} \\ &= \frac{\prod_{i\in [k]} Pr[\mathcal{M}_i(D_i)=o_i\mid \mathcal{M}_{i-1}(D_{i-1})=o_{i-1},\dots,\mathcal{M}_1(D_1)=o_1]}{\prod_{i\in [k]} Pr[\mathcal{M}_i(D_i')=o_i\mid \mathcal{M}_{i-1}(D_{i-1}')=o_{i-1},\dots,\mathcal{M}_1(D_1')=o_1]} \\ &= \frac{Pr[\mathcal{M}_j(D_j)=o_j\mid \mathcal{M}_{j-1}(D_{j-1})=o_{j-1},\dots,\mathcal{M}_1(D_1)=o_1]}{Pr[\mathcal{M}_j(D_j')=o_j\mid \mathcal{M}_{j-1}(D_{j-1}')=o_{j-1},\dots,\mathcal{M}_1(D_1')=o_1]} \\ &\triangleq c_j(o_j;o_1,\dots,o_{j-1}). \end{aligned}$$

Once the prefix \((o_1,\dots,o_{j-1})\) is fixed, \(Z_j\triangleq c_j(o_j;o_1,\dots,o_{j-1})=\frac{Pr[\mathcal{M}_j(D_j)=o_j]}{Pr[\mathcal{M}_j(D_j')=o_j]}\).

By the \((\alpha,\epsilon_j)\)-RDP property of \(\mathcal{M}_j\), \(D_{\alpha}\triangleq \frac{1}{\alpha-1}\log \mathbb{E}[Z_j^{\alpha}]\le \epsilon_j\).
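
For reference, a standard concrete instance of such a per-mechanism RDP bound is the Gaussian mechanism, whose Rényi divergence has a closed form (Mironov, 2017); the function name below is assumed for illustration:

```python
def gaussian_rdp(alpha: float, sigma: float, sensitivity: float = 1.0) -> float:
    """RDP parameter of the Gaussian mechanism at order alpha: adding
    N(0, sigma^2) noise to a query with the given L2 sensitivity
    satisfies (alpha, alpha * sensitivity^2 / (2 * sigma^2))-RDP."""
    return alpha * sensitivity ** 2 / (2.0 * sigma ** 2)

# Example: sigma = 4 at order alpha = 32 gives epsilon = 1.0.
assert gaussian_rdp(32, 4.0) == 1.0
```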

Because of the randomized partitioning of the input domain \(\mathbb{D}\), the extra element \(d_e\) of D is mapped uniformly at random to one of the k partitions. Therefore, j is uniformly distributed over \(\{1,\dots,k\}\), and thus the random variable \(Z^{(o)}\) under the random data partition is a mixture of the independent random variables \(Z_1^{(o)},\dots,Z_k^{(o)}\):

\(f(Z^{(o)})=\frac{1}{k}f(Z_1^{(o)})+\dots+\frac{1}{k}f(Z_k^{(o)})\)

where f(X) denotes the probability density function of X.
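
The identity used next, that the \(\alpha\)-th moment of this uniform mixture is the average of the component moments, is easy to check numerically; below is a quick Monte Carlo sanity check using lognormal stand-ins for the \(Z_j^{(o)}\) (the distributions and tolerance are illustrative assumptions):

```python
import numpy as np

# Monte Carlo check: for a uniform mixture Z of Z_1..Z_k,
# E[Z^alpha] = (1/k) * sum_j E[Z_j^alpha].
rng = np.random.default_rng(1)
alpha, k, n = 2.0, 3, 200_000
comps = [rng.lognormal(mu, 0.3, n) for mu in (0.0, 0.1, 0.2)]  # stand-ins for Z_j
idx = rng.integers(0, k, n)                 # pick a component uniformly
mix = np.stack(comps)[idx, np.arange(n)]    # one mixture draw per sample
lhs = np.mean(mix ** alpha)                              # E[Z^alpha]
rhs = np.mean([np.mean(c ** alpha) for c in comps])      # average of E[Z_j^alpha]
assert abs(lhs - rhs) / rhs < 0.05
```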

We have

\(\mathbb{E}[(Z^{(o)})^{\alpha}]=\frac{1}{k}\sum_{j=1}^{k}\mathbb{E}[(Z_j^{(o)})^{\alpha}]\)

Because each \(Z_j^{(o)}\) satisfies RDP, by (5) we have \(\mathbb{E}[(Z_j^{(o)})^{\alpha}]\le e^{(\alpha-1)\epsilon_j}\). If \(\epsilon_j=\epsilon\) for all j, then \(\mathbb{E}[(Z^{(o)})^{\alpha}]\le e^{(\alpha-1)\epsilon}\), so \(D_{\alpha}=\frac{1}{\alpha-1}\log \mathbb{E}[(Z^{(o)})^{\alpha}]\le \epsilon\), and the mechanism \(\mathcal{M}(D)\) satisfies \((\alpha,\epsilon)\)-RDP.

If the \(\epsilon_j\) are not all equal, bounding each \(\epsilon_j\) by \(\max_j \epsilon_j\) gives \(\mathbb{E}[(Z^{(o)})^{\alpha}]\le e^{(\alpha-1)\max_j \epsilon_j}\), so \(D_{\alpha}=\frac{1}{\alpha-1}\log \mathbb{E}[(Z^{(o)})^{\alpha}]\le \max_j \epsilon_j\), and the mechanism \(\mathcal{M}(D)\) satisfies \((\alpha,\max_j \epsilon_j)\)-RDP.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Cheng, X., Yao, Y., Liu, A. (2020). An Improved Privacy-Preserving Stochastic Gradient Descent Algorithm. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds.) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol. 12486. Springer, Cham. https://doi.org/10.1007/978-3-030-62223-7_29

  • DOI: https://doi.org/10.1007/978-3-030-62223-7_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62222-0

  • Online ISBN: 978-3-030-62223-7

  • eBook Packages: Computer Science (R0)
