
Differentially Private Bayesian Neural Networks on Accuracy, Privacy and Reliability

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13716)

Abstract

Bayesian neural networks (BNNs) allow for uncertainty quantification in prediction, offering an advantage over regular neural networks that has not been explored in the differential privacy (DP) framework. We fill this important gap by leveraging recent developments in Bayesian deep learning and privacy accounting to offer a more precise analysis of the trade-off between privacy and accuracy in BNNs. We propose three DP-BNNs that characterize the weight uncertainty for the same network architecture in distinct ways, namely DP-SGLD (via the noisy gradient method), DP-BBP (via changing the parameters of interest), and DP-MC Dropout (via the model architecture). Interestingly, we show a new equivalence between DP-SGD and DP-SGLD, implying that some non-Bayesian DP training naturally allows for uncertainty quantification. However, hyperparameters such as the learning rate and batch size can have different or even opposite effects in DP-SGD and DP-SGLD.

Extensive experiments are conducted to compare the DP-BNNs in terms of privacy guarantee, prediction accuracy, uncertainty quantification, calibration, computation speed, and generalizability to network architectures. As a result, we observe a new trade-off between privacy and reliability. Compared to non-DP and non-Bayesian approaches, DP-SGLD is remarkably accurate under a strong privacy guarantee, demonstrating the great potential of DP-BNNs in real-world tasks.
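To make the DP-SGD/DP-SGLD connection in the abstract concrete, the sketch below contrasts the two update rules in NumPy. It is an illustration under simplifying assumptions (a scalar clipping norm C, the prior gradient omitted from the SGLD step, and a generic parameterization rather than the paper's exact one); the function and variable names are ours, not the paper's.

```python
import numpy as np

def clip(g, C):
    """Rescale a per-example gradient to L2 norm at most C."""
    return g * min(1.0, C / (np.linalg.norm(g) + 1e-12))

def dp_sgd_step(w, grads, lr, C, sigma, rng):
    """One DP-SGD step: clip each per-example gradient, add Gaussian
    noise calibrated to the clipping norm, average, then descend."""
    B = len(grads)
    g_sum = sum(clip(g, C) for g in grads)
    noisy_mean = (g_sum + sigma * C * rng.standard_normal(w.shape)) / B
    return w - lr * noisy_mean

def dp_sgld_step(w, grads, lr, C, n, rng):
    """One clipped SGLD step: a gradient step on the minibatch estimate
    of the (clipped) log-likelihood, rescaled by n/B, plus injected
    Gaussian noise of variance 2*lr. The injected noise doubles as
    privacy noise, which is why this update can be read as DP-SGD
    with a particular noise multiplier."""
    B = len(grads)
    g_sum = (n / B) * sum(clip(g, C) for g in grads)  # prior term omitted
    noise = np.sqrt(2.0 * lr) * rng.standard_normal(w.shape)
    return w - lr * g_sum + noise
```

Reading the SGLD noise as privacy noise ties the effective noise multiplier to the learning rate and batch size, which is one way to see why those hyperparameters can act differently, or even oppositely, in the two methods.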

Q. Zhang and Z. Bu contributed equally.


Notes

  1. For example, if the prior is \(\mathcal {N}(0,\sigma ^2)\), then \(-\log p(\theta )\propto \frac{\Vert \theta \Vert ^2}{2\sigma ^2}\) is the \(L_2\) penalty; if the prior is Laplacian, then \(-\log p(\theta )\) is the \(L_1\) penalty; additionally, the likelihood of a Gaussian model corresponds to the mean squared error loss (a short derivation follows after these notes).

  2. Since DP-BBP does not optimize the weights, its back-propagation differs substantially from computing \(\frac{\partial \ell }{\partial \boldsymbol{w}}\) (see Appendix B) and thus requires a new per-layer design that is currently not available (see the illustrative sketch after these notes). See https://github.com/pytorch/opacus/blob/master/opacus/supported_layers_grad_samplers.py.

  3. Within each cluster, the bins may appear in any order; thus a bin's x-coordinate is not meaningful, and only the cluster's x-coordinate represents the prediction probability.
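To unpack the correspondence in Note 1: maximum a posteriori (MAP) estimation turns the prior into a penalty, since maximizing the posterior is the same as minimizing the negative log-likelihood plus the negative log-prior. A minimal derivation (the Laplace scale \(b\) is our notation):

```latex
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\, p(\theta \mid D)
  = \arg\min_{\theta}\,\bigl(-\log p(D \mid \theta) - \log p(\theta)\bigr),
\qquad
-\log p(\theta) =
\begin{cases}
  \frac{\|\theta\|_2^2}{2\sigma^2} + \mathrm{const}
    & \text{if } \theta \sim \mathcal{N}(0, \sigma^2 I) \quad (L_2 \text{ penalty}),\\
  \frac{\|\theta\|_1}{b} + \mathrm{const}
    & \text{if } \theta \sim \mathrm{Laplace}(0, b) \quad (L_1 \text{ penalty}).
\end{cases}
```

Likewise, a Gaussian likelihood \(p(D \mid \theta) \propto \exp\bigl(-\sum_i (y_i - f_\theta(x_i))^2 / 2\tau^2\bigr)\) makes \(-\log p(D \mid \theta)\) the mean squared error up to scaling.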
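As context for Note 2, the Opacus grad samplers linked there implement closed-form per-example weight gradients, one rule per supported layer type. Below is a minimal sketch of the rule for a linear layer; the function name and the hook plumbing it assumes are illustrative, not Opacus's actual internals.

```python
import torch

def per_example_linear_grads(activations, grad_output):
    # For a linear layer y = x @ W.T + b with a loss summed over the
    # batch, the per-example gradient w.r.t. W is the outer product of
    # each example's backpropagated signal and its input activation.
    #   activations: (B, in_features), as captured by a forward hook
    #   grad_output: (B, out_features), as captured by a backward hook
    #   returns:     (B, out_features, in_features)
    return torch.einsum("bo,bi->boi", grad_output, activations)
```

DP-BBP instead differentiates with respect to the variational parameters of the weight posterior (e.g., a mean and a scale per weight), so rules like this one would have to be re-derived for every layer type; that is the missing design Note 2 refers to.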


Acknowledgment

This research was supported by NIH grants RF1AG063481 and R01GM124111.

Author information

Corresponding author

Correspondence to Qi Long.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 939 KB)


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Q., Bu, Z., Chen, K., Long, Q. (2023). Differentially Private Bayesian Neural Networks on Accuracy, Privacy and Reliability. In: Amini, M.R., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol. 13716. Springer, Cham. https://doi.org/10.1007/978-3-031-26412-2_37


  • DOI: https://doi.org/10.1007/978-3-031-26412-2_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26411-5

  • Online ISBN: 978-3-031-26412-2

  • eBook Packages: Computer Science, Computer Science (R0)
