
ADAM-DPGAN: a differential private mechanism for generative adversarial network

Published in Applied Intelligence.

Abstract

Privacy-preserving data release is a major concern in many data mining applications. Using Generative Adversarial Networks (GANs) to generate an unlimited number of synthetic samples is a popular replacement for data sharing. However, GAN models are known to implicitly memorize details of the sensitive data used for training. To address this, this paper proposes ADAM-DPGAN, which guarantees differential privacy of the training data for GAN models. ADAM-DPGAN bounds the maximum effect of each sensitive training record on the model parameters at each step of the learning procedure when the Adam optimizer is used, and adds appropriately calibrated noise to the parameters during training. ADAM-DPGAN leverages the Rényi differential privacy accountant to track the spent privacy budget. In contrast to prior work, by accurately determining the effect of each training record, this method can perturb parameters more precisely and generate higher-quality outputs, while provably preserving the convergence properties of its non-private GAN counterparts without privacy leakage. Experimental evaluations on several image datasets compare ADAM-DPGAN to previous methods and demonstrate its superiority in terms of visual quality, realism and diversity of generated samples, convergence of training, and resistance to membership inference attacks.
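The mechanism the abstract describes, bounding each record's influence on the parameters under Adam and adding calibrated Gaussian noise, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, hyperparameters, and the per-example gradient layout are assumptions for illustration only:

```python
import numpy as np

def dp_adam_step(per_example_grads, params, m, v, t,
                 clip_norm=1.0, noise_mult=1.1,
                 lr=1e-3, b1=0.9, b2=0.999, eps=1e-8,
                 rng=np.random.default_rng(0)):
    """One differentially private Adam step (illustrative sketch).

    per_example_grads: array of shape (batch, dim), one gradient row per
    training record. Clipping each row to L2 norm <= clip_norm bounds the
    sensitivity of the summed gradient to any single record; Gaussian noise
    with std = noise_mult * clip_norm then masks that record's contribution.
    """
    batch = per_example_grads.shape[0]
    # Clip each example's gradient to L2 norm <= clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Sum over the batch, add calibrated Gaussian noise, then average.
    noisy_grad = (clipped.sum(axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm, size=params.shape)) / batch
    # Standard Adam moment updates with bias correction (t is 1-indexed).
    m = b1 * m + (1 - b1) * noisy_grad
    v = b2 * v + (1 - b2) * noisy_grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```

In a GAN setting this step would be applied to the discriminator's parameters at each iteration, with the noise scale and clipping threshold feeding into the RDP accountant that tracks the cumulative privacy loss.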


Notes

  1. https://github.com/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/analysis/rdp_accountant.py

  2. http://yann.lecun.com/exdb/mnist/

  3. https://github.com/zalandoresearch/fashion-mnist/

  4. http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
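Note 1 points to the TensorFlow Privacy RDP accountant used to track the spent privacy budget. As a rough, simplified illustration (ignoring the subsampling amplification that the real accountant handles), Rényi DP of the Gaussian mechanism composes additively across training steps and can then be converted to an (ε, δ)-DP guarantee. The function names below are hypothetical:

```python
import math

def gaussian_rdp(alpha, sigma):
    """RDP of order alpha for the Gaussian mechanism with L2 sensitivity 1."""
    return alpha / (2.0 * sigma ** 2)

def compose_and_convert(sigma, steps, delta, orders=range(2, 64)):
    """Compose per-step RDP additively over `steps` iterations, then convert
    to (eps, delta)-DP by minimizing over the Renyi order alpha."""
    best_eps = float("inf")
    for alpha in orders:
        rdp_total = steps * gaussian_rdp(alpha, sigma)      # additive composition
        eps = rdp_total + math.log(1.0 / delta) / (alpha - 1)  # RDP -> (eps, delta)
        best_eps = min(best_eps, eps)
    return best_eps
```

As expected, increasing the noise multiplier sigma lowers the resulting ε, while running more training steps raises it.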


Author information

Corresponding author

Correspondence to Behrouz Shahgholi Ghahfarokhi.

Additional information

Data availability statement

Publicly available datasets were analyzed in this study. This data can be found at: MNIST dataset: http://yann.lecun.com/exdb/mnist/; Fashion-MNIST dataset: https://github.com/zalandoresearch/fashion-mnist/; CelebA dataset: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.



Cite this article

Azadmanesh, M., Ghahfarokhi, B.S. & Talouki, M.A. ADAM-DPGAN: a differential private mechanism for generative adversarial network. Appl Intell 53, 11142–11161 (2023). https://doi.org/10.1007/s10489-022-03902-9
