DOI: 10.1145/3649153.3649211
Research Article

DP-Discriminator: A Differential Privacy Evaluation Tool Based on GAN

Published: 02 July 2024

Abstract

Differential privacy has become increasingly popular in private machine learning applications due to its provable ability to limit information leakage. However, practical implementations of differentially private algorithms often contain vulnerabilities, making effective evaluation tools necessary before deployment. Unfortunately, the state-of-the-art classifier-based evaluation tools for differential privacy still have weak distinguishing power and need to be improved. In this paper, we propose DP-Discriminator, a tool that automatically detects the ξ-differential distinguishability (ξ-DD) of specific algorithms and thereby efficiently discovers violations of differential privacy. Specifically, based on a mathematical observation, we give a new attack-oriented definition of ξ-DD that helps find a larger ξ. In addition, DP-Discriminator learns the overall distribution of features across samples by exploiting the ability of a generative adversarial network to capture latent features. Benefiting from powerful classifiers, DP-Discriminator automatically and accurately evaluates differential privacy with minimal time consumption. Experimental results demonstrate the effectiveness of the proposed method in estimating the ξ-DD of various practical randomized algorithms. For example, the most recent prior work detects a ξ-DD of 0.301 for the algorithm RAPPOR (0.4-DP), whereas our tool detects ξ-DD = 0.369, an error one order of magnitude lower.
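In classifier-based evaluation tools of this kind, ξ-differential distinguishability is typically defined over a pair of neighboring inputs a, a' and an attack set S as ξ = ln(Pr[M(a) ∈ S] / Pr[M(a') ∈ S]); any witnessed ξ greater than the claimed ε exposes a violation of ε-DP. The Python sketch below illustrates that generic pipeline in the style of earlier classifier-based tools such as DP-Sniper (Bichsel et al., IEEE S&P 2021), not the GAN-based DP-Discriminator itself; every function name and parameter in it is illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def laplace_mechanism(x, eps, n):
    # Toy eps-DP noisy count (sensitivity 1); stands in for the algorithm under test.
    return x + np.random.laplace(scale=1.0 / eps, size=n)

def estimate_xi_dd(mechanism, a, a_prime, n_train=50_000, n_test=200_000, t=0.5):
    # Train a classifier to tell samples of M(a) apart from samples of M(a').
    X = np.concatenate([mechanism(a, n_train), mechanism(a_prime, n_train)]).reshape(-1, 1)
    y = np.concatenate([np.ones(n_train), np.zeros(n_train)])
    clf = MLPClassifier(hidden_layer_sizes=(32, 32)).fit(X, y)

    # Attack set S: outputs the classifier attributes to input a with probability > t.
    in_s = lambda out: clf.predict_proba(out.reshape(-1, 1))[:, 1] > t

    # Estimate Pr[M(a) in S] and Pr[M(a') in S] on fresh samples.
    p = in_s(mechanism(a, n_test)).mean()
    q = in_s(mechanism(a_prime, n_test)).mean()
    return np.log(p / q)  # xi-DD witnessed by this attack set

# A correct 0.4-DP mechanism should give an estimate <= 0.4 (up to sampling error).
xi = estimate_xi_dd(lambda x, n: laplace_mechanism(x, eps=0.4, n=n), a=0.0, a_prime=1.0)
print(f"estimated xi-DD: {xi:.3f}")
```

Run against a correctly implemented 0.4-DP Laplace mechanism, the estimate should remain below 0.4 up to sampling error, while an estimate clearly above the claimed ε is a detected violation; per the abstract, DP-Discriminator's contribution is to sharpen exactly this distinguishing step with a GAN-trained discriminator so that larger ξ values are witnessed.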



    Published In

    CF '24: Proceedings of the 21st ACM International Conference on Computing Frontiers
    May 2024
    345 pages
    ISBN: 9798400705977
    DOI: 10.1145/3649153

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. differential privacy
    2. generative adversarial networks
    3. multi-layer perceptron

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • Key Research and Development Program of Hebei Province
    • Natural Science Foundation of Hebei Province of China
    • Hebei Provincial Department of Science and Technology

    Conference

    CF '24

    Acceptance Rates

    CF '24 Paper Acceptance Rate: 33 of 105 submissions (31%)
    Overall Acceptance Rate: 273 of 785 submissions (35%)

