
Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1925))


Abstract

Membership inference attacks serve as an audit tool for quantifying training-data leakage from machine learning models. Protection can be provided by anonymizing the training data or by training with differential privacy. Depending on the context, such as collecting data for a central machine learning model or answering queries from end users, data scientists can choose between local and global differential privacy. Because the epsilon values of the two settings reflect different mechanisms, they are not directly comparable, which makes it difficult for data scientists to select appropriate differential privacy parameters and easy to draw inaccurate conclusions. The experiments in this paper measure the relative privacy-accuracy trade-off of local and global differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates the upper bound, our experiments on several datasets show that the accuracy-privacy trade-off is similar for both types of mechanisms, even though their upper bounds differ widely. This suggests that the upper bound is far from the practical susceptibility to membership inference: a small epsilon value in global differential privacy and a large epsilon value in local differential privacy can lead to the same membership inference risk. In addition, the risk from membership inference attacks is not uniform across classes, especially when the training dataset is skewed.
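The local/global distinction the abstract draws can be sketched in code. The following is a minimal illustration, not code from the paper: randomized response (a local mechanism, applied per user before data leaves their device) versus the Laplace mechanism (a global mechanism, applied once by a trusted curator to an aggregate query). It shows why equal epsilon values are not comparable across the two settings — local DP needs a much larger epsilon to reach similar utility. All function names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Global DP: a trusted curator adds Laplace noise once to an aggregate count."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

def randomized_response(bit: int, epsilon: float) -> int:
    """Local DP: each user flips their own bit with probability 1/(e^eps + 1)."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

# Toy task: estimate how many of n users hold a sensitive bit equal to 1.
n = 10_000
bits = rng.integers(0, 2, size=n)
true_count = int(bits.sum())

# Global DP: even a small epsilon perturbs the count only slightly.
global_est = laplace_mechanism(true_count, epsilon=0.5)

# Local DP: a much larger epsilon, and the noisy responses must be
# debiased (E[noisy] = (2p - 1) * bit + (1 - p)) to recover the count.
eps_local = 3.0
p = np.exp(eps_local) / (np.exp(eps_local) + 1.0)
noisy = np.array([randomized_response(b, eps_local) for b in bits])
local_est = (noisy.sum() - n * (1.0 - p)) / (2.0 * p - 1.0)

print(f"true count:          {true_count}")
print(f"global DP (eps=0.5): {global_est:.1f}")
print(f"local DP  (eps=3.0): {local_est:.1f}")
```

Even with epsilon six times larger, the local estimate carries far more variance per record than the global one, which is the asymmetry behind the paper's comparison of the two settings.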



Acknowledgements

This research is funded by University of Information Technology-Vietnam National University Ho Chi Minh City under grant number D1–2023-42.

Author information


Corresponding author

Correspondence to Tran Khanh Dang.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Ha, T., Vo, T., Dang, T.K., Trang, N.T.H. (2023). Differential Privacy Under Membership Inference Attacks. In: Dang, T.K., Küng, J., Chung, T.M. (eds) Future Data and Security Engineering. Big Data, Security and Privacy, Smart City and Industry 4.0 Applications. FDSE 2023. Communications in Computer and Information Science, vol 1925. Springer, Singapore. https://doi.org/10.1007/978-981-99-8296-7_18

  • DOI: https://doi.org/10.1007/978-981-99-8296-7_18

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8295-0

  • Online ISBN: 978-981-99-8296-7

  • eBook Packages: Computer Science (R0)
