Squeeze-Loss: A Utility-Free Defense Against Membership Inference Attacks

Conference paper in Security and Privacy in Social Networks and Big Data (SocialSec 2022)

Abstract

Membership inference attacks can infer whether a data sample exists in the training set of a target model using only limited adversary knowledge, resulting in serious privacy leakage. A large body of recent work has shown that model overfitting is one of the main reasons membership inference attacks succeed. Consequently, classic remedies for overfitting, such as dropout, spatial dropout, and differential privacy, have been used to defend against these attacks. However, these defenses struggle to achieve an acceptable trade-off between defense success rate and model utility. In this paper, we focus on the impact of training loss on model overfitting and design a Squeeze-Loss strategy that dynamically finds the training loss achieving the best balance between model utility and privacy. Extensive experimental results show that our strategy limits the success rate of membership inference attacks to the level of random guessing with almost no loss of model utility, consistently outperforming other defense methods.
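The abstract does not spell out how the training loss is "squeezed"; as a rough illustration only, the sketch below shows one way a dynamic loss-targeting defense could be wired into a standard PyTorch training loop. The `target_loss` variable and the gap-based `adjust_target` rule are hypothetical stand-ins, not the authors' published algorithm.

```python
# Hypothetical sketch of loss-targeted training (an assumption, not the
# paper's algorithm): keep the training loss from collapsing toward zero,
# since near-zero training loss is a strong membership signal.
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, target_loss):
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if loss.item() > target_loss:
            loss.backward()        # above target: ordinary gradient descent
        else:
            (-loss).backward()     # below target: gradient ascent pushes the
                                   # loss back up, limiting memorization
        optimizer.step()

def adjust_target(target_loss, train_loss, val_loss, step=0.05, tol=0.1):
    # One plausible way to "dynamically find" the balancing loss level:
    # raise the target while the generalization gap is large, lower it
    # once training and validation losses agree.
    gap = val_loss - train_loss
    return max(0.0, target_loss + step) if gap > tol else max(0.0, target_loss - step)
```

Any real implementation would also need the attack-side evaluation the paper uses to score where the privacy/utility balance actually lies.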

Acknowledgement

This work was funded by the National Natural Science Foundation of China (Nos. 62102107 and 62072132).

Author information

Corresponding author: Hongyang Yan.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Zhang, Y., Yan, H., Lin, G., Peng, S., Zhang, Z., Wang, Y. (2022). Squeeze-Loss: A Utility-Free Defense Against Membership Inference Attacks. In: Chen, X., Huang, X., Kutyłowski, M. (eds) Security and Privacy in Social Networks and Big Data. SocialSec 2022. Communications in Computer and Information Science, vol 1663. Springer, Singapore. https://doi.org/10.1007/978-981-19-7242-3_19

  • DOI: https://doi.org/10.1007/978-981-19-7242-3_19

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-7241-6

  • Online ISBN: 978-981-19-7242-3

  • eBook Packages: Computer Science (R0)
