
MKD: Mutual Knowledge Distillation for Membership Privacy Protection

  • Conference paper
Artificial Intelligence Security and Privacy (AIS&P 2023)

Abstract

Machine learning models are susceptible to membership inference attacks (MIAs), which attempt to determine whether a given sample belongs to the training set of a target model. The significant privacy risks raised by membership inference have led to the development of various defenses against MIAs. Knowledge distillation has been identified as a promising way to mitigate the trade-off between model performance and data privacy. Nonetheless, the performance ceiling imposed by the teacher model in knowledge distillation, together with the scarcity of unlabeled reference data, makes high-performance privacy-preserving training of the target model challenging. To address these issues, we propose a novel knowledge-distillation-based defense, Mutual Knowledge Distillation (MKD). MKD divides the training set into disjoint subsets for the teacher and student models and trains the two models through mutual knowledge distillation to mitigate MIAs. Extensive experimental results demonstrate that MKD outperforms several existing defenses in improving the trade-off between model utility and membership privacy.
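The abstract describes training the teacher and student on disjoint subsets with mutual distillation between them. As an illustrative sketch only (not the authors' implementation), a mutual-distillation objective for one model typically combines cross-entropy on its own subset with a temperature-softened KL term pulling it toward its peer's predictions; the function names, the temperature `T`, and the mixing weight `alpha` below are all assumptions:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mutual_kd_loss(logits_self, logits_peer, label, T=2.0, alpha=0.5):
    """Loss for one model in a mutual-distillation pair:
    (1 - alpha) * cross-entropy on its own subset
    + alpha * T^2 * KL(peer's softened output || own softened output).
    The T^2 factor is the usual gradient-scale correction from
    Hinton-style distillation."""
    ce = -np.log(softmax(logits_self)[label])       # supervised term
    p_self = softmax(logits_self, T)                # softened own output
    p_peer = softmax(logits_peer, T)                # softened peer output
    kl = np.sum(p_peer * (np.log(p_peer) - np.log(p_self)))
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

Each model would minimize this loss with the other's logits held fixed, alternating updates; when the two models agree exactly, the KL term vanishes and only the supervised term remains.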



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 62302117.

Author information

Correspondence to Yuan Rao.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Huang, S., Liu, Z., Yu, J., Tang, Y., Luo, Z., Rao, Y. (2024). MKD: Mutual Knowledge Distillation for Membership Privacy Protection. In: Vaidya, J., Gabbouj, M., Li, J. (eds) Artificial Intelligence Security and Privacy. AIS&P 2023. Lecture Notes in Computer Science, vol 14509. Springer, Singapore. https://doi.org/10.1007/978-981-99-9785-5_34

Download citation

  • DOI: https://doi.org/10.1007/978-981-99-9785-5_34

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9784-8

  • Online ISBN: 978-981-99-9785-5

  • eBook Packages: Computer Science (R0)
