
Defenses to Membership Inference Attacks: A Survey

Published: 10 November 2023

Abstract

Machine learning (ML) has gained widespread adoption in a variety of fields, including computer vision and natural language processing. However, ML models are vulnerable to membership inference attacks (MIAs), which infer whether a given data record was used to train a target model and thereby compromise the privacy of the training data. This threat has driven research on protecting the privacy of ML models. To date, despite extensive efforts to defend against MIAs, we still lack a comprehensive understanding of the progress made in this area, which often impedes the design of the most effective defense strategies. In this article, we aim to fill this critical knowledge gap by providing a systematic analysis of membership inference defenses. Specifically, we classify and summarize the existing defense schemes, focusing on their optimization phase and objective, underlying intuition, and key techniques, and we discuss possible directions for future research on membership inference defense.
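
To make the attack surface and a typical countermeasure concrete, the sketch below (our illustration, not a method from the survey) runs the classic confidence-threshold membership test in the spirit of Yeom et al. against an undefended classifier and against the same classifier with simple confidence masking; the dataset, model, and threshold tau are arbitrary assumptions.

```python
# A minimal sketch, assuming scikit-learn is available. The attack is the
# confidence-threshold baseline (Yeom et al. style); the defense is naive
# confidence masking. Both are illustrative, not the survey's own methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: first half are training "members", second half "non-members".
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_mem, y_mem, X_non, y_non = X[:200], y[:200], X[200:], y[200:]

target = RandomForestClassifier(random_state=0).fit(X_mem, y_mem)

def attack(conf, y, tau=0.9):
    # Guess "member" when the model's confidence on the true label exceeds tau:
    # overfit models are systematically more confident on their training data.
    return conf[np.arange(len(y)), y] >= tau

def mask(conf):
    # Defense sketch: release only the argmax class as a one-hot vector,
    # hiding the fine-grained confidence scores this attack relies on.
    out = np.zeros_like(conf)
    out[np.arange(len(conf)), conf.argmax(axis=1)] = 1.0
    return out

for name, f in [("undefended", lambda p: p), ("masked", mask)]:
    tpr = attack(f(target.predict_proba(X_mem)), y_mem).mean()  # members flagged
    fpr = attack(f(target.predict_proba(X_non)), y_non).mean()  # non-members flagged
    print(f"{name}: flagged {tpr:.2f} of members vs {fpr:.2f} of non-members")
```

The gap between the two rates is the attacker's advantage; a defense succeeds to the extent that it narrows this gap without degrading the model's predictions. Note that label-only attacks can bypass confidence masking, which is one reason defenses span multiple families rather than a single technique.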



    Information

    Published In

    ACM Computing Surveys, Volume 56, Issue 4
    April 2024
    1026 pages
    EISSN: 1557-7341
    DOI: 10.1145/3613581

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 10 November 2023
    Online AM: 18 September 2023
    Accepted: 22 August 2023
    Revised: 27 March 2023
    Received: 22 July 2022
    Published in CSUR Volume 56, Issue 4


    Author Tags

    1. Membership inference
    2. Privacy defense
    3. Privacy attack
    4. Machine learning

    Qualifiers

    • Survey

    Funding Sources

    • National Natural Science Foundation of China
    • National Natural Science Foundation of China for Joint Fund Project
    • Basic Innovation Project for Full-time Postgraduates of Guangzhou University

    Article Metrics

    • Downloads (last 12 months): 1,678
    • Downloads (last 6 weeks): 200
    Reflects downloads up to 28 Feb 2025

    Cited By
    • (2025) Security and Privacy Challenges of Large Language Models: A Survey. ACM Computing Surveys 57, 6, 1–39. DOI: 10.1145/3712001
    • (2025) A Lightweight and Accuracy-Lossless Privacy-Preserving Method in Federated Learning. IEEE Internet of Things Journal 12, 3, 3118–3129. DOI: 10.1109/JIOT.2024.3478208
    • (2025) On large language models safety, security, and privacy: A survey. Journal of Electronic Science and Technology, 100301. DOI: 10.1016/j.jnlest.2025.100301
    • (2025) Workplace security and privacy implications in the GenAI age: A survey. Journal of Information Security and Applications 89, 103960. DOI: 10.1016/j.jisa.2024.103960
    • (2025) Membership Inference Attacks in Machine Learning. Encyclopedia of Cryptography, Security and Privacy, 1520–1523. DOI: 10.1007/978-3-030-71522-9_1825
    • (2024) Mitigating privacy risk in membership inference by convex-concave loss. Proceedings of the 41st International Conference on Machine Learning, 30998–31014. DOI: 10.5555/3692070.3693320
    • (2024) FNRS-MSC: Federated News Recommender System Integrating Multiple Self-Attention and Convolution. Proceedings of the 2024 8th International Conference on Computer Science and Artificial Intelligence, 433–439. DOI: 10.1145/3709026.3709081
    • (2024) Rethinking Membership Inference Attacks Against Transfer Learning. IEEE Transactions on Information Forensics and Security 19, 6441–6454. DOI: 10.1109/TIFS.2024.3413592
    • (2024) Securing Artificial Intelligence: Exploring Attack Scenarios and Defense Strategies. 2024 12th International Symposium on Digital Forensics and Security (ISDFS), 1–6. DOI: 10.1109/ISDFS60797.2024.10527288
    • (2024) DepInferAttack: Framework for Membership Inference Attack in Depression Dataset. 2024 4th International Conference on Technological Advancements in Computational Sciences (ICTACS), 1326–1332. DOI: 10.1109/ICTACS62700.2024.10840770
