DOI: 10.1145/3589462.3589502
IDEAS Conference Proceedings · Research article

Comparative Analysis of Membership Inference Attacks in Federated Learning

Published: 26 May 2023

Abstract

Federated learning is a machine learning technique that enables multiple parties to train a shared model without centralizing or sharing their data. Given a federated learning model and a record, a membership inference attack can determine whether that record was part of the model’s training dataset. Such attacks therefore threaten the privacy of the training datasets whenever access to the resulting federated model is available. Further study in federated learning environments is needed to develop effective countermeasures against membership inference attacks without compromising the utility of the target model. In this study, we empirically investigate and compare several membership inference attack approaches in a federated learning environment. We also evaluate these attacks under several optimizers and analyze them with and without countermeasures.
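The attack described above can be illustrated with a minimal confidence-thresholding sketch: because a trained model is typically more confident on records it has seen during training, an attacker who can query the model can flag high-confidence records as likely members. The synthetic confidence distributions and the 0.7 threshold below are illustrative assumptions for the sketch, not settings or results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for querying a trained target model: members
# (training records) tend to receive higher confidence on their true
# label than non-members -- the signal a membership inference attack
# exploits. The Beta parameters here are illustrative assumptions.
member_conf = rng.beta(8, 2, size=1000)      # seen in training: high confidence
nonmember_conf = rng.beta(5, 5, size=1000)   # unseen records: lower confidence

def infer_membership(confidences, threshold=0.7):
    """Flag a record as a training member when the target model's
    confidence on its true label exceeds the threshold (the threshold
    value is an illustrative choice)."""
    return confidences >= threshold

tp = infer_membership(member_conf).mean()      # true-positive rate
fp = infer_membership(nonmember_conf).mean()   # false-positive rate
attack_advantage = tp - fp                     # > 0 means membership leaks
print(f"TPR={tp:.2f}  FPR={fp:.2f}  advantage={attack_advantage:.2f}")
```

A positive gap between the true-positive and false-positive rates is exactly the leakage the paper's countermeasures aim to suppress; with a well-defended model the two confidence distributions overlap and the advantage approaches zero.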

Supplementary Material

"Figures" (ideas2023-24-figures.zip)


Cited By

  • (2024) Recent Advancements in Federated Learning: State of the Art, Fundamentals, Principles, IoT Applications and Future Trends. Future Internet 16(11), 415. DOI: 10.3390/fi16110415. Published: 9-Nov-2024
  • (2024) Privacy-Preserving Federated Learning With Resource-Adaptive Compression for Edge Devices. IEEE Internet of Things Journal 11(8), 13180-13198. DOI: 10.1109/JIOT.2023.3347552. Published: 15-Apr-2024
  • (2023) Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning. Information 14(11), 620. DOI: 10.3390/info14110620. Published: 19-Nov-2023
  • Responsible Recommendation Services with Blockchain Empowered Asynchronous Federated Learning. ACM Transactions on Intelligent Systems and Technology. DOI: 10.1145/3633520



Published In

IDEAS '23: Proceedings of the 27th International Database Engineered Applications Symposium
May 2023
222 pages
ISBN:9798400707445
DOI:10.1145/3589462
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. federated learning
  2. machine learning
  3. membership inference attack
  4. privacy

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant

Conference

IDEAS '23

Acceptance Rates

Overall Acceptance Rate 74 of 210 submissions, 35%


Article Metrics

  • Downloads (last 12 months): 117
  • Downloads (last 6 weeks): 4
Reflects downloads up to 08 Mar 2025

