DOI: 10.1145/3579856.3590334

LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks

Published: 10 July 2023

ABSTRACT

Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown its susceptibility to poisoning membership inference attacks (MIA). In this attack, the adversary maliciously manipulates the local updates on selected samples and shares the poisoned gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on their training samples locally, the adversary can determine whether an attacked sample is a training sample by observing how the sample’s prediction changes after poisoning. This type of attack exacerbates the threat of traditional passive MIA, yet defense mechanisms remain largely unexplored.
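
To make the attack intuition concrete, the sketch below (in PyTorch) illustrates the membership signal the abstract describes: after the adversary pushes a poisoned update that degrades the global model's prediction on a chosen sample, an honest client that actually holds the sample will restore its prediction during local training, whereas a non-member's prediction stays degraded. This is a minimal sketch of that intuition only; the function name, the confidence-based signal, and the 0.3 threshold are illustrative assumptions, not the authors' attack implementation.

```python
# Hypothetical sketch of the poisoning-MIA membership signal described above.
# Names and the threshold are illustrative, not the authors' implementation.
import torch

def membership_signal(global_model, target_x, target_y, prev_confidence):
    """Compare the global model's confidence on the attacked sample across rounds.

    After the adversary submits a poisoned update that degrades the prediction
    on (target_x, target_y), an honest client that holds the sample pushes it
    back toward the correct label during local training; a non-member's
    prediction stays degraded. A large recovery therefore hints at membership.
    """
    global_model.eval()
    with torch.no_grad():
        probs = torch.softmax(global_model(target_x.unsqueeze(0)), dim=1)
        confidence = probs[0, target_y].item()
    recovered = confidence - prev_confidence  # positive recovery suggests a member
    return confidence, recovered > 0.3  # 0.3 is an arbitrary illustrative threshold
```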

In this work, we first investigate the effectiveness of existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, in defending against poisoning MIA. We find that they are largely insufficient for mitigating poisoning MIA, because the attack targets specific victim samples and, unlike general poisoning, has minimal impact on model performance. We therefore propose a new client-side defense mechanism, called LoDen, which leverages each client’s unique ability to detect suspicious privacy attacks on its own data. We theoretically quantify the membership information leaked to the poisoning MIA and derive a bound on this leakage under LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
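
As a rough illustration of the client-side monitoring idea that the abstract attributes to LoDen, the sketch below has each honest client compare the received global model's confidence on its own training samples against the previous round, flagging samples whose confidence collapses abruptly. The function name, the confidence-drop rule, and drop_threshold are hypothetical placeholders rather than the published LoDen algorithm; refer to the linked repository for the actual method.

```python
# Minimal sketch of client-side monitoring in the spirit of LoDen, based only on
# the abstract's description. All names and the flagging rule are assumptions.
import torch

def detect_targeted_samples(global_model, local_dataset, history, drop_threshold=0.4):
    """Return indices of local samples whose prediction confidence collapsed.

    `history` maps a sample index to its confidence under the previous global
    model; an abrupt drop on a single sample, while overall model accuracy is
    unchanged, is treated as a suspected poisoning-MIA probe.
    """
    suspicious = []
    global_model.eval()
    with torch.no_grad():
        for idx, (x, y) in enumerate(local_dataset):
            probs = torch.softmax(global_model(x.unsqueeze(0)), dim=1)
            conf = probs[0, y].item()
            if idx in history and history[idx] - conf > drop_threshold:
                suspicious.append(idx)  # candidate victim of a poisoning MIA
            history[idx] = conf
    return suspicious
```

A client could, for example, exclude the flagged samples from its next local update, which would cut off the prediction-recovery signal that the attacker relies on.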


Published in

ASIA CCS '23: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
July 2023
1066 pages
ISBN: 9798400700989
DOI: 10.1145/3579856

        Copyright © 2023 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 10 July 2023


        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall Acceptance Rate: 418 of 2,322 submissions, 18%