Research article · DOI: 10.1145/3622896.3622901

ANA: An Adaptive Non-outlier-detection-based Aggregation Algorithm Against Poisoning Attack for Federated Learning

Published: 03 October 2023

Abstract

Federated learning (FL) is an emerging paradigm in distributed machine learning (ML) that enables multiple data owners, referred to as clients, to collaboratively train a global model without sharing their local training data. Because its decentralized architecture eliminates the need to collect client training data centrally, federated learning has found widespread application across many scenarios. However, traditional federated learning approaches, exemplified by FedAvg, face unprecedented threats from the emergence and development of poisoning attacks, in which malicious clients submit tampered gradients to the central server; even a single malicious client can cause system failure. Although early federated aggregation algorithms such as Krum and Median can tolerate a certain degree of poisoning, their evaluation of gradients is limited and relies heavily on outlier-detection theory, making them less effective against recent non-outlier poisoning attacks. Moreover, these early methods often employ fixed thresholds to filter gradients based on particular indicators, which lacks flexibility. To address these challenges, this paper proposes ANA (Adaptive Non-outlier-detection-based Aggregation), a novel federated learning aggregation algorithm that overcomes the limitations of outlier-detection methods. ANA evaluates gradients from multiple perspectives, including module length (norm) and direction, without relying on a fixed threshold. The proposed algorithm is tested on two real-world datasets, demonstrating its effectiveness in resisting poisoning attacks and outperforming earlier methods in defending against non-outlier poisoning attacks. Overall, ANA represents a significant advance in federated learning, providing improved resilience against poisoning attacks and a more comprehensive approach to gradient evaluation.
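The abstract does not give ANA's exact scoring rules, but the idea it describes, evaluating each gradient by both its norm ("module length") and its direction, and filtering with a data-driven rather than fixed threshold, can be sketched as follows. The reference direction (coordinate-wise median) and the MAD-based band are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

def robust_aggregate(gradients, k=2.0):
    """Illustrative sketch of norm-and-direction-based robust aggregation.

    Scores each client gradient by (a) its Euclidean norm and (b) its cosine
    similarity to a robust reference direction, then averages only the
    gradients whose scores lie within an adaptive band derived from the
    submitted population itself (median absolute deviation), rather than
    a fixed threshold.
    """
    grads = np.stack(gradients)                  # shape: (n_clients, dim)
    norms = np.linalg.norm(grads, axis=1)

    # Direction score: cosine similarity to the coordinate-wise median,
    # a common robust reference direction (an assumption, not ANA's spec).
    ref = np.median(grads, axis=0)
    cosines = grads @ ref / (norms * np.linalg.norm(ref) + 1e-12)

    # Adaptive band: keep values within k median-absolute-deviations
    # of the population median, so the cutoff tracks the data.
    def within_band(x):
        med = np.median(x)
        mad = np.median(np.abs(x - med)) + 1e-12
        return np.abs(x - med) <= k * mad

    keep = within_band(norms) & within_band(cosines)
    if not keep.any():                           # degenerate case: plain mean
        keep = np.ones(len(grads), dtype=bool)
    return grads[keep].mean(axis=0)
```

In this toy setup, a gradient that is both much longer than its peers and points the opposite way is excluded from the average even though a fixed norm cutoff might not have been tuned to catch it; the band adapts to whatever the honest majority submits.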


      Published In

      CCRIS '23: Proceedings of the 2023 4th International Conference on Control, Robotics and Intelligent System
      August 2023
      215 pages
      ISBN:9798400708190
      DOI:10.1145/3622896

Publisher

Association for Computing Machinery, New York, NY, United States

      Author Tags

      1. Adaptive Threshold
      2. Aggregation Algorithm
      3. Federated Learning
      4. Model Poisoning Attack

