
Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on history gradients

  • Letter
  • Published in: Frontiers of Computer Science



Acknowledgments

This work was supported by the National Science and Technology Major Project (No. 2021ZD0201302), the Fundamental Research Funds for the Central Universities (No. YWF-23-Q-1092), and the Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing.

Author information

Corresponding authors

Correspondence to Zhilong Mi or Binghui Guo.

Ethics declarations

Competing interests: The authors declare that they have no competing interests or financial conflicts to disclose.

Additional information

Electronic supplementary material: Supplementary material is available in the online version of this article at journal.hep.com.cn and link.springer.com.

Electronic supplementary material

11704_2025_40924_MOESM1_ESM.pdf

Enhancing Poisoning Attack Mitigation in Federated Learning through Perturbation-Defense Complementarity on History Gradients. Supplementary material, approximately 1.21 MB.


About this article


Cite this article

Wang, C., Mi, Z., Yin, Z. et al. Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on history gradients. Front. Comput. Sci. 19, 1912371 (2025). https://doi.org/10.1007/s11704-025-40924-1


