Acknowledgments
This work was supported by the National Science and Technology Major Project (No. 2021ZD0201302), the Fundamental Research Funds for the Central Universities (No. YWF-23-Q-1092), and Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing.
Ethics declarations
Competing interests The authors declare that they have no competing interests or financial conflicts to disclose.
Additional information
Electronic supplementary material Supplementary material is available in the online version of this article at journal.hep.com.cn and link.springer.com.
Cite this article
Wang, C., Mi, Z., Yin, Z. et al. Enhancing poisoning attack mitigation in federated learning through perturbation-defense complementarity on history gradients. Front. Comput. Sci. 19, 1912371 (2025). https://doi.org/10.1007/s11704-025-40924-1