Conclusion
This letter presents an effective paradigm for defending against local model poisoning attacks in FL without requiring an auxiliary dataset, which further enhances the robustness of Byzantine-robust aggregation rules against such attacks. Experimental results show that our defense scheme achieves better detection performance and requires less detection time under local model poisoning attacks. For further technical details, please refer to the supplementary material.
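The letter's detection scheme itself is described in the supplementary material. As background on why robust aggregation matters, the following minimal sketch (an illustrative baseline, not the letter's method) contrasts plain FedAvg with coordinate-wise median aggregation, a standard Byzantine-robust rule: a few poisoned client updates can drag the mean arbitrarily far, while the median stays near the honest updates.

```python
import numpy as np

def fedavg(updates):
    # Plain FedAvg: coordinate-wise mean of client updates.
    # A single extreme update can shift every coordinate arbitrarily.
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    # A classic Byzantine-robust aggregation rule: coordinate-wise median.
    # Robust as long as malicious clients are a minority.
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(1.0, 0.1, size=(8, 4))  # 8 honest clients, true update near 1.0
poisoned = np.full((2, 4), -50.0)           # 2 poisoned updates pushing the opposite way
updates = np.vstack([honest, poisoned])

print(fedavg(updates))             # dragged far from 1.0 by the two attackers
print(coordinate_median(updates))  # remains close to the honest value 1.0
```

The client counts and update values here are hypothetical, chosen only to make the contrast visible; in practice a detection-based defense would additionally try to identify and exclude the poisoned updates before aggregation.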
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 11901579, 11801564).
Supporting information
The supporting information is available online at journal.hep.com.cn and link.springer.com.
Cite this article
Lu, S., Li, R., Chen, X. et al. Defense against local model poisoning attacks to byzantine-robust federated learning. Front. Comput. Sci. 16, 166337 (2022). https://doi.org/10.1007/s11704-021-1067-4