
Defense against local model poisoning attacks to byzantine-robust federated learning

  • Letter
  • Frontiers of Computer Science

Conclusion

This letter presents an effective paradigm for defending against local model poisoning attacks in federated learning without requiring an auxiliary dataset, which further enhances the robustness of Byzantine-robust aggregation rules against such attacks. Experimental results show that our defense scheme achieves better detection performance and requires less detection time under local model poisoning attacks. For further technical details, please refer to the supplementary material.



Acknowledgements

The work was supported by the National Natural Science Foundation of China (Grant Nos. 11901579, 11801564).

Author information

Corresponding author

Correspondence to Ruihu Li.

Supporting information

The supporting information is available online at journal.hep.com.cn and link.springer.com.


About this article

Cite this article

Lu, S., Li, R., Chen, X. et al. Defense against local model poisoning attacks to byzantine-robust federated learning. Front. Comput. Sci. 16, 166337 (2022). https://doi.org/10.1007/s11704-021-1067-4
