
Preventing Text Data Poisoning Attacks in Federated Machine Learning by an Encrypted Verification Key

  • Conference paper
  • In: Rough Sets (IJCRS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14481)


Abstract

Recent studies reveal significant security problems in most federated learning models, which rest on the false assumption that participants are not attackers and will not train on poisoned data. This vulnerability allows an attacker to train its local model on polluted data and send the resulting model updates to the edge server for aggregation, opening the door to data poisoning. In such a setting, it is challenging for an edge server to thoroughly examine the data used for model training or to supervise every edge device. This paper evaluates existing vulnerabilities, attacks, and defenses in federated learning, discusses the hazards of data poisoning and backdoor attacks, and proposes a robust scheme to prevent all categories of data poisoning attacks on text data. A new two-phase strategy, together with encryption algorithms, allows federated learning servers to supervise participants in real time and eliminate infected participants by adding an encrypted verification scheme to the federated learning model. The paper presents the protocol design of the prevention scheme and experimental results demonstrating its effectiveness.
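The abstract describes the mechanism only at a high level: the server issues an encrypted verification key and uses it to check, before aggregation, that each participant's update corresponds to approved training data. Below is a minimal sketch of one way such a check could work, using Python's standard hmac library. It assumes the server holds a digest (commitment) of each participant's approved text corpus, registered in advance; all function names (issue_round_key, client_tag, server_verify) are hypothetical illustrations, not the paper's actual protocol or encryption algorithms.

```python
# Hypothetical sketch of an encrypted-verification-key check before
# aggregation. Function names and the commitment mechanism are
# illustrative assumptions, not the paper's actual protocol.
import hashlib
import hmac
import secrets


def issue_round_key() -> bytes:
    """Server generates a fresh verification key for each training round."""
    return secrets.token_bytes(32)


def client_tag(round_key: bytes, update: bytes, data_digest: bytes) -> bytes:
    """Client binds its model update to a digest of the data it trained on,
    authenticated under the server-issued key."""
    return hmac.new(round_key, update + data_digest, hashlib.sha256).digest()


def server_verify(round_key: bytes, update: bytes,
                  registered_digest: bytes, tag: bytes) -> bool:
    """Server recomputes the tag against the digest the participant
    registered; a mismatch flags the update as potentially poisoned."""
    expected = hmac.new(round_key, update + registered_digest,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)


# One round: an honest participant, and one that switched to poisoned data.
key = issue_round_key()
registered = hashlib.sha256(b"approved text corpus").digest()

honest_update = b"serialized model update"
assert server_verify(key, honest_update, registered,
                     client_tag(key, honest_update, registered))

poisoned = hashlib.sha256(b"poisoned text corpus").digest()
bad_update = b"backdoored model update"
assert not server_verify(key, bad_update, registered,
                         client_tag(key, bad_update, poisoned))
```

In this sketch, a participant that trains on a corpus other than the one it registered cannot produce a tag that verifies, so the server can exclude its update from aggregation in real time; the paper's two-phase strategy presumably layers further checks on top of a primitive of this kind.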



Acknowledgments

The first author acknowledges partial support from an NSERC Discovery Grant (Canada).

Author information


Corresponding author

Correspondence to Mahdee Jodayree.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jodayree, M., He, W., Janicki, R. (2023). Preventing Text Data Poisoning Attacks in Federated Machine Learning by an Encrypted Verification Key. In: Campagner, A., Urs Lenz, O., Xia, S., Ślęzak, D., Wąs, J., Yao, J. (eds) Rough Sets. IJCRS 2023. Lecture Notes in Computer Science, vol 14481. Springer, Cham. https://doi.org/10.1007/978-3-031-50959-9_42


  • DOI: https://doi.org/10.1007/978-3-031-50959-9_42


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50958-2

  • Online ISBN: 978-3-031-50959-9

  • eBook Packages: Computer Science, Computer Science (R0)
