Targeted Clean-Label Poisoning Attacks on Federated Learning

  • Conference paper
  • First Online:
Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2022)

Abstract

Federated Learning (FL) has become one of the most widely used distributed training approaches because it lets users benefit from large datasets without directly sharing them. The model is trained locally on the devices holding the data, and only the updated model parameters are exchanged with the central server. The distributed nature of FL, however, opens the door to adversarial attacks that aim to manipulate the model's behavior. This paper explores targeted clean-label attacks, in which adversaries inject poisoned images into compromised clients' datasets to alter the model's behavior on a specific target image at test time. The standard CIFAR-10 dataset is used to conduct the experiments and manipulate the image classifier. The study finds that the behavior of an FL model can be maliciously altered towards a specific target image without significantly affecting the model's overall accuracy. Moreover, the attack's impact grows in direct proportion to the number of injected poisoned images and the number of malicious clients (i.e., clients controlled by adversaries) participating in the FL process.
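To make the attack concrete: a common recipe for crafting clean-label poisons is feature collision, where a base image from the poison class is perturbed so that its internal feature representation matches the target image, while a pixel-space penalty keeps it visually close to the base (so its label still looks correct to a human labeler). The sketch below is illustrative only, not the paper's exact method: it uses a toy linear "feature extractor" and plain gradient descent; in practice the features would come from the shared model's penultimate layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "feature extractor": f(x) = W @ x.
# Stands in for the shared model's penultimate layer.
W = rng.normal(size=(8, 16))

def features(x):
    return W @ x

def craft_poison(base, target, beta=0.1, lr=0.01, steps=500):
    """Feature-collision poison crafting (illustrative sketch).

    Minimizes ||f(p) - f(t)||^2 + beta * ||p - base||^2 by gradient
    descent: the first term pulls the poison toward the target in
    feature space; the second keeps it close to the base image in
    pixel space, which is what makes the label "clean".
    """
    p = base.copy()
    for _ in range(steps):
        grad = 2 * W.T @ (features(p) - features(target)) \
               + 2 * beta * (p - base)
        p -= lr * grad
    return p

base = rng.normal(size=16)    # image drawn from the poison class
target = rng.normal(size=16)  # test-time image the adversary targets

poison = craft_poison(base, target)

feat_gap_before = np.linalg.norm(features(base) - features(target))
feat_gap_after = np.linalg.norm(features(poison) - features(target))
print(feat_gap_before, feat_gap_after)  # the feature gap shrinks
```

In an FL setting, the compromised clients would simply mix such poisons into their local training sets; the poisoned updates then propagate to the global model through ordinary aggregation, which is why the effect scales with the number of poisons and malicious clients.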



Author information

Corresponding author

Correspondence to Ayushi Patel.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Patel, A., Singh, P. (2023). Targeted Clean-Label Poisoning Attacks on Federated Learning. In: Santosh, K., Goyal, A., Aouada, D., Makkar, A., Chiang, YY., Singh, S.K. (eds) Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2022. Communications in Computer and Information Science, vol 1704. Springer, Cham. https://doi.org/10.1007/978-3-031-23599-3_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-23599-3_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23598-6

  • Online ISBN: 978-3-031-23599-3

  • eBook Packages: Computer Science (R0)
