Data Poisoning Attack by Label Flipping on SplitFed Learning

Conference paper in Recent Trends in Image Processing and Pattern Recognition (RTIP2R 2022)

Abstract

In the distributed machine learning setting, Split Learning (SL) and Federated Learning (FL) are two popular techniques. In SL, the model is split between the clients and the server and clients are trained sequentially, whereas in FL, clients train in parallel. The model splitting in SL provides better overall privacy than FL. SplitFed Learning (SFL) combines the two techniques, incorporating the model-splitting approach of SL to improve privacy while retaining the generic FL approach for faster training. Despite these advantages, the distributed nature of SFL makes it vulnerable to data poisoning attacks by malicious participants. This vulnerability prompted us to study the robustness of SFL under such attacks; the outcomes of this study provide valuable insights to organizations and researchers who wish to deploy or study SFL. In this paper, we conduct three experiments. The first demonstrates that data poisoning attacks seriously threaten SFL systems: the presence of even 10% malicious participants can cause a drastic drop in the accuracy of the global model. The second studies the robustness of two SFL variants under targeted data poisoning attacks, and its results show that SFLV1 is more robust than SFLV2 in the majority of cases. The third experiment studies untargeted data poisoning attacks on SFL; we find that untargeted attacks cause a more significant loss in the global model's accuracy than targeted attacks.
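To make the model-splitting idea concrete, the following PyTorch-style sketch shows one way a small network could be partitioned into a client-side and a server-side portion so that only intermediate activations ("smashed data") cross the network. The architecture, cut layer, and class names are illustrative assumptions, not the models used in the paper.

    # Minimal sketch of the client/server model split in SplitFed Learning,
    # assuming a small CNN cut after its first convolutional block.
    # Architecture and cut point are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ClientModel(nn.Module):
        """Client-side portion; runs locally on each participant's private data."""
        def __init__(self):
            super().__init__()
            self.front = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )

        def forward(self, x):
            # Only these intermediate activations ("smashed data") are sent
            # to the server; the raw inputs never leave the client.
            return self.front(x)

    class ServerModel(nn.Module):
        """Server-side portion; completes the forward pass and produces logits."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.back = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 14 * 14, num_classes),
            )

        def forward(self, smashed):
            return self.back(smashed)

    # Example: one client forwards a batch of 28x28 grayscale images (e.g. MNIST).
    client, server = ClientModel(), ServerModel()
    x = torch.randn(8, 1, 28, 28)
    smashed = client(x)        # transmitted to the server in SFL
    logits = server(smashed)   # server finishes the forward pass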

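The label-flipping attack itself can be sketched just as briefly: a malicious client relabels its local training data before participating. In the targeted variant only samples of a chosen source class are relabelled to a target class; in the untargeted variant every label is replaced by a random incorrect one. The function name and the class pair below are hypothetical and are not taken from the paper.

    # Minimal sketch of label-flipping data poisoning by a malicious client.
    # The source/target class pair is an assumption for illustration.
    import random

    NUM_CLASSES = 10   # e.g. MNIST or CIFAR-10
    SOURCE_CLASS = 5   # assumed source class for the targeted attack
    TARGET_CLASS = 3   # assumed target class

    def flip_labels(labels, targeted=True):
        """Return a poisoned copy of a malicious client's local labels."""
        poisoned = []
        for y in labels:
            if targeted:
                # Targeted attack: only the chosen source class is relabelled.
                poisoned.append(TARGET_CLASS if y == SOURCE_CLASS else y)
            else:
                # Untargeted attack: every label becomes a random wrong class.
                poisoned.append(random.choice([c for c in range(NUM_CLASSES) if c != y]))
        return poisoned

    # Example: flip the labels held by one malicious client.
    clean = [0, 5, 5, 3, 9]
    print(flip_labels(clean))                  # targeted: [0, 3, 3, 3, 9]
    print(flip_labels(clean, targeted=False))  # untargeted: random wrong labels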

Author information

Correspondence to Saurabh Gajbhiye.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Gajbhiye, S., Singh, P., Gupta, S. (2023). Data Poisoning Attack by Label Flipping on SplitFed Learning. In: Santosh, K., Goyal, A., Aouada, D., Makkar, A., Chiang, YY., Singh, S.K. (eds) Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2022. Communications in Computer and Information Science, vol 1704. Springer, Cham. https://doi.org/10.1007/978-3-031-23599-3_30

  • DOI: https://doi.org/10.1007/978-3-031-23599-3_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23598-6

  • Online ISBN: 978-3-031-23599-3
