Abstract
In distributed machine learning, Split Learning (SL) and Federated Learning (FL) are two popular techniques. In SL, the model is split between the clients and the server, and the clients train sequentially, whereas in FL the clients train in parallel. The model splitting in SL provides better overall privacy than FL. SplitFed Learning (SFL) combines the two techniques, adopting the model-splitting approach of SL for improved privacy while retaining the parallel-training approach of FL for faster training. Despite these advantages, the distributed nature of SFL makes it vulnerable to data poisoning attacks by malicious participants, which prompted us to study the robustness of SFL under such attacks. The outcomes of this study should provide valuable insights to organizations and researchers who wish to deploy or study SFL. In this paper, we conduct three experiments. The first demonstrates that data poisoning attacks seriously threaten SFL systems: even the presence of 10% malicious participants can cause a drastic drop in the accuracy of the global model. The second studies the robustness of two variants of SFL under targeted data poisoning attacks; its results show that SFLV1 is more robust than SFLV2 in the majority of cases. In the third experiment, we study untargeted data poisoning attacks on SFL and find that untargeted attacks cause a more significant loss in the global model’s accuracy than targeted attacks.
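The mechanics behind these experiments can be illustrated with a short sketch. The snippet below is a minimal, single-process PyTorch illustration of one poisoned SplitFed training step, not the authors' implementation: the `ClientNet`/`ServerNet` architectures, the cut point, the class indices, the learning rate, and the `flip_labels` helper are all assumptions made for exposition, and the aggregation of client-side weights across clients (e.g. FedAvg) is omitted.

```python
import random

import torch
import torch.nn as nn

# --- Split model: the client holds the layers up to the cut, the server the rest ---
class ClientNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        # Output at the cut layer: the "smashed data" sent to the server.
        return self.features(x)

class ServerNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16 * 16, num_classes))

    def forward(self, h):
        return self.head(h)

def flip_labels(y, num_classes, targeted, source=0, target=2, seed=0):
    """Label-flipping poisoning applied by a malicious client (illustrative helper).

    targeted:   relabel every example of class `source` as class `target`.
    untargeted: replace each label with a uniformly random wrong class.
    """
    rng = random.Random(seed)
    y = y.clone()
    if targeted:
        y[y == source] = target
    else:
        for i in range(len(y)):
            yi = int(y[i])
            y[i] = rng.choice([c for c in range(num_classes) if c != yi])
    return y

# --- One poisoned SFL step for a single malicious client ---
client, server = ClientNet(), ServerNet()
opt_c = torch.optim.SGD(client.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server.parameters(), lr=0.01)

x = torch.randn(8, 3, 32, 32)                      # stand-in for a CIFAR-10 batch
y = flip_labels(torch.randint(0, 10, (8,)), num_classes=10, targeted=True)

h = client(x)                                      # client-side forward to the cut
logits = server(h)                                 # server-side forward
loss = nn.CrossEntropyLoss()(logits, y)            # loss computed on poisoned labels
opt_c.zero_grad()
opt_s.zero_grad()
loss.backward()                                    # gradients flow back across the cut
opt_s.step()
opt_c.step()
```

In a real SFL deployment the smashed activations and their gradients are exchanged over the network between each client and the server; here both halves live in one process so that the gradient flow across the cut is visible in a few lines.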
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Gajbhiye, S., Singh, P., Gupta, S. (2023). Data Poisoning Attack by Label Flipping on SplitFed Learning. In: Santosh, K., Goyal, A., Aouada, D., Makkar, A., Chiang, YY., Singh, S.K. (eds) Recent Trends in Image Processing and Pattern Recognition. RTIP2R 2022. Communications in Computer and Information Science, vol 1704. Springer, Cham. https://doi.org/10.1007/978-3-031-23599-3_30
DOI: https://doi.org/10.1007/978-3-031-23599-3_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-23598-6
Online ISBN: 978-3-031-23599-3