Abstract
Split learning of deep neural networks (SplitNN) offers a promising solution for a guest and a host, who may come from different backgrounds and hold vertically partitioned features, to learn jointly for their mutual interest. However, SplitNN creates a new attack surface for an adversarial participant. By investigating the adversarial effects of highly threatening attacks, including property inference, data reconstruction, and feature hijacking attacks, we identify the underlying vulnerability of SplitNN. To protect SplitNN, we design a privacy-preserving tunnel for information exchange. The intuition is to perturb the propagation of knowledge in each direction with a controllable, unified solution. To this end, we propose a new activation function named R3eLU, which transforms private smashed data and partial losses into randomized responses. We make the first attempt to secure split learning against these three threatening attacks and present a fine-grained privacy budget allocation scheme. Our analysis proves that our privacy-preserving SplitNN solution provides a tight privacy budget, while the experimental results show that our solution outperforms existing solutions in most cases and achieves a good tradeoff between defense and model usability.
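As an illustration only, the following minimal PyTorch sketch shows one way a randomized-response style ReLU could perturb forward activations under a privacy budget \(\epsilon\); the keep probability and Laplace noise scale below are our own illustrative assumptions, not the exact R3eLU construction from the paper.

```python
import math
import torch

def randomized_relu(x: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Illustrative randomized-response ReLU (not the paper's exact R3eLU).

    Each activation is kept with probability p = e^eps / (e^eps + 1), the
    classic two-option randomized-response rate, and otherwise replaced by
    Laplace noise, so the smashed data sent to the other party is perturbed.
    """
    act = torch.relu(x)
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    keep = torch.bernoulli(torch.full_like(act, p))
    noise = torch.distributions.Laplace(0.0, 1.0 / epsilon).sample(act.shape)
    return keep * act + (1.0 - keep) * noise.to(act.device)
```

A smaller \(\epsilon\) lowers the keep probability and widens the noise, trading model usability for privacy, which mirrors the defense/usability tradeoff discussed above.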
Notes
1. More details can be found in another version: http://arxiv.org/abs/2304.09515.
References
Abadi, M., et al.: Deep learning with differential privacy. In: ACM SIGSAC CCS (2016)
Ceballos, I., et al.: SplitNN-driven vertical partitioning. arXiv preprint arXiv:2008.04137 (2020)
Du, J., Li, S., Chen, X., Chen, S., Hong, M.: Dynamic differential-privacy preserving SGD. arXiv preprint arXiv:2111.00173 (2021)
Dwork, C., Roth, A., et al.: The algorithmic foundations of differential privacy. Found. Trends® Theor. Comput. Sci. 9(3–4), 211–407 (2014)
Erdogan, E., Kupcu, A., Cicek, A.E.: UnSplit: data-oblivious model inversion, model stealing, and label inference attacks against split learning. In: Proceedings of the 21st Workshop on Privacy in the Electronic Society, WPES 2022 (2022)
Erdogan, E., Teksen, U., Celiktenyildiz, M.S., Kupcu, A., Cicek, A.E.: Defense mechanisms against training-hijacking attacks in split learning. arXiv preprint arXiv:2302.0861 (2023)
Fang, M., Gong, N.Z., Liu, J.: Influence function based data poisoning attacks to top-n recommender systems. In: WWW 2020 (2020)
Fu, C., et al.: Label inference attacks against vertical federated learning. In: USENIX Security 2022 (2022)
Ganju, K., Wang, Q., Yang, W., Gunter, C.A., Borisov, N.: Property inference attacks on fully connected neural networks using permutation invariant representations. In: ACM SIGSAC CCS (2018)
Gao, H., Cai, L., Ji, S.: Adaptive convolutional ReLUs. In: AAAI Conference on Artificial Intelligence (2020)
Gao, Y., et al.: End-to-end evaluation of federated learning and split learning for internet of things. In: SRDS (2020)
Gawron, G., Stubbings, P.: Feature space hijacking attacks against differentially private split learning. arXiv preprint arXiv:2201.04018 (2022)
Goodfellow, I., et al.: Generative adversarial nets. In: NIPS (2014)
Gupta, O., Raskar, R.: Distributed learning of deep neural network over multiple agents. J. Netw. Comput. Appl. 116, 1–8 (2018)
Harper, F.M., Konstan, J.A.: The MovieLens datasets: history and context. ACM Trans. Interact. Intell. Syst. (2015)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. CoRR (2015)
Hitaj, B., Ateniese, G., Perez-Cruz, F.: Deep models under the GAN: information leakage from collaborative deep learning. In: ACM SIGSAC CCS (2017)
Huang, H., Mu, J., Gong, N.Z., Li, Q., Liu, B., Xu, M.: Data poisoning attacks to deep learning based recommender systems. In: NDSS (2021)
Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images (2009)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Li, J., Rakin, A.S., Chen, X., He, Z., Fan, D., Chakrabarti, C.: ResSFL: a resistance transfer framework for defending model inversion attack in split federated learning. In: CVPR (2022)
Liu, R., Cao, Y., Chen, H., Guo, R., Yoshikawa, M.: FLAME: differentially private federated learning in the shuffle model. In: AAAI Conference on Artificial Intelligence (2021)
Luo, X., Wu, Y., Xiao, X., Ooi, B.C.: Feature inference attack on model predictions in vertical federated learning. In: ICDE (2021)
Mao, Y., Yuan, X., Zhao, X., Zhong, S.: Romoa: robust model aggregation for the resistance of federated learning to model poisoning attacks. In: ESORICS (2021)
Mao, Y., Zhu, B., Hong, W., Zhu, Z., Zhang, Y., Zhong, S.: Private deep neural network models publishing for machine learning as a service. In: IWQoS (2020)
McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics (2017)
Melis, L., Song, C., De Cristofaro, E., Shmatikov, V.: Exploiting unintended feature leakage in collaborative learning. In: IEEE S&P (2019)
Molchanov, P., Mallya, A., Tyree, S., Frosio, I., Kautz, J.: Importance estimation for neural network pruning. In: CVPR (2019)
Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In: IEEE S&P (2019)
Nguyen, T.D., et al.: FLGUARD: secure and private federated learning. arXiv preprint arXiv:2101.02281 (2021)
OpenMined: Syft (2021). https://github.com/OpenMined/PySyft
Pasquini, D., Ateniese, G., Bernaschi, M.: Unleashing the tiger: inference attacks on split learning. In: ACM SIGSAC CCS (2021)
Pereteanu, G.L., Alansary, A., Passerat-Palmbach, J.: Split he: fast secure inference combining split learning and homomorphic encryption. arXiv preprint arXiv:2202.13351 (2022)
Salem, A., Bhattacharya, A., Backes, M., Fritz, M., Zhang, Y.: Updates-leak: data set inference and reconstruction attacks in online learning. In: USENIX Security Symposium (2020)
Salem, A., Zhang, Y., Humbert, M., Fritz, M., Backes, M.: ML-leaks: model and data independent membership inference attacks and defenses on machine learning models. In: NDSS (2019)
Sun, L., Qian, J., Chen, X.: LDP-FL: practical private aggregation in federated learning with local differential privacy. In: IJCAI (2021)
Tolpegin, V., Truex, S., Gursoy, M.E., Liu, L.: Data poisoning attacks against federated learning systems. In: ESORICS (2020)
Warner, S.L.: Randomized response: a survey technique for eliminating evasive answer bias. J. Am. Stat. Assoc. 60(309), 63–69 (1965)
Webank: Fate (2021). https://github.com/FederatedAI/FATE
Yu, L., Liu, L., Pu, C., Gursoy, M.E., Truex, S.: Differentially private model publishing for deep learning. In: IEEE S&P (2019)
Zhang, C., Li, S., Xia, J., Wang, W., Yan, F., Liu, Y.: BatchCrypt: efficient homomorphic encryption for cross-silo federated learning. In: USENIX Annual Technical Conference (USENIX ATC) (2020)
Zheng, Y., Lai, S., Liu, Y., Yuan, X., Yi, X., Wang, C.: Aggregation service for federated learning: an efficient, secure, and more resilient realization. IEEE Trans. Dependable Secure Comput. 20(2), 988–1001 (2022)
Ziegler, C.N., McNee, S.M., Konstan, J.A., Lausen, G.: Improving recommendation lists through topic diversification. In: WWW (2005)
Acknowledgement
The authors would like to thank our shepherd, Prof. Stjepan Picek, and the anonymous reviewers for the time and effort they kindly put into this paper; their suggestions have substantially improved this work. This work was supported in part by the Leading-edge Technology Program of Jiangsu-NSF under Grant BK20222001 and the National Natural Science Foundation of China under Grants NSFC-62272222, NSFC-61902176, and NSFC-62272215.
A Appendix
A.1 Model Architecture
The neural networks used after splitting for the MovieLens, BookCrossing, MNIST, and CIFAR100 datasets are shown in Table 8. These networks are widely used in related studies. We apply ResNet18 [16] to CIFAR100. We split the networks following the interpretation of SplitNN in previous studies [2, 32]; a simplified example of such a split is sketched below.
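For concreteness, the sketch below shows one way such a split could be realized in PyTorch for ResNet18, placing a hypothetical cut after the first residual stage; the actual cut position and layer configuration are those given in Table 8, not this illustration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical split of ResNet18 for CIFAR100: the guest holds the layers up
# to (and including) the cut point; the host holds the rest. The exact cut
# position used in Table 8 may differ from this illustration.
full = resnet18(num_classes=100)

guest_part = nn.Sequential(  # runs on the guest, produces the smashed data
    full.conv1, full.bn1, full.relu, full.maxpool, full.layer1
)
host_part = nn.Sequential(   # runs on the host, consumes the smashed data
    full.layer2, full.layer3, full.layer4,
    full.avgpool, nn.Flatten(), full.fc
)

x = torch.randn(4, 3, 32, 32)   # a CIFAR100-sized batch
smashed = guest_part(x)          # sent from the guest to the host
logits = host_part(smashed)      # host completes the forward pass
```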
A.2 Supplementary Evaluation Results
To further investigate how our solution affects the learning process of SplitNN, we report the learning results of a MovieLens recommendation model protecting the privacy of the guest and of the host in Figs. 4 and 5, respectively. In each plot, we show the trends of training and testing accuracy as the training epoch increases. When \(\epsilon =0.1\) for the guest or the host, model usability is seriously degraded. The situation improves when the privacy budget increases to 1 for either party. We can conclude from the figures that our solution achieves satisfactory model usability even with a small privacy budget on either side of SplitNN.
In Table 9, we give a benchmark of SplitNN using different cut layers on two public datasets, MovieLens [15] and BookCrossing [43]. We also report the top-10 hit ratio on the test set in Table 9. We use min as the merging strategy and combine one linear layer with one ReLU as one cut layer, as sketched below. We observe little difference in accuracy between different cut layers. However, considering the computational cost on the guest side, selecting the first layer as the cut layer offers a good tradeoff between computational cost and model usability.
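The following sketch illustrates a cut layer of this form (one linear layer followed by ReLU) and the element-wise min merging strategy; the layer widths are arbitrary placeholders, not the dimensions used in our experiments.

```python
import torch
import torch.nn as nn

class CutLayer(nn.Module):
    """One cut layer as described above: a linear layer followed by a ReLU."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, x):
        return self.block(x)

# Hypothetical widths; the actual dimensions for MovieLens/BookCrossing differ.
guest_cut = CutLayer(in_dim=64, out_dim=32)   # guest-side features
host_cut = CutLayer(in_dim=64, out_dim=32)    # host-side features

guest_out = guest_cut(torch.randn(8, 64))
host_out = host_cut(torch.randn(8, 64))

# "min" merging strategy: element-wise minimum of the two parties' outputs,
# which is then fed into the top model that produces the recommendation score.
merged = torch.minimum(guest_out, host_out)
```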
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Mao, Y., Xin, Z., Li, Z., Hong, J., Yang, Q., Zhong, S. (2024). Secure Split Learning Against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks. In: Tsudik, G., Conti, M., Liang, K., Smaragdakis, G. (eds) Computer Security – ESORICS 2023. ESORICS 2023. Lecture Notes in Computer Science, vol 14347. Springer, Cham. https://doi.org/10.1007/978-3-031-51482-1_2
DOI: https://doi.org/10.1007/978-3-031-51482-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-51481-4
Online ISBN: 978-3-031-51482-1