Abstract
With the enhanced sensing and computing power of mobile devices, emerging technologies and applications generate massive amounts of data at the network edge, imposing new requirements on privacy and security. Federated learning marks a significant advance in protecting the privacy of intelligent IoT applications. However, recent research reveals an inherent vulnerability of federated learning: an adversary can recover private training data from the publicly shared gradients. Although model distillation is considered a state-of-the-art technique for mitigating gradient leakage by hiding the gradients themselves, our experiments show that it still risks leaking membership information at the client level. In this paper, we propose a novel client-based membership inference attack on federated distillation learning. Specifically, we first comprehensively analyze the defensive capability of model distillation against deep gradient leakage. Then, exploiting the fact that an adversary can observe other participants’ model structure and behavior during federated distillation, we design membership inference attacks against other participants based on the private models of malicious clients. Experimental results demonstrate that our method both resolves deep leakage from gradients efficiently and achieves high membership inference accuracy in federated distillation learning.
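Membership inference attacks of the kind described above typically exploit the fact that a model behaves more confidently on samples it was trained on than on unseen data. As a rough illustration only (not the paper's actual attack), a minimal confidence-thresholding sketch in the spirit of Shokri et al. might look like the following; the function name `confidence_mi_attack` and the threshold value are assumptions for this example:

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_mi_attack(logits, threshold=0.9):
    """Predict 'member' (1) when the model's top softmax confidence
    exceeds the threshold, exploiting overfitting on training data."""
    conf = softmax(logits).max(axis=1)
    return (conf > threshold).astype(int)

# Toy demo: "member" samples get sharper (more confident) logits
# than "non-member" samples.
member_logits = np.array([[8.0, 0.5, 0.2], [7.5, 1.0, 0.3]])
nonmember_logits = np.array([[1.2, 1.0, 0.9], [0.8, 1.1, 1.0]])
print(confidence_mi_attack(member_logits))     # → [1 1]
print(confidence_mi_attack(nonmember_logits))  # → [0 0]
```

In the federated distillation setting the paper targets, the attacker would apply such a distinguishing test using the model behavior it observes from other clients during the distillation exchange, rather than raw gradients.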
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, Z., Zhao, Y., Zhang, J. (2023). FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning. In: Li, B., Yue, L., Tao, C., Han, X., Calvanese, D., Amagasa, T. (eds) Web and Big Data. APWeb-WAIM 2022. Lecture Notes in Computer Science, vol 13423. Springer, Cham. https://doi.org/10.1007/978-3-031-25201-3_28
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-25200-6
Online ISBN: 978-3-031-25201-3