Abstract
Federated Distillation (FD) is a novel and promising distributed machine learning paradigm in which knowledge distillation enables more efficient and flexible cross-device knowledge transfer in federated learning. By optimizing local models through knowledge distillation, FD avoids uploading large-scale model parameters to the central server while keeping raw data on local clients. Despite the growing popularity of FD, prior work has largely left poisoning attacks within this framework unexplored, so its vulnerability to adversarial manipulation remains poorly understood. To this end, we introduce the Federated Distillation Logits Attack (FDLA), a poisoning attack method tailored for FD. FDLA manipulates the logits communicated in FD so as to mislead the discrimination of private samples and thereby significantly degrade client model performance. Through extensive simulation experiments across a variety of datasets, attack scenarios, and FD configurations, we demonstrate that FDLA effectively compromises client model accuracy, outperforming established baseline attacks in this regard. Our findings underscore the critical need for robust defense mechanisms in FD settings to mitigate such adversarial threats.
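For intuition, the short Python sketch below illustrates one way a malicious client could tamper with the logits it uploads in an FD round: it inverts the confidence ranking of each logit vector so that peers distilling from these outputs are steered toward the least likely classes. The function name and the ranking-inversion strategy are illustrative assumptions for this sketch, not the exact FDLA procedure defined in the paper.

    import numpy as np

    def poison_logits(logits: np.ndarray) -> np.ndarray:
        """Invert the confidence ranking of each logit vector: the class that
        originally scored highest receives the lowest value, and vice versa."""
        poisoned = np.empty_like(logits)
        for i, row in enumerate(logits):
            order = np.argsort(row)                # class indices, least to most confident
            poisoned[i, order] = row[order][::-1]  # assign the sorted values in reverse
        return poisoned

    if __name__ == "__main__":
        honest = np.array([[2.0, 0.1, -1.3],
                           [0.5, 3.2, 1.1]])
        print(poison_logits(honest))  # what a malicious client would upload instead of its honest logits

In a full FD pipeline, a benign client would upload the raw outputs of its local model on the shared (or proxy) samples; an attacker of this kind simply applies such a transformation immediately before communication, leaving its own training untouched.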
Acknowledgments
This work was supported in part by the Fundamental Research Funds for the Central Universities under Grant 2021JBM008 and Grant 2022JBXT001, and in part by the National Natural Science Foundation of China (NSFC) under Grant 61872028.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Tang, Y., Wu, Z., Gao, B., Wen, T., Wang, Y., Sun, S. (2024). Logits Poisoning Attack in Federated Distillation. In: Cao, C., Chen, H., Zhao, L., Arshad, J., Asyhari, T., Wang, Y. (eds) Knowledge Science, Engineering and Management. KSEM 2024. Lecture Notes in Computer Science, vol 14886. Springer, Singapore. https://doi.org/10.1007/978-981-97-5498-4_22
DOI: https://doi.org/10.1007/978-981-97-5498-4_22
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-5497-7
Online ISBN: 978-981-97-5498-4