Logits Poisoning Attack in Federated Distillation

  • Conference paper
  • Knowledge Science, Engineering and Management (KSEM 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14886)

Abstract

Federated Distillation (FD) is a novel and promising distributed machine learning paradigm in which knowledge distillation is leveraged to enable more efficient and flexible cross-device knowledge transfer in federated learning. By optimizing local models with knowledge distillation, FD avoids uploading large-scale model parameters to the central server while keeping raw data on local clients. Despite the growing popularity of FD, prior work has paid little attention to poisoning attacks within this framework, leaving its vulnerability to adversarial actions poorly understood. To this end, we introduce the Federated Distillation Logits Attack (FDLA), a poisoning attack method tailored to FD. FDLA manipulates the logits communicated in FD, aiming to significantly degrade client model performance by misleading the discrimination of private samples. Through extensive simulation experiments across a variety of datasets, attack scenarios, and FD configurations, we demonstrate that FDLA effectively compromises client model accuracy, outperforming established baseline algorithms in this regard. Our findings underscore the critical need for robust defense mechanisms in FD settings to mitigate such adversarial threats.
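
The abstract does not spell out the exact manipulation FDLA applies to the transmitted logits. As a minimal, purely illustrative sketch of what a logits-poisoning step in FD could look like, the code below has a malicious client swap each sample's largest logit with a lower-ranked one before upload, so that the shared knowledge misleads class discrimination on other clients. The name poison_logits, the swap_rank parameter, and the swap rule itself are assumptions made for illustration, not the authors' algorithm.

    import numpy as np

    def poison_logits(logits: np.ndarray, swap_rank: int = 1) -> np.ndarray:
        # Hypothetical poisoning rule (illustration only, not the FDLA rule):
        # for every sample, swap the top-1 logit with the logit ranked
        # swap_rank places below it, steering other clients toward the
        # wrong class for that sample.
        poisoned = logits.copy()
        for row in poisoned:                       # each row = one sample's logits
            order = np.argsort(row)[::-1]          # class indices, highest logit first
            top = order[0]
            target = order[min(swap_rank, len(row) - 1)]
            row[top], row[target] = row[target], row[top]
        return poisoned

    # Example: a malicious client poisons its per-sample logits before communication.
    clean = np.array([[4.2, 1.1, 0.3],
                      [0.2, 3.5, 1.9]])
    print(poison_logits(clean))

Because only the uploaded logits are altered, an attack of this shape needs no access to other clients' data or model parameters, which matches the communication-level poisoning the abstract describes rather than a training-data attack.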

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities under Grant 2021JBM008 and Grant 2022JBXT001, and in part by the National Natural Science Foundation of China (NSFC) under Grant 61872028.

Author information

Corresponding author

Correspondence to Bo Gao.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Tang, Y., Wu, Z., Gao, B., Wen, T., Wang, Y., Sun, S. (2024). Logits Poisoning Attack in Federated Distillation. In: Cao, C., Chen, H., Zhao, L., Arshad, J., Asyhari, T., Wang, Y. (eds) Knowledge Science, Engineering and Management. KSEM 2024. Lecture Notes in Computer Science (LNAI), vol 14886. Springer, Singapore. https://doi.org/10.1007/978-981-97-5498-4_22

Download citation

  • DOI: https://doi.org/10.1007/978-981-97-5498-4_22

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5497-7

  • Online ISBN: 978-981-97-5498-4

  • eBook Packages: Computer Science, Computer Science (R0)
