FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning

  • Conference paper
  • First Online:
  • Part of the proceedings: Web and Big Data (APWeb-WAIM 2022)

Abstract

With the enhanced sensing and computing power of mobile devices, emerging technologies and applications generate massive amounts of data at the network edge, imposing new requirements on privacy and security. Federated learning is a significant advance in protecting the privacy of intelligent IoT applications. However, recent research reveals an inherent vulnerability of federated learning: an adversary can recover private training data from the publicly shared gradients. Although model distillation is considered a state-of-the-art technique for mitigating gradient leakage by hiding the gradients, our experiments show that it still carries a risk of membership information leakage at the client level. In this paper, we propose a novel client-based membership inference attack against federated distillation learning. Specifically, we first comprehensively analyze the defensive capability of model distillation against deep gradient leakage. Then, exploiting the fact that an adversary can learn other participants' model structure and behavior during federated distillation, we design membership inference attacks against other participants based on the private models of malicious clients. Experimental results demonstrate the superiority of the proposed method: it efficiently resolves deep leakage from gradients while achieving high membership inference accuracy in federated distillation learning.
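
To make the setting described above concrete, the following is a minimal sketch, not the authors' code (which is not included in this preview): clients train privately and exchange only aggregated soft labels on a shared public set (federated distillation, so no gradients leave a device), and a malicious client then uses its own distillation-influenced private model to run a simple confidence-threshold membership inference against another participant's records. All names, network sizes, hyperparameters, synthetic data, and the threshold rule itself are illustrative assumptions standing in for the paper's actual attack design.

# Minimal illustrative sketch (assumption-laden, not the paper's implementation):
# one federated-distillation exchange followed by a confidence-threshold
# membership inference run by a malicious client.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_client_model(in_dim=20, n_classes=2):
    # Each client keeps a private model; only its outputs on public data leave the device.
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

clients = [make_client_model() for _ in range(3)]
# Synthetic private datasets (one per client) and a shared unlabeled public set.
private_data = [(torch.randn(64, 20), torch.randint(0, 2, (64,))) for _ in clients]
public_x = torch.randn(128, 20)

def local_train(model, x, y, soft_targets=None, temp=2.0, steps=30, lr=0.1):
    # Supervised loss on private labels plus, if provided, a standard
    # knowledge-distillation (KL) term toward the aggregated public soft labels.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        if soft_targets is not None:
            log_p = F.log_softmax(model(public_x) / temp, dim=1)
            loss = loss + F.kl_div(log_p, soft_targets, reduction="batchmean")
        loss.backward()
        opt.step()

# Round 1: purely local training; clients then share only soft labels on public_x.
for model, (x, y) in zip(clients, private_data):
    local_train(model, x, y)
with torch.no_grad():
    aggregated_soft = torch.stack(
        [F.softmax(m(public_x) / 2.0, dim=1) for m in clients]
    ).mean(dim=0)

# Round 2: each client distills from the aggregated soft labels (no gradients exchanged).
for model, (x, y) in zip(clients, private_data):
    local_train(model, x, y, soft_targets=aggregated_soft)

# Malicious client 0 uses its own private model to guess whether candidate records
# belong to client 1's training set: records on which the model is unusually
# confident are declared members. This simple threshold rule is a stand-in for the
# paper's client-based attack, whose design the abstract does not spell out.
attacker = clients[0]
members_x, _ = private_data[1]          # true members of the target client
non_members_x = torch.randn(64, 20)     # fresh records never seen in training
with torch.no_grad():
    conf_in = F.softmax(attacker(members_x), dim=1).max(dim=1).values
    conf_out = F.softmax(attacker(non_members_x), dim=1).max(dim=1).values
threshold = 0.5 * (conf_in.mean() + conf_out.mean())
attack_acc = 0.5 * ((conf_in > threshold).float().mean()
                    + (conf_out <= threshold).float().mean())
print(f"membership inference accuracy on this synthetic toy setup: {attack_acc.item():.2f}")

On random toy data the attack accuracy will hover around chance; the point of the sketch is only to show where the attack surface sits in federated distillation (the attacker's own private model, shaped by the aggregated soft labels), not to reproduce the paper's results.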

Author information

Corresponding author

Correspondence to Yanchao Zhao.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yang, Z., Zhao, Y., Zhang, J. (2023). FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning. In: Li, B., Yue, L., Tao, C., Han, X., Calvanese, D., Amagasa, T. (eds) Web and Big Data. APWeb-WAIM 2022. Lecture Notes in Computer Science, vol 13423. Springer, Cham. https://doi.org/10.1007/978-3-031-25201-3_28

  • DOI: https://doi.org/10.1007/978-3-031-25201-3_28

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25200-6

  • Online ISBN: 978-3-031-25201-3

  • eBook Packages: Computer Science, Computer Science (R0)
