PLFG: A Privacy Attack Method Based on Gradients for Federated Learning

  • Conference paper
  • In: Security and Privacy in Digital Economy (SPDE 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1268)

Abstract

Privacy in machine learning has become increasingly crucial, and many emerging technologies have been developed to address the problem; federated learning (FL) is one of them. FL replaces the transmission of raw data with the transmission of gradients to prevent the leakage of private data. However, recent research has shown that privacy can be revealed from gradients combined with only a little auxiliary information. To further examine the safety of the gradient-transmission mechanism, we propose a novel method called Privacy-leaks From Gradients (PLFG), which infers sensitive information from gradients alone. To our knowledge, an assumption this weak is currently unique. PLFG uses the gradients obtained from victims in each iteration to build a special model, and then updates initial noise through this model to fit the victims' private data. Experimental results demonstrate that even when only gradients are leveraged, users' privacy can be disclosed, and currently popular defenses (gradient noise addition and gradient compression) cannot defend against the attack effectively. Furthermore, we discuss the limitations of PLFG and feasible improvements to it. We hope our attack can provide new ideas for future attempts to protect sensitive data.
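The core idea the abstract describes, updating initial noise until it reproduces the gradient a victim computed on private data, can be illustrated with a toy gradient-matching example. This is a minimal sketch under assumed conditions: a linear model with squared loss, a single private sample, and plain gradient descent on the matching objective. The model, loss, and hyperparameters below are illustrative assumptions, not the paper's actual PLFG architecture.

```python
import numpy as np

# Toy gradient-matching reconstruction: the attacker observes only g_victim,
# the gradient the victim computed on private data, and fits noise to it.
# Model, loss, and all hyperparameters are illustrative assumptions.

w = np.array([1.0, -0.5, 0.3, 0.8])       # shared model weights (public)
x_true = np.array([0.7, 1.2, -0.4, 0.5])  # victim's private input
y_true = 1.0                              # victim's private label

def model_grad(x, y):
    # gradient of the loss 0.5 * (w @ x - y)**2 with respect to w
    return (w @ x - y) * x

g_victim = model_grad(x_true, y_true)     # all the attacker ever sees

# Start from noise and descend on the gradient-matching objective
# L = 0.5 * ||model_grad(x_hat, y_hat) - g_victim||^2.
x_hat = np.array([0.1, -0.1, 0.2, 0.1])   # initial "noise" guess
y_hat = 0.0
lr = 0.02
for _ in range(100_000):
    r = w @ x_hat - y_hat                    # model residual on dummy data
    d = r * x_hat - g_victim                 # gradient-matching residual
    x_hat -= lr * ((d @ x_hat) * w + r * d)  # dL/dx_hat via the chain rule
    y_hat -= lr * (-(d @ x_hat))             # dL/dy_hat

# How well did the attack do?
match = np.linalg.norm(model_grad(x_hat, y_hat) - g_victim)
cos = abs(x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
```

Because this model's gradient is (w @ x - y) * x, an exact gradient match pins down the private input only up to scale, so success is measured by direction (cosine similarity) rather than raw values.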


Author information

Corresponding author

Correspondence to Feng Wu.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wu, F. (2020). PLFG: A Privacy Attack Method Based on Gradients for Federated Learning. In: Yu, S., Mueller, P., Qian, J. (eds) Security and Privacy in Digital Economy. SPDE 2020. Communications in Computer and Information Science, vol 1268. Springer, Singapore. https://doi.org/10.1007/978-981-15-9129-7_14

  • DOI: https://doi.org/10.1007/978-981-15-9129-7_14

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-9128-0

  • Online ISBN: 978-981-15-9129-7

  • eBook Packages: Computer Science (R0)
