Abstract
The privacy of machine learning has become increasingly crucial, and many emerging technologies have been proposed to address it; federated learning (FL) is one of them. FL replaces raw-data transmission with gradient transmission to prevent the leakage of private data. However, recent research has shown that privacy can still be revealed from gradients together with a small amount of auxiliary information. To further examine the safety of the gradient-transmission mechanism, we propose a novel method, Privacy-Leaks From Gradients (PLFG), which infers sensitive information from gradients alone. To our knowledge, an attack under such a weak assumption is currently unique. PLFG uses the gradients obtained from victims in each iteration to build a dedicated model, and then updates an initial noise input through this model to fit the victims' private data. Experimental results demonstrate that even when only gradients are leveraged, users' privacy can be disclosed, and currently popular defenses (gradient noise addition and gradient compression) cannot defend against the attack effectively. Furthermore, we discuss the limitations and feasible improvements of PLFG. We hope our attack provides new ideas for future defenses of sensitive data.
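The core mechanism the abstract describes, updating initial noise until its induced gradients fit the victim's uploaded gradients, is in the spirit of gradient-matching reconstruction. The following is a minimal PyTorch sketch of that general idea, not the authors' actual PLFG implementation: the tiny linear model, the 28×28 input size, the LBFGS optimizer, and the iteration count are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' PLFG implementation): reconstructing a
# private input from a shared gradient by gradient matching. Assumes the
# attacker knows the model architecture and observes one gradient update.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in for the shared FL model

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against a soft (probability) label, so the label can be optimized too.
    return torch.mean(torch.sum(-soft_targets * torch.log_softmax(logits, dim=-1), dim=-1))

# --- Victim side: the gradient that would be uploaded in one FL iteration ---
x_victim = torch.rand(1, 1, 28, 28)                   # private image
y_victim = torch.zeros(1, 10); y_victim[0, 3] = 1.0   # private label (one-hot)
victim_grads = [g.detach() for g in torch.autograd.grad(
    soft_cross_entropy(model(x_victim), y_victim), model.parameters())]

# --- Attacker side: update initial noise until its gradient fits the victim's ---
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(30):
    def closure():
        optimizer.zero_grad()
        loss = soft_cross_entropy(model(x_dummy), y_dummy.softmax(dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between the dummy gradient and the observed victim gradient.
        grad_diff = sum(((dg - vg) ** 2).sum() for dg, vg in zip(dummy_grads, victim_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

# x_dummy now approximates the victim's private input x_victim.
```

Defenses mentioned in the abstract can be simulated in this sketch by perturbing `victim_grads` before the matching loop, e.g. adding Gaussian noise or zeroing small-magnitude entries (gradient compression), to observe how well the reconstruction survives them.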