Abstract
Federated Recommendation (FR) has received considerable attention in recent years. In FR, each user's latent vector and interaction data are kept on the user's local device and are thus private to others. However, keeping the training data local does not guarantee that the user's privacy is preserved. In this paper, we show that existing FR systems are vulnerable to a new reconstruction attack, in which the attacker leverages a semi-trusted FR server to launch the attack. The server rigidly follows the FR protocol, but the attacker may compromise the system's security by analyzing the gradient updates the server receives. Specifically, we design the Generative Reconstruction Network (GRN), a model reconstruction attack against FR that aims to generate the target user's (i.e., the victim's) latent vector, which encodes the user's sensitive information. A server-side generator takes random vectors as inputs and outputs generated latent vectors; it is trained by minimizing the distance between the victim's real gradient updates and the gradient updates induced by the generated vectors. We explain why the generator successfully learns the target latent vector's distribution and thereby probes the victim's privacy. Experimental results demonstrate the proposed attack's effectiveness and its superiority over baseline attacks.
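The core idea of the attack — recovering a private user latent vector by matching the gradient updates it would produce against the updates the server actually observes — can be illustrated with a minimal sketch. The sketch below assumes a simple matrix-factorization recommender where the server holds the item embeddings and receives the victim's per-item gradients; for brevity it optimizes a candidate latent vector directly by gradient descent on the gradient-mismatch loss, whereas the paper trains a generator network for this purpose. All variable names and dimensions here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_items = 4, 30
V = rng.uniform(-0.5, 0.5, (n_items, d))   # item embeddings (known to the server)
u_true = rng.uniform(-0.5, 0.5, d)         # victim's private latent vector
r = rng.uniform(0.0, 1.0, n_items)         # victim's private ratings

def item_grads(u):
    """Per-item gradients of the MF loss 0.5*(u.v_i - r_i)^2 w.r.t. v_i."""
    e = V @ u - r                          # prediction errors, shape (n_items,)
    return e[:, None] * u[None, :]         # shape (n_items, d)

g_victim = item_grads(u_true)              # the updates the server observes

def attack_loss_and_grad(u):
    """Mean squared mismatch between induced and observed gradients, and its gradient."""
    e = V @ u - r
    f = e[:, None] * u[None, :] - g_victim # per-item gradient mismatch
    loss = np.mean(np.sum(f * f, axis=1))
    # Jacobian of e_i*u w.r.t. u is u v_i^T + e_i I, so by the chain rule:
    grad = 2.0 * np.mean(V * (f @ u)[:, None] + e[:, None] * f, axis=0)
    return loss, grad

u_hat = rng.uniform(-0.1, 0.1, d)          # attacker's initial guess
losses = []
for _ in range(3000):
    loss, grad = attack_loss_and_grad(u_hat)
    losses.append(loss)
    u_hat -= 0.05 * grad

print(f"mismatch loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

Because the observed gradients are a deterministic function of the private latent vector, driving the mismatch loss toward zero pulls the candidate vector toward the victim's; replacing the direct optimization with a generator that maps random noise to candidate vectors, as GRN does, additionally lets the attacker model a distribution over plausible latent vectors.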
Acknowledgement
This work is partially supported by the National Natural Science Foundation of China (Nos. 62072349 and U1811263), the Technological Innovation Major Program of Hubei Province (No. 2021BEE057), and the Technological Innovation Major Program of China Tobacco Corporation (No. 110202102031).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, Z., Song, W. (2023). A New Reconstruction Attack: User Latent Vector Leakage in Federated Recommendation. In: Wang, X., et al. Database Systems for Advanced Applications. DASFAA 2023. Lecture Notes in Computer Science, vol 13944. Springer, Cham. https://doi.org/10.1007/978-3-031-30672-3_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30671-6
Online ISBN: 978-3-031-30672-3