Abstract
Training a deep neural network requires substantial data and intensive computing resources, and the resulting cost puts many potential applications of deep learning out of reach. Moreover, gathering users' private data for centralized training is risky. Federated learning has emerged as a promising solution that lets users learn a model jointly while keeping their training data local. However, security issues keep arising in federated learning applications. One of the most threatening is the model poisoning attack, which can manipulate the inference results of a jointly learned model. Recent studies show that elaborate model poisoning approaches can even breach existing Byzantine-robust federated learning solutions. Hence, it is critical to explore alternative defenses for federated learning. In this paper, we propose to protect federated learning against model poisoning attacks with a robust model aggregation solution named Romoa. Unlike previous studies, Romoa handles both targeted and untargeted poisoning attacks with a unified approach. Moreover, by constructing a new similarity measurement, Romoa achieves more precise attack detection and better fairness for federated learning participants. Through a comprehensive evaluation on standard datasets, we conclude that Romoa provides a strong defense against model poisoning attacks, including those that breach Byzantine-robust federated learning solutions.
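Romoa's actual similarity measurement and aggregation rule are defined in the body of the chapter, which is not reproduced here. As a rough illustration of the general idea the abstract describes, the sketch below down-weights client updates whose direction diverges from a trusted reference update. Everything in it is an assumption for illustration: the function name, the choice of cosine similarity, the reference direction, and the zero-update fallback are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def similarity_weighted_aggregate(client_updates, reference):
    """Similarity-weighted aggregation sketch (NOT Romoa's actual rule).

    client_updates: list of 1-D arrays, one flattened model update per client
    reference: 1-D array, a trusted reference direction, e.g. the previous
               round's global update
    """
    eps = 1e-12
    ref_norm = np.linalg.norm(reference) + eps
    # Cosine similarity of each update to the reference; negative scores
    # (updates opposing the reference, typical of untargeted poisoning)
    # are clipped to zero so they get no weight in the average.
    scores = np.array([
        max(float(np.dot(u, reference)) / ((np.linalg.norm(u) + eps) * ref_norm), 0.0)
        for u in client_updates
    ])
    if scores.sum() == 0.0:
        # Every update looked hostile: skip this round instead of averaging.
        return np.zeros_like(reference)
    weights = scores / scores.sum()
    return np.sum([w * u for w, u in zip(weights, client_updates)], axis=0)

# Toy usage: four benign updates plus one flipped (poisoned) update. The mean
# of the benign updates stands in for a trusted reference, purely for the demo.
rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.01, 10) for _ in range(4)]
poisoned = [-5.0 * honest[0]]
reference = np.mean(honest, axis=0)
aggregated = similarity_weighted_aggregate(honest + poisoned, reference)
```

The poisoned update receives zero weight because its cosine similarity to the reference is negative, so the aggregate stays close to the benign mean; a real defense would, among other things, need a reference that does not itself depend on untrusted updates.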
Notes
- 1.
- 2. For more details of this approach, please refer to [9]. We will use UPA as a general notation in this paper.
- 3. Detailed information on the DNN architectures we used is given in the appendix.
- 4. Since our FL setting differs from those of Krum and RFA, the results may vary slightly, but this does not affect the main conclusions.
References
Bagdasaryan, E., Veit, A., Hua, Y., Estrin, D., Shmatikov, V.: How to backdoor federated learning. In: International Conference on Artificial Intelligence and Statistics, pp. 2938–2948 (2020)
Bell, J.H., Bonawitz, K.A., Gascón, A., Lepoint, T., Raykova, M.: Secure single-server aggregation with (poly)logarithmic overhead. In: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 1253–1269 (2020)
Bhagoji, A.N., Chakraborty, S., Mittal, P., Calo, S.: Analyzing federated learning through an adversarial lens. In: International Conference on Machine Learning, pp. 634–643 (2019)
Blanchard, P., El Mhamdi, E.M., Guerraoui, R., Stainer, J.: Machine learning with adversaries: byzantine tolerant gradient descent. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 118–128 (2017)
Bonawitz, K., et al.: Practical secure aggregation for privacy-preserving machine learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175–1191 (2017)
Cao, X., Fang, M., Liu, J., Gong, N.Z.: Fltrust: byzantine-robust federated learning via trust bootstrapping. In: 28th Annual Network and Distributed System Security Symposium, NDSS (2021)
Cao, X., Jia, J., Gong, N.Z.: Data poisoning attacks to local differential privacy protocols. In: 30th USENIX Security Symposium (USENIX Security 21) (2021)
Chen, X., Liu, C., Li, B., Lu, K., Song, D.: Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526 (2017)
Fang, M., Cao, X., Jia, J., Gong, N.: Local model poisoning attacks to byzantine-robust federated learning. In: 29th USENIX Security Symposium (USENIX Security 20), pp. 1605–1622 (2020)
Fukunaga, K., Hostetler, L.: The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. Inf. Theory 21(1), 32–40 (1975)
Fung, C., Yoon, C.J., Beschastnikh, I.: The limitations of federated learning in sybil settings. In: 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), pp. 301–316 (2020)
Gu, T., Dolan-Gavitt, B., Garg, S.: Badnets: identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733 (2017)
Han, Y., Zhang, X.: Robust federated learning via collaborative machine teaching. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 4075–4082 (2020)
Huang, H., Mu, J., Gong, N.Z., Li, Q., Liu, B., Xu, M.: Data poisoning attacks to deep learning based recommender systems. In: 28th Annual Network and Distributed System Security Symposium, NDSS (2021)
Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., Li, B.: Manipulating machine learning: poisoning attacks and countermeasures for regression learning. In: 2018 IEEE Symposium on Security and Privacy (SP), pp. 19–35 (2018)
John, P.G., Vijaykeerthy, D., Saha, D.: Verifying individual fairness in machine learning models. In: Conference on Uncertainty in Artificial Intelligence, pp. 749–758 (2020)
Kadhe, S., Rajaraman, N., Koyluoglu, O.O., Ramchandran, K.: Fastsecagg: scalable secure aggregation for privacy-preserving federated learning. arXiv preprint arXiv:2009.11248 (2020)
Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images (2009)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Lin, T., Kong, L., Stich, S.U., Jaggi, M.: Ensemble distillation for robust model fusion in federated learning. In: Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (2020)
McMahan, B., Moore, E., Ramage, D., Hampson, S., Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, pp. 1273–1282 (2017)
Muñoz-González, L., Co, K.T., Lupu, E.C.: Byzantine-robust federated machine learning through adaptive model averaging. arXiv preprint arXiv:1909.05125 (2019)
Nasr, M., Shokri, R., Houmansadr, A.: Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In: 2019 IEEE Symposium on Security and Privacy (SP), pp. 739–753 (2019)
Portnoy, A., Hendler, D.: Towards realistic byzantine-robust federated learning. arXiv preprint arXiv:2004.04986 (2020)
Reisizadeh, A., Farnia, F., Pedarsani, R., Jadbabaie, A.: Robust federated learning: the case of affine distribution shifts. In: Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems (2020)
Shafahi, A., et al.: Poison frogs! targeted clean-label poisoning attacks on neural networks. In: Advances in Neural Information Processing Systems, pp. 6103–6113 (2018)
Shen, S., Tople, S., Saxena, P.: Auror: defending against poisoning attacks in collaborative deep learning systems. In: Proceedings of the 32nd Annual Conference on Computer Security Applications, pp. 508–519 (2016)
Shokri, R., Shmatikov, V.: Privacy-preserving deep learning. In: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 1310–1321 (2015)
Suciu, O., Marginean, R., Kaya, Y., Daume III, H., Dumitras, T.: When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 1299–1316 (2018)
Toch, E., et al.: The privacy implications of cyber security systems: a technological survey. ACM Comput. Surv. (CSUR) 51(2), 1–27 (2018)
Tolpegin, V., Truex, S., Gursoy, M.E., Liu, L.: Data poisoning attacks against federated learning systems. In: European Symposium on Research in Computer Security, pp. 480–501 (2020)
Wang, H., et al.: Attack of the tails: yes, you really can backdoor federated learning. In: Advances in Neural Information Processing Systems (2020)
Wu, W., He, L., Lin, W., Mao, R., Maple, C., Jarvis, S.A.: Safa: a semi-asynchronous protocol for fast federated learning with low overhead. IEEE Trans. Comput. (2020)
Xie, C., Koyejo, S., Gupta, I.: Asynchronous federated optimization. arXiv preprint arXiv:1903.03934 (2019)
Yeom, S., Fredrikson, M.: Individual fairness revisited: transferring techniques from adversarial robustness. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI, pp. 437–443 (2020)
Yin, D., Chen, Y., Kannan, R., Bartlett, P.: Byzantine-robust distributed learning: towards optimal statistical rates. In: International Conference on Machine Learning, pp. 5650–5659 (2018)
Yin, D., Chen, Y., Kannan, R., Bartlett, P.: Defending against saddle point attack in byzantine-robust distributed learning. In: International Conference on Machine Learning, pp. 7074–7084 (2019)
Zhang, J., Chen, J., Wu, D., Chen, B., Yu, S.: Poisoning attack in federated learning using generative adversarial nets. In: 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), pp. 374–380 (2019)
Zhang, M.R., Lucas, J., Hinton, G., Ba, J.: Lookahead optimizer: k steps forward, 1 step back. In: Advances in Neural Information Processing Systems, pp. 9597–9608 (2019)
Zheng, S., et al.: Asynchronous stochastic gradient descent with delay compensation. In: International Conference on Machine Learning, pp. 4120–4129 (2017)
Acknowledgement
The authors would like to thank the reviewers for their helpful comments. This work was supported in part by the National Key R&D Program of China under Grant 2020YFB1005900, NSFC-61902176, BK20190294, NSFC-61872176, and the Leading-edge Technology Program of the Jiangsu Natural Science Foundation (No. BK20202001).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Mao, Y., Yuan, X., Zhao, X., Zhong, S. (2021). Romoa: Robust Model Aggregation for the Resistance of Federated Learning to Model Poisoning Attacks. In: Bertino, E., Shulman, H., Waidner, M. (eds) Computer Security – ESORICS 2021. ESORICS 2021. Lecture Notes in Computer Science(), vol 12972. Springer, Cham. https://doi.org/10.1007/978-3-030-88418-5_23
DOI: https://doi.org/10.1007/978-3-030-88418-5_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88417-8
Online ISBN: 978-3-030-88418-5