ABSTRACT
The federated learning protocol provides a practical solution for large-scale deep learning in distributed scenarios. However, existing federated learning systems are vulnerable to many attacks and threats. This paper proposes a universal verification framework for the federated learning protocol, aiming to analyze the security and privacy risks of interactions between clients and servers during training and inference. Based on reinforcement learning, our verification framework can adapt to various conditions. Furthermore, it presents a set of interactive verification metrics for comprehensive evaluation, including data confidentiality, privacy threat, model availability, model robustness, and system vulnerability.
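The abstract does not detail the framework's internals, so the following Python sketch is purely illustrative: it assumes a simple epsilon-greedy, bandit-style agent that chooses which verification probe to run next and accumulates a risk score for each of the five metric categories named above. All names here (ProbeArm, VerificationAgent, run_probe) are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch only: every class, probe, and scoring rule below is a
# hypothetical stand-in for how an RL-driven verification loop over the listed
# metrics might be organized.
import random
from dataclasses import dataclass


@dataclass
class ProbeArm:
    """A verification probe (e.g., a membership-inference or poisoning test)."""
    name: str
    metric: str        # which metric category this probe scores
    pulls: int = 0
    value: float = 0.0  # running mean of the observed risk signal


class VerificationAgent:
    """Epsilon-greedy bandit that decides which probe to run next."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = arms
        self.epsilon = epsilon

    def select(self):
        # Explore a random probe occasionally, otherwise exploit the riskiest one.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: a.value)

    def update(self, arm, reward):
        # Incremental mean update of the probe's observed risk.
        arm.pulls += 1
        arm.value += (reward - arm.value) / arm.pulls


def run_probe(arm):
    """Placeholder environment step: a real system would execute the probe
    against the client/server interaction and return an observed risk score."""
    return random.random()


def verify(rounds=200):
    metrics = ["data confidentiality", "privacy threat", "model availability",
               "model robustness", "system vulnerability"]
    arms = [ProbeArm(name=f"probe_{m.replace(' ', '_')}", metric=m) for m in metrics]
    agent = VerificationAgent(arms)
    for _ in range(rounds):
        arm = agent.select()
        agent.update(arm, run_probe(arm))
    # Report a per-metric risk estimate after the interactive verification run.
    return {a.metric: round(a.value, 3) for a in arms}


if __name__ == "__main__":
    print(verify())
```

In a real deployment, run_probe would drive actual client/server interactions (e.g., launching an inference or poisoning attempt) and the returned reward would be a measured leakage or degradation score, but those details are not specified in the abstract.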