DOI: 10.1145/3510513.3510531
Research article

Universal Interactive Verification Framework for Federated Learning Protocol

Published: 10 May 2022

ABSTRACT

The federated learning protocol provides a practical solution for large-scale deep learning in distributed scenarios. However, existing federated learning systems are vulnerable to many attacks and threats. This paper proposes a universal verification framework for the federated learning protocol, aiming to analyze the security and privacy risks of the interactions between clients and servers during training and inference. Based on reinforcement learning, the verification framework can adapt to a variety of conditions. Furthermore, it defines a set of interactive verification metrics for comprehensive evaluation: data confidentiality, privacy threat, model availability, model robustness, and system vulnerability.
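The abstract names five interactive verification metrics but does not specify how they are represented or combined. The sketch below is purely illustrative and not from the paper: it assumes a hypothetical record type holding the five metrics (scored per verified training round on a 0–1 scale) and shows one plausible way to aggregate per-round scores into an overall assessment.

```python
# Hypothetical sketch: the class and function names are illustrative
# assumptions, not the paper's actual framework or API.
from dataclasses import dataclass


@dataclass
class VerificationMetrics:
    """One round's scores for the five metrics (all on a 0.0-1.0 scale)."""
    data_confidentiality: float  # 1.0 = no observable leakage
    privacy_threat: float        # 1.0 = severe inference risk
    model_availability: float
    model_robustness: float
    system_vulnerability: float


def aggregate(rounds: list[VerificationMetrics]) -> dict[str, float]:
    """Average each metric over all verified client-server rounds."""
    names = VerificationMetrics.__dataclass_fields__
    return {
        name: sum(getattr(r, name) for r in rounds) / len(rounds)
        for name in names
    }


# Toy scores for two verified training rounds.
rounds = [
    VerificationMetrics(0.9, 0.2, 0.95, 0.8, 0.1),
    VerificationMetrics(0.8, 0.3, 0.90, 0.7, 0.2),
]
summary = aggregate(rounds)
print(summary["privacy_threat"])  # mean privacy-threat score across rounds
```

A real framework would derive these scores from observed protocol interactions (e.g., attack success rates) rather than assign them directly; a simple mean is only one of many possible aggregation policies.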


Published in:
ICNCC '21: Proceedings of the 2021 10th International Conference on Networks, Communication and Computing
December 2021, 146 pages
ISBN: 9781450385848
DOI: 10.1145/3510513
Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
