DOI: 10.1145/3534678.3539039

Felicitas: Federated Learning in Distributed Cross Device Collaborative Frameworks

Published: 14 August 2022

ABSTRACT

Felicitas is a distributed cross-device Federated Learning (FL) framework that addresses the industrial difficulties of FL in large-scale device deployment scenarios. In Felicitas, FL-Clients run on mobile or embedded devices, while the FL-Server runs on a cloud platform. We summarize the challenges of FL deployment in industrial cross-device scenarios (massive parallelism, stateless clients, no use of client identifiers, high unreliability, and unsteady, complex deployment) and provide reliable solutions to each. The source code and documentation are available at https://www.mindspore.cn/. Felicitas has also been deployed on mobile phones in the real world. Finally, we demonstrate the effectiveness of the framework through experiments.
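
The abstract names the deployment pattern but not its implementation. As a point of reference, the sketch below shows one round of the standard cross-device FedAvg algorithm (McMahan et al., 2017) that frameworks of this kind build on: the server samples a small fraction of stateless clients, skips devices that drop out mid-round, and averages the surviving updates weighted by local example count. This is a minimal sketch under stated assumptions; every identifier in it is illustrative and is not the Felicitas or MindSpore API.

```python
# Minimal, self-contained sketch of cross-device FedAvg (McMahan et al., 2017).
# All names are illustrative and are NOT the Felicitas / MindSpore API;
# the toy model is a 2-parameter linear fit y ~ w0 * x + w1.
import random
from typing import List, Tuple

Weights = List[float]                 # flattened model parameters
Dataset = List[Tuple[float, float]]   # (feature, label) pairs held on one device

def local_update(global_w: Weights, data: Dataset,
                 lr: float = 0.1) -> Tuple[Weights, int]:
    """One local epoch of SGD, starting from the current global model."""
    w = list(global_w)
    for x, y in data:
        err = w[0] * x + w[1] - y
        w[0] -= lr * err * x
        w[1] -= lr * err
    return w, len(data)

def fedavg_round(global_w: Weights, clients: List[Dataset],
                 sample_frac: float = 0.1, drop_prob: float = 0.3) -> Weights:
    """Sample a subset of clients, skip devices that drop out, and average
    the surviving updates weighted by local example count."""
    k = max(1, int(sample_frac * len(clients)))
    updates = []
    for data in random.sample(clients, k):
        if random.random() < drop_prob:    # device went offline mid-round
            continue
        updates.append(local_update(global_w, data))
    if not updates:                        # every sampled device dropped out
        return global_w                    # keep the current global model
    total = sum(n for _, n in updates)
    return [sum(w[i] * n for w, n in updates) / total
            for i in range(len(global_w))]

if __name__ == "__main__":
    random.seed(0)
    # 100 simulated devices, each holding 20 samples of y = 2x + 1.
    clients = [[(x, 2.0 * x + 1.0) for x in (random.random() for _ in range(20))]
               for _ in range(100)]
    w = [0.0, 0.0]
    for _ in range(50):
        w = fedavg_round(w, clients)
    print(w)  # should approach [2.0, 1.0]
```

Note the design point this illustrates: the server keeps no per-client state and never indexes devices by identifier; each round samples afresh and aggregates whatever comes back, which is what makes the scheme tolerant of massively parallel, unreliable device populations.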


Published in

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2022, 5033 pages
ISBN: 9781450393850
DOI: 10.1145/3534678
Copyright © 2022 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
