Research Article · Public Access
DOI: 10.1145/3434770.3459734

Accelerated Training via Device Similarity in Federated Learning

Published: 26 April 2021

ABSTRACT

Federated Learning is a privacy-preserving machine learning technique that produces a globally shared model through in-situ model training on distributed devices. These systems often comprise millions of user devices, and only a subset of the available devices can be used for training in each epoch. Designing a device selection strategy is challenging because devices are highly heterogeneous in both their system resources and their training data. This heterogeneity makes device selection crucial for timely model convergence and sufficient model accuracy. Existing approaches have addressed system heterogeneity in device selection but have largely ignored data heterogeneity. In this work, we analyze the impact of data heterogeneity on device selection, model convergence, model accuracy, and fault tolerance in a federated learning setting. Based on our analysis, we propose that clustering devices with similar data distributions, and then selecting the devices with the best processing capacity from each cluster, can significantly improve model convergence without compromising model accuracy. This clustering also guides the design of fault-tolerance policies for the system. We propose three methods for identifying groups of devices with similar data distributions, and we identify and discuss the rich trade-offs between privacy, bandwidth consumption, and computation overhead for each proposed method. Our preliminary experiments show that the proposed methods can reduce training time by 46%-58% compared to existing approaches while reaching the same accuracy.
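The abstract describes the selection strategy only at a high level. The sketch below is a minimal illustration of that idea, assuming each device reports a label-distribution vector and a processing-speed estimate: it groups devices by distribution similarity (here, a greedy threshold clustering under the Bhattacharyya distance, one plausible similarity measure; the paper's three actual grouping methods are not reproduced here) and then picks the fastest device from each group. All function names, the threshold, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between two discrete label distributions (0 = identical)."""
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))     # guard against log(0)

def cluster_by_distribution(label_dists, threshold=0.05):
    """Greedily group devices whose label distribution is within `threshold`
    of a cluster representative. Returns a list of device-index lists."""
    clusters = []                      # list of (representative, member ids)
    for dev, dist in enumerate(label_dists):
        for rep, members in clusters:
            if bhattacharyya_distance(rep, dist) < threshold:
                members.append(dev)
                break
        else:                          # no close cluster: start a new one
            clusters.append((dist, [dev]))
    return [members for _, members in clusters]

def select_devices(label_dists, speeds, threshold=0.05):
    """Pick the fastest device from each data-similarity cluster."""
    return [max(members, key=lambda d: speeds[d])
            for members in cluster_by_distribution(label_dists, threshold)]

# Toy example: six devices over three classes; devices 0-2 and 3-5 hold
# near-identical label distributions, so two clusters should emerge.
rng = np.random.default_rng(0)
dists = np.array([[0.8, 0.1, 0.1]] * 3 + [[0.1, 0.1, 0.8]] * 3, dtype=float)
dists += rng.uniform(0.0, 0.01, dists.shape)       # small per-device noise
dists /= dists.sum(axis=1, keepdims=True)          # renormalize each row
speeds = [1.0, 3.0, 2.0, 5.0, 1.5, 4.0]            # e.g., samples per second

print(select_devices(dists, speeds))               # fastest per cluster: [1, 3]
```

Note that even in this toy form, having devices report raw label distributions to the server is itself a privacy and bandwidth decision; that is exactly the trade-off space between privacy, bandwidth consumption, and computation overhead that the abstract highlights for the three proposed grouping methods.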


Published in

EdgeSys '21: Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking
April 2021, 84 pages
ISBN: 9781450382915
DOI: 10.1145/3434770

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States




Acceptance Rates

Overall acceptance rate: 10 of 23 submissions, 43%
