DOI: 10.1145/3457388.3458655

TEA-fed: time-efficient asynchronous federated learning for edge computing

Published: 11 May 2021

Abstract

Federated learning (FL) has attracted growing attention in recent years, and integrating FL with edge computing makes edge systems more efficient and intelligent. In FL, the server typically selects a subset of edge devices to participate in global model training. However, the selected devices may be stragglers, or may even crash during training, while the unselected idle devices remain underutilized. Therefore, in addition to the widely studied communication-efficiency and data-heterogeneity issues in FL, we also take this time-efficiency problem into consideration and propose a time-efficient asynchronous federated learning protocol, TEA-Fed, to address it. With TEA-Fed, idle edge devices actively apply for training tasks and, once assigned a task, participate in model training asynchronously. Because edge computing may involve a huge number of devices, we introduce control parameters that limit how many devices train the same model simultaneously. We also introduce a caching mechanism and staleness-aware weighted averaging in the model-aggregation step, which reduce the adverse effects of model staleness and further improve the accuracy of the global model. Finally, the experimental results show that the protocol accelerates the convergence of model training, improves accuracy, and is robust to heterogeneous data.
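As a rough illustration of the staleness-aware weighted averaging described in the abstract, the sketch below mixes an asynchronously arriving client update into the global model with a weight that decays as the update grows staler. All names and the specific decay function here are illustrative assumptions, not the paper's exact formulation.

```python
def staleness_weight(staleness, alpha=0.5):
    """Mixing weight for a client update; decays as staleness grows.

    `alpha` is the base weight for a fresh update (staleness 0).
    The reciprocal decay is one simple hedged choice, not TEA-Fed's
    published function.
    """
    return alpha / (1.0 + staleness)

def aggregate(global_model, client_update, client_round, server_round):
    """Blend a possibly stale client update into the global model.

    `client_round` is the global-model version the client trained on;
    `server_round` is the server's current version, so their difference
    is the update's staleness.
    """
    staleness = server_round - client_round
    w = staleness_weight(staleness)
    # Convex combination: stale updates move the global model less.
    return [(1.0 - w) * g + w * c for g, c in zip(global_model, client_update)]
```

For example, an update trained on round 5 and received at round 6 has staleness 1 and is mixed in with weight 0.25, whereas a fresh update would be mixed in with weight 0.5.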




Published In

CF '21: Proceedings of the 18th ACM International Conference on Computing Frontiers
May 2021
254 pages
ISBN:9781450384049
DOI:10.1145/3457388

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. edge computing
  2. federated learning
  3. machine learning
  4. time efficiency

Qualifiers

  • Research-article

Funding Sources

  • A Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD)
  • JSPS KAKENHI
  • Collaborative Innovation Center of Novel Software Technology and Industrialization
  • China Postdoctoral Science Foundation

Conference

CF '21: Computing Frontiers Conference
May 11-13, 2021
Virtual Event, Italy

Acceptance Rates

Overall Acceptance Rate 273 of 785 submissions, 35%


Cited By

  • (2025) Weighted Average Consensus Algorithms in Distributed and Federated Learning. IEEE Transactions on Network Science and Engineering 12, 2, 1369-1382. DOI: 10.1109/TNSE.2025.3528982. Online publication date: Mar-2025.
  • (2025) A decentralized asynchronous federated learning framework for edge devices. Future Generation Computer Systems 166, 107683. DOI: 10.1016/j.future.2024.107683. Online publication date: May-2025.
  • (2024) Communication Efficiency and Non-Independent and Identically Distributed Data Challenge in Federated Learning: A Systematic Mapping Study. Applied Sciences 14, 7, 2720. DOI: 10.3390/app14072720. Online publication date: 24-Mar-2024.
  • (2024) Weight-Based Privacy-Preserving Asynchronous SplitFed for Multimedia Healthcare Data. ACM Transactions on Multimedia Computing, Communications, and Applications 20, 12, 1-24. DOI: 10.1145/3695876. Online publication date: 21-Nov-2024.
  • (2024) On the Impact of Heterogeneity on Federated Learning at the Edge with DGA Malware Detection. Proceedings of the Asian Internet Engineering Conference 2024, 10-17. DOI: 10.1145/3674213.3674215. Online publication date: 9-Aug-2024.
  • (2024) Spyker: Asynchronous Multi-Server Federated Learning for Geo-Distributed Clients. Proceedings of the 25th International Middleware Conference, 367-378. DOI: 10.1145/3652892.3700778. Online publication date: 2-Dec-2024.
  • (2024) SimProx: A Similarity-Based Aggregation in Federated Learning With Client Weight Optimization. IEEE Open Journal of the Communications Society 5, 7806-7817. DOI: 10.1109/OJCOMS.2024.3513816. Online publication date: 2024.
  • (2024) Federated Learning: Challenges, SoTA, Performance Improvements and Application Domains. IEEE Open Journal of the Communications Society 5, 5933-6017. DOI: 10.1109/OJCOMS.2024.3458088. Online publication date: 2024.
  • (2024) Towards Efficient Asynchronous Federated Learning in Heterogeneous Edge Environments. IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, 2448-2457. DOI: 10.1109/INFOCOM52122.2024.10621333. Online publication date: 20-May-2024.
  • (2024) Asynchronous Federated Learning Through Online Linear Regressions. IEEE Access 12, 195131-195144. DOI: 10.1109/ACCESS.2024.3521009. Online publication date: 2024.
