
BOSE: Block-Wise Federated Learning in Heterogeneous Edge Computing


Abstract:

At the network edge, federated learning (FL) has gained attention as a promising approach for training deep learning (DL) models collaboratively across a large number of devices while preserving user privacy. However, FL still faces challenges arising from the limited, heterogeneous, and dynamic resources of devices. In most FL systems, all devices train the same model, and devices with constrained resources, referred to as stragglers, significantly slow down the overall training process. An intuitive remedy is to alleviate the computation and communication load on stragglers by training and transmitting only part of the model. Inspired by multi-exit models, we divide an original DL model into several non-overlapping blocks, which can be trained separately on low-capability devices. Building on this, we propose BOSE, a novel FL system that performs adaptive block-wise model training under resource constraints. Because different blocks contribute differently to model convergence and incur different training loads, a naive block assignment strategy, e.g., uniformly random assignment, may not yield optimal model performance and may fail to fully utilize available resources. To this end, we introduce two metrics, learning speed and device-wise divergence, to measure the potential of blocks in promoting model convergence. Given a resource budget, BOSE first identifies a set of candidate blocks for each device and then selects the specific training blocks based on their potential for promoting model convergence; in general, blocks with higher potential are more likely to be chosen for training. Extensive experiments on a physical platform show that BOSE provides a 1.4×-3.8× speedup without sacrificing model accuracy, compared to the baselines.
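The abstract describes two mechanisms: partitioning a model into non-overlapping blocks, and selecting each device's training blocks with a bias toward blocks of higher convergence potential, subject to a per-device resource budget. Below is a minimal illustrative sketch of these two ideas, assuming a PyTorch sequential model; the function names (split_into_blocks, select_blocks), the weighted-sampling scheme, and the example cost/budget values are illustrative assumptions, not BOSE's actual algorithm.

```python
import random
import torch.nn as nn

def split_into_blocks(model: nn.Sequential, num_blocks: int) -> list[nn.Sequential]:
    """Divide a sequential model into non-overlapping blocks of layers
    (hypothetical partitioning; BOSE's exact block boundaries may differ)."""
    layers = list(model.children())
    size = -(-len(layers) // num_blocks)  # ceiling division
    return [nn.Sequential(*layers[i:i + size]) for i in range(0, len(layers), size)]

def select_blocks(potentials: list[float], costs: list[float], budget: float) -> list[int]:
    """Pick training blocks for one device under its resource budget.
    Uses Efraimidis-Spirakis weighted random ordering, so blocks with
    higher potential (> 0) are more likely, but not guaranteed, to be
    chosen -- matching the abstract's "more likely to be chosen"."""
    order = sorted(range(len(potentials)),
                   key=lambda b: random.random() ** (1.0 / potentials[b]),
                   reverse=True)
    chosen, used = [], 0.0
    for b in order:
        if used + costs[b] <= budget:
            chosen.append(b)
            used += costs[b]
    return sorted(chosen)

# Example: an 8-layer MLP split into 4 blocks; a straggler whose budget
# covers only part of the model (all numbers here are made up).
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(8)])
blocks = split_into_blocks(model, num_blocks=4)
picked = select_blocks(potentials=[0.9, 0.4, 0.7, 0.2],
                       costs=[1.0, 1.0, 1.5, 0.5],
                       budget=2.0)
print(f"device trains blocks {picked} this round")
```

Randomized, potential-weighted selection (rather than always taking the top-scoring blocks) keeps coverage across all blocks over many rounds, which is one plausible reading of the abstract's selection policy.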
Published in: IEEE/ACM Transactions on Networking (Volume: 32, Issue: 2, April 2024)
Page(s): 1362 - 1377
Date of Publication: 05 October 2023

