
Adaptive Block-Wise Regularization and Knowledge Distillation for Enhancing Federated Learning


Abstract:

Federated Learning (FL) is a distributed model training framework that allows multiple clients to collaborate on training a global model without disclosing their local data in edge computing (EC) environments. However, FL usually faces statistical heterogeneity (e.g., non-IID data) and system heterogeneity (e.g., varied computing and communication capabilities), which degrade model training performance. To address these two challenges, we propose an efficient FL framework, named FedBR, which integrates block-wise regularization and knowledge distillation (KD) into the pioneering FL algorithm FedAvg for resource-constrained edge computing. Specifically, we first divide the model into multiple blocks according to the layer order of the deep neural network (DNN). For communication efficiency, the server sends only some consecutive model blocks, rather than the entire model, to clients. The clients then use knowledge distillation to absorb the knowledge of the global model blocks during local training, alleviating statistical heterogeneity. We provide a theoretical convergence guarantee for FedBR and show that the convergence bound decreases as the number of model blocks sent by the server increases. However, since sending more model blocks incurs higher computing and communication costs, we design a heuristic algorithm (GMBS) to determine the appropriate number of model blocks for each client according to its data distribution and its computing and communication capabilities. Extensive experimental results show that FedBR reduces bandwidth consumption by about 31% and achieves an average accuracy improvement of around 5.6% over the baselines under heterogeneous settings.
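
The abstract describes FedBR's mechanics at a high level: split the DNN into consecutive blocks, send only a prefix of blocks to each client, and distill the global blocks' knowledge into the local model during training. The PyTorch sketch below is one plausible reading of that loop, assuming a sequential classifier and an MSE feature-matching distillation loss; the helper names (split_into_blocks, local_update), the hyperparameters (kd_weight), and the loss choice are illustrative assumptions, not the paper's actual algorithm (the exact KD objective and the GMBS block-selection heuristic are not specified in the abstract).

import torch
import torch.nn as nn
import torch.nn.functional as F

def split_into_blocks(model: nn.Sequential, num_blocks: int):
    # Partition a sequential DNN into consecutive blocks by layer order.
    layers = list(model.children())
    per_block = -(-len(layers) // num_blocks)  # ceiling division
    return [nn.Sequential(*layers[i:i + per_block])
            for i in range(0, len(layers), per_block)]

def local_update(local_model, global_blocks, loader,
                 num_blocks=4, kd_weight=0.5, lr=0.01, epochs=1):
    # One round of client-side training (hypothetical sketch).
    # `global_blocks` is the prefix of consecutive global blocks the server
    # chose to send; they act as frozen teachers for the matching local blocks.
    local_blocks = split_into_blocks(local_model, num_blocks)
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    for blk in global_blocks:
        blk.eval()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            h_local, h_global, kd_loss = x, x, 0.0
            for i, blk in enumerate(local_blocks):
                h_local = blk(h_local)
                if i < len(global_blocks):
                    with torch.no_grad():
                        h_global = global_blocks[i](h_global)
                    # Block-wise regularizer: pull the local block's
                    # representation toward the global block's output.
                    kd_loss = kd_loss + F.mse_loss(h_local, h_global)
            # Assumes the final block emits class logits.
            loss = F.cross_entropy(h_local, y) + kd_weight * kd_loss
            loss.backward()
            opt.step()

# Hypothetical usage: the server sends the first 2 of 4 global blocks.
# global_blocks = split_into_blocks(global_model, 4)[:2]
# local_update(local_model, global_blocks, train_loader)

In this reading, transmitting only a prefix of blocks is what yields the bandwidth savings the abstract reports, while the block-wise KD term counteracts client drift under non-IID data.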
Published in: IEEE/ACM Transactions on Networking (Volume: 32, Issue: 1, February 2024)
Page(s): 791-805
Date of Publication: 14 August 2023
