Abstract:
Federated Learning (FL) has emerged as an efficient distributed model training framework that enables multiple clients to cooperatively train a global model without exposing their local data in edge computing (EC). However, FL usually faces statistical heterogeneity (e.g., non-IID data) and system heterogeneity (e.g., varied computing and communication capabilities), which degrade model training performance. To address these two challenges, we propose an efficient FL framework, named FedBR, which integrates block-wise regularization and knowledge distillation (KD) into the pioneering FL algorithm FedAvg for resource-constrained edge computing. In addition, we design a heuristic algorithm (GMBS) that determines the appropriate number of model blocks for each client according to its data distribution and its computing and communication capabilities. Extensive experimental results show that, under heterogeneous settings, FedBR reduces the time cost by 19.5% and the communication cost by 27% on average compared with three baselines when reaching the target test accuracy.
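To make the abstract's high-level idea concrete, the sketch below shows one plausible form of a FedBR-style client update: the model is split into sequential blocks, a client trains only its first num_blocks blocks (a budget that GMBS would choose from the client's data distribution and capabilities), each trained block is pulled toward the corresponding global block by an L2 regularizer, and a KD term distills the frozen global model's logits. This is a minimal sketch under stated assumptions; all names (BlockNet, local_update, mu, beta, temp) and the exact loss form are illustrative, not the paper's verbatim objective.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


class BlockNet(nn.Module):
    """Toy model organized as a sequence of blocks plus a classifier head."""

    def __init__(self, in_dim=32, hidden=64, n_classes=10, n_blocks=4):
        super().__init__()
        dims = [in_dim] + [hidden] * n_blocks
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
            for i in range(n_blocks)
        )
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        for blk in self.blocks:
            x = blk(x)
        return self.head(x)


def local_update(global_model, loader, num_blocks, mu=0.01, beta=0.5,
                 temp=2.0, lr=0.05, epochs=1):
    """One client's round: train the first `num_blocks` blocks, freeze the rest."""
    model = copy.deepcopy(global_model)
    global_model.eval()

    # Freeze blocks beyond the client's budget; they keep the global weights.
    for i, blk in enumerate(model.blocks):
        for p in blk.parameters():
            p.requires_grad = i < num_blocks
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(trainable, lr=lr)

    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            logits = model(x)
            loss = F.cross_entropy(logits, y)

            # Block-wise regularization: pull each trained block toward
            # the corresponding block of the global model (assumed L2 form).
            for i in range(num_blocks):
                for p, g in zip(model.blocks[i].parameters(),
                                global_model.blocks[i].parameters()):
                    loss = loss + (mu / 2) * (p - g.detach()).pow(2).sum()

            # Knowledge distillation from the frozen global model's logits.
            with torch.no_grad():
                t_logits = global_model(x)
            kd = F.kl_div(F.log_softmax(logits / temp, dim=1),
                          F.softmax(t_logits / temp, dim=1),
                          reduction="batchmean") * temp ** 2
            loss = loss + beta * kd

            loss.backward()
            opt.step()
    return model.state_dict()


if __name__ == "__main__":
    torch.manual_seed(0)
    g = BlockNet()
    data = TensorDataset(torch.randn(128, 32), torch.randint(0, 10, (128,)))
    # A capable client might train all blocks; a constrained one fewer (here 2).
    new_state = local_update(g, DataLoader(data, batch_size=32), num_blocks=2)
```

In this reading, the per-client block budget is the knob that trades accuracy for computation and communication, which is consistent with the abstract's claim that GMBS adapts the number of blocks to heterogeneous client capabilities.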
Date of Conference: 19-21 June 2023
Date Added to IEEE Xplore: 27 July 2023