Abstract:
Federated learning (FL) is a distributed training paradigm in which the training of a machine learning model is coordinated by a central parameter server (PS) while the data remains distributed across multiple edge devices. FL has therefore received considerable attention, as it allows models to be trained while preserving data privacy and security. In practice, the performance bottleneck is the link capacity from each edge device to the PS. To satisfy stringent link capacity constraints, model updates must be compressed rather aggressively at the edge devices. In this paper, we propose a low-rate universal vector quantizer that attains low or even fractional-rate compression. Our scheme consists of two steps: (i) model update pre-processing and (ii) vector quantization using a universal trellis coded quantizer (TCQ). In the pre-processing step, model updates are sparsified and scaled to match the TCQ design. The quantization step then uses TCQ, which allows a fractional compression rate and has a flexible input size so that it can be adapted to different neural network layers. Simulations show that our vector quantizer can save 75% of the link capacity while achieving accuracy competitive with other compressors proposed in the literature.
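The two-step scheme described above (sparsify and scale the model update, then quantize with a trellis coded quantizer) can be sketched roughly as follows. This is a minimal illustration only: it assumes top-k sparsification, a simple mean-magnitude scale factor, a 4-state trellis with an illustrative subset mapping, and a 1 bit/sample codebook; none of these specific choices are taken from the paper itself.

```python
import numpy as np


def preprocess(update, k_frac=0.01):
    """Illustrative pre-processing: top-k sparsification followed by scaling.

    Keeps the k largest-magnitude entries of the flattened model update and
    rescales them so the nonzero values roughly match the quantizer's
    dynamic range.  The paper's exact sparsification/scaling rule is not
    reproduced here; this is a generic stand-in.
    """
    flat = update.ravel()
    k = max(1, int(k_frac * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]      # indices of top-k magnitudes
    values = flat[idx]
    scale = float(np.abs(values).mean())               # simple per-update scale factor
    scale = scale if scale > 0 else 1.0
    return idx, values / scale, scale                  # transmit indices, scaled values, scale


def tcq_encode(x, levels=(-1.5, -0.5, 0.5, 1.5)):
    """Minimal 4-state trellis coded quantizer at 1 bit/sample (Viterbi search).

    `levels` is an illustrative 4-point codebook partitioned into four
    one-point subsets D0..D3; each trellis branch is labelled with one subset
    and the Viterbi algorithm selects the branch sequence (1 bit per sample)
    that minimises total squared error.
    """
    n_states = 4

    def next_state(s, b):          # 2-bit shift-register trellis
        return ((s << 1) | b) & 3

    def subset(s, b):              # illustrative branch-to-subset mapping
        return ((s & 1) << 1) | b

    T = len(x)
    cost = np.full(n_states, np.inf)
    cost[0] = 0.0
    back = np.zeros((T, n_states), dtype=np.int8)      # surviving input bit per state
    prev = np.zeros((T, n_states), dtype=np.int8)      # surviving predecessor state

    for t in range(T):
        new_cost = np.full(n_states, np.inf)
        for s in range(n_states):
            if not np.isfinite(cost[s]):
                continue
            for b in (0, 1):
                ns = next_state(s, b)
                d = cost[s] + (x[t] - levels[subset(s, b)]) ** 2
                if d < new_cost[ns]:
                    new_cost[ns] = d
                    back[t, ns] = b
                    prev[t, ns] = s
        cost = new_cost

    # Trace back the best path to recover the bitstream and reconstruction levels.
    s = int(np.argmin(cost))
    bits, recon = [], []
    for t in range(T - 1, -1, -1):
        b, p = int(back[t, s]), int(prev[t, s])
        bits.append(b)
        recon.append(levels[subset(p, b)])
        s = p
    return bits[::-1], np.array(recon[::-1])
```

A fractional rate per model-update coordinate then follows naturally: only the k sparsified values are passed through the quantizer, so the average number of bits per original coordinate is roughly k_frac times the per-sample TCQ rate (plus the cost of the indices and the scale factor).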
Date of Conference: 09-13 June 2024
Date Added to IEEE Xplore: 12 August 2024