
FedAQT: Accurate Quantized Training with Federated Learning


Abstract:

Federated learning (FL) has been widely used to train neural networks with a decentralized training procedure in which data is accessed only on clients' devices to preserve privacy. However, the limited computation resources on clients' devices prevent FL from being applied to large models. One way to overcome this constraint is to reduce memory usage with quantized neural networks, for example via quantization-aware training as performed on a centralized server. However, directly applying quantization-aware methods does not reduce memory consumption on clients' devices in FL, because the full-precision model is still used in the forward propagation of the model computation. To enable FL of Conformer-based ASR models, we propose FedAQT, an accurate quantized training framework for FL that trains with quantized variables directly on clients' devices. We empirically show that our method achieves comparable WER with only 60% of the memory of the full-precision model.
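The following is a minimal, illustrative sketch (not the authors' implementation) of the distinction the abstract draws: standard quantization-aware training keeps full-precision weights resident on the device and therefore saves no client memory, whereas training directly on quantized variables stores the weights in low precision. All names and the 8-bit setting are assumptions for illustration.

```python
# Hypothetical sketch contrasting fake-quantization (QAT-style) with
# storing layer weights directly as int8, as in quantized-variable training.
import numpy as np


def fake_quantize(w, num_bits=8):
    """QAT-style fake quantization: quantize/dequantize values, but the
    weight tensor stays float32 in memory, so no client-side memory is saved."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale  # still float32


class QuantizedLinear:
    """Stores weights as int8 plus a float scale, so the forward pass on a
    client device never keeps a persistent full-precision copy of the layer."""

    def __init__(self, w_fp32, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        self.scale = np.max(np.abs(w_fp32)) / qmax
        self.w_q = np.round(w_fp32 / self.scale).clip(-qmax, qmax).astype(np.int8)

    def forward(self, x):
        # Dequantize on the fly; only the int8 tensor is held persistently.
        return x @ (self.w_q.astype(np.float32) * self.scale)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((512, 512)).astype(np.float32)
    x = rng.standard_normal((4, 512)).astype(np.float32)

    y_qat = x @ fake_quantize(w)             # float32 weights still resident
    y_int8 = QuantizedLinear(w).forward(x)   # weights stored as int8
    print("max abs diff:", np.max(np.abs(y_qat - y_int8)))
    print("float32 bytes:", w.nbytes, "int8 bytes:", w.astype(np.int8).nbytes)
```

Running the sketch shows the two forward passes agree numerically while the int8 storage uses a quarter of the weight memory; FedAQT's reported 60% figure applies to the full training footprint of the Conformer ASR model, not to this toy layer.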
Date of Conference: 14-19 April 2024
Date Added to IEEE Xplore: 18 March 2024
Conference Location: Seoul, Korea, Republic of
