Abstract:
Federated learning (FL) enables multiple users to collaboratively train a shared model while protecting user privacy. In this paper, we investigate the transmission delay minimization problem for non-orthogonal multiple access (NOMA)-assisted FL. We analyze the convergence rate of heterogeneous quantized FL to demonstrate that the minimum quantization level among scheduled users is crucial in controlling the trade-off between the number of training rounds and the transmission delay of each round. Based on the convergence analysis, we formulate a delay minimization problem for NOMA-assisted FL and propose a communication-efficient heterogeneous-compression NOMA scheme for FL. Subsequently, we develop a block coordinate descent (BCD)-based algorithm that jointly optimizes the subchannel allocation, power allocation, and quantization level for each scheduled user. Results reveal that our proposed algorithm significantly reduces the transmission delay while achieving the same learning performance as conventional FL algorithms.
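To make the heterogeneous quantization step concrete, the sketch below implements a QSGD-style stochastic uniform quantizer, a standard choice in quantized-FL analyses; the paper's exact compression scheme may differ, so treat this as an illustration under that assumption. The function name `stochastic_quantize`, the per-user level list `levels`, and the simple server-side averaging are hypothetical and chosen for clarity only.

```python
import numpy as np

def stochastic_quantize(x, s):
    """QSGD-style stochastic uniform quantizer with s levels.

    Each coordinate is mapped to one of s+1 points in [0, ||x||],
    keeping its sign; rounding probabilities are chosen so that
    E[Q(x)] = x (the quantizer is unbiased).
    """
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x
    scaled = np.abs(x) / norm * s            # position in [0, s]
    lower = np.floor(scaled)
    prob_up = scaled - lower                 # probability of rounding up
    level = lower + (np.random.rand(*x.shape) < prob_up)
    return np.sign(x) * norm * level / s

np.random.seed(0)

# Hypothetical setup: each scheduled user k compresses its local model
# update with its own quantization level s_k before uplink transmission.
true_update = np.random.randn(10_000)
levels = [2, 4, 16]                          # heterogeneous quantization levels

quantized = [stochastic_quantize(true_update, s) for s in levels]
aggregate = np.mean(quantized, axis=0)       # server-side averaging

for s, q in zip(levels, quantized):
    err = np.linalg.norm(q - true_update) / np.linalg.norm(true_update)
    print(f"s={s:>2}: relative quantization error {err:.3f}")
```

Coarser levels (smaller s) shrink each round's uplink payload but inflate the quantization error, which is precisely the trade-off the convergence analysis ties to the minimum quantization level among scheduled users.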
Date of Conference: 21-24 April 2024
Date Added to IEEE Xplore: 03 July 2024