Abstract:
The disaggregated and hierarchical architecture of the Open Radio Access Network (ORAN), together with its openness paradigm, promises to deliver ever-demanding 5G services. Meanwhile, it also poses new challenges for the efficient deployment of Machine Learning (ML) models. Although ORAN has been designed with built-in RAN Intelligent Controllers (RICs) capable of training ML models, traditional centralized learning methods may no longer be appropriate for the RICs due to privacy issues, computational burden, and communication overhead. Recently, Federated Learning (FL), a powerful distributed ML training paradigm, has emerged as a new solution for training models in ORAN systems. 5G use cases such as meeting network slice Service Level Agreements (SLAs) and Key Performance Indicator (KPI) monitoring for smart radio resource management can greatly benefit from FL models. However, training FL models efficiently in an ORAN system is challenging due to the stringent deadlines of ORAN control loops, expensive compute resources, and limited communication bandwidth. Moreover, to deliver the required Grade of Service (GoS), the trained ML models must converge with acceptable accuracy. In this paper, we propose a second-order gradient-descent-based FL training method named MCORANFed that uses compression techniques to minimize communication cost while converging faster than state-of-the-art FL variants. We formulate a joint optimization problem to minimize the overall resource cost and learning time, and solve it via a decomposition method. Our experimental results show that MCORANFed is communication-efficient in the ORAN setting and outperforms FL methods such as MFL, FedAvg, and ORANFed in terms of cost and convergence rate.
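To illustrate the general idea of communication-compressed federated training described above (not the authors' actual MCORANFed algorithm, whose second-order updates and cost model are defined in the paper), the following is a minimal sketch of one FL round in which each client sparsifies its local gradient with top-k compression before the server aggregates. All function names, the least-squares objective, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def topk_compress(grad, k):
    """Illustrative compression: keep the k largest-magnitude entries,
    zero the rest, so only k (index, value) pairs need transmitting."""
    idx = np.argsort(np.abs(grad))[-k:]
    out = np.zeros_like(grad)
    out[idx] = grad[idx]
    return out

def local_grad(w, X, y):
    """Gradient of a simple least-squares loss on one client's data
    (stand-in for whatever local objective a RIC xApp would train)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fl_round(w, clients, k, lr=0.1):
    """One synchronous FL round: each client computes and compresses
    its gradient; the server averages them and takes a descent step."""
    compressed = [topk_compress(local_grad(w, X, y), k) for X, y in clients]
    return w - lr * np.mean(compressed, axis=0)
```

With top-k sparsification each uplink carries k entries instead of the full gradient, which is the communication-vs-convergence trade-off the abstract refers to; the server-side update remains a plain averaged descent step.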
Published in: IEEE/ACM Transactions on Networking ( Volume: 32, Issue: 4, August 2024)