Abstract:
In a vehicular communication network, efficiently utilizing the limited network resources to meet stringent transmission quality of service (QoS) requirements in a dynamically changing environment is a challenging task. This paper studies a typical vehicle-to-vehicle (V2V) communication scenario in which multiple source nodes need to deliver different types of messages to their respective destinations. When accurate instantaneous global channel state information (CSI) is unavailable, properly selecting the access channel, transmission power, and data rate at each source to optimize link performance is a highly involved mixed-integer stochastic optimization problem. We propose a distributed multi-agent deep reinforcement learning approach to solve this problem. Specifically, a distributed multi-agent parameterized deep Q-network (DMA-PDQN) algorithm is employed to search for optimal decisions in the discrete-continuous hybrid action space. The idea of federated meta learning (FML) is also adopted to tackle the non-stationarity issue and improve the training efficiency of the distributed agents. Using energy efficiency (EE) as the transmission objective, we show through extensive simulations that our method achieves excellent convergence and performance.
Published in: IEEE Transactions on Vehicular Technology (Volume: 73, Issue: 3, March 2024)
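
The abstract does not give implementation details, but the hybrid action-space idea it builds on (a parameterized deep Q-network, where a discrete choice such as the access channel is paired with continuous parameters such as transmit power and data rate) can be illustrated with a short sketch. The following is a minimal, hedged example in PyTorch: the state dimension, number of channels, network sizes, and action scaling are illustrative assumptions, not values or architecture from the paper's DMA-PDQN.

```python
# Minimal sketch of a parameterized deep Q-network (P-DQN) style agent for a
# discrete-continuous hybrid action: pick a channel (discrete) together with a
# transmit power and data rate (continuous). All dimensions are assumed for
# illustration and are NOT taken from the paper.
import torch
import torch.nn as nn

STATE_DIM = 16          # assumed local observation size (e.g., interference, queue state)
NUM_CHANNELS = 4        # assumed number of selectable access channels
PARAMS_PER_ACTION = 2   # (power, rate) attached to each channel choice

class ParamActor(nn.Module):
    """Outputs the continuous parameters for every discrete action (channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_CHANNELS * PARAMS_PER_ACTION), nn.Sigmoid(),  # normalized to [0, 1]
        )

    def forward(self, state):
        return self.net(state).view(-1, NUM_CHANNELS, PARAMS_PER_ACTION)

class QNetwork(nn.Module):
    """Scores each discrete action given the state and all continuous parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_CHANNELS * PARAMS_PER_ACTION, 128), nn.ReLU(),
            nn.Linear(128, NUM_CHANNELS),
        )

    def forward(self, state, params):
        x = torch.cat([state, params.flatten(start_dim=1)], dim=-1)
        return self.net(x)  # one Q-value per channel

def select_action(actor, q_net, state, eps=0.1):
    """Hybrid action selection: epsilon-greedy channel over Q, plus its parameters."""
    with torch.no_grad():
        params = actor(state)                      # (1, K, 2) continuous proposals
        q_values = q_net(state, params)            # (1, K)
        if torch.rand(1).item() < eps:
            channel = torch.randint(NUM_CHANNELS, (1,)).item()
        else:
            channel = q_values.argmax(dim=-1).item()
        power, rate = params[0, channel].tolist()  # parameters tied to the chosen channel
    return channel, power, rate

# Example: one decision step for a single agent (source node).
state = torch.randn(1, STATE_DIM)
channel, power, rate = select_action(ParamActor(), QNetwork(), state)
print(f"channel={channel}, normalized power={power:.2f}, normalized rate={rate:.2f}")
```

In a multi-agent, federated setup such as the one the abstract describes, each source node would run its own copy of such networks and periodically aggregate model parameters; that aggregation and the meta-learning adaptation are not shown here.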