Abstract:
Unmanned aerial vehicle (UAV)-mounted aerial base stations (ABSs) can provide on-demand coverage in next-generation mobile communication systems. However, resource allocation for ABSs to provide continuous coverage is challenging, since the high mobility of ABSs and the time-varying air-to-ground channel cause channel state information (CSI) mismatch between the resource allocation decision and its implementation. As a consequence, the coverage of ABSs becomes discontinuous in the spatial-temporal dimensions, i.e., the variance of user rates between adjacent time slots is large. To ensure coverage continuity, we design a resource allocation method based on deep reinforcement learning (RDRL). By adaptively tuning its neural network structure, RDRL can satisfy coverage requirements by jointly allocating subchannels and power to ground users. Meanwhile, the temporal channel correlation is taken into account in the design of the RDRL reward function, which aims to alleviate the influence of the CSI mismatch between the method's decision and its implementation. Moreover, RDRL can apply a model pre-trained on a previous coverage requirement to the current requirement to reduce computational complexity. Experimental results show that, compared with benchmark algorithms, RDRL reduces the rate variance by 66.7% and increases the spectral efficiency by 34.7%, thereby ensuring coverage continuity.
Published in: IEEE Transactions on Wireless Communications (Volume: 23, Issue: 2, February 2024)
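The abstract does not spell out the reward function, but the stated idea of rewarding spectral efficiency while penalizing slot-to-slot rate variance can be sketched as follows. This is a minimal illustrative sketch only: the function name `rdrl_reward`, the quadratic penalty, and the weights `alpha` and `beta` are assumptions, not the paper's actual design.

```python
import numpy as np

def rdrl_reward(rates_t, rates_prev, alpha=1.0, beta=0.5):
    """Illustrative reward for coverage continuity (assumed form).

    rates_t, rates_prev: per-user achievable rates at slots t and t-1.
    alpha, beta: assumed trade-off weights, not taken from the paper.
    """
    efficiency = np.sum(rates_t)  # aggregate-rate (spectral efficiency) term
    # Penalize large rate changes between adjacent slots, a proxy for the
    # coverage-discontinuity measure the abstract describes.
    continuity_penalty = np.mean((rates_t - rates_prev) ** 2)
    return alpha * efficiency - beta * continuity_penalty

# Example: three ground users across two adjacent time slots
r_prev = np.array([2.0, 1.5, 3.0])
r_t = np.array([2.2, 1.4, 2.9])
print(rdrl_reward(r_t, r_prev))
```

A reward of this shape would push a DRL agent toward allocations whose per-user rates stay stable across slots, which matches the abstract's goal of reducing rate variance between adjacent time slots.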