
Multi-Agent Deep Reinforcement Learning Based UAV Trajectory Optimization for Differentiated Services



Abstract:

Driven by the increasing computational demand of real-time mobile applications, Unmanned Aerial Vehicle (UAV) assisted Multi-access Edge Computing (MEC) has been envisioned as a promising paradigm for pushing computational resources to network edges and constructing high-throughput line-of-sight links for ground users. Most existing studies consider simplified scenarios, such as a single UAV, Service Provider (SP), or service type, and centralized UAV trajectory control. To better reflect real-world deployments, we aim to achieve distributed trajectory control of multiple UAVs in UAV-assisted MEC networks with multiple SPs providing differentiated services. Our objective is to simultaneously minimize the short-term computational costs of ground users and the long-term computational cost of UAVs, based on incomplete information. We first solve the formulated problem by reaching the Nash Equilibrium (NE) of the game among SPs under complete information. We then formulate a Markov game model and propose a Deep Reinforcement Learning (DRL)-based UAV trajectory optimization algorithm, in which each SP's flying action execution requires only the local observations of its UAV. Theoretical analysis and performance evaluation demonstrate the convergence, efficiency, scalability, and robustness of our algorithm compared with other representative algorithms.
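The decentralized execution described in the abstract can be illustrated with a minimal sketch: each SP's UAV agent selects a flying action from its own local observation only, with no global state shared between agents. The discrete action set, observation format, and epsilon-greedy rule below are illustrative assumptions, not the paper's actual algorithm, and the Q-table stands in for a trained DRL policy network.

```python
import random

# Discrete flying actions (an assumed action space for illustration).
ACTIONS = ["north", "south", "east", "west", "hover"]

class UAVAgent:
    """Toy agent acting only on its own local observation."""

    def __init__(self, agent_id, epsilon=0.1, seed=0):
        self.agent_id = agent_id
        self.epsilon = epsilon
        self.rng = random.Random(seed + agent_id)
        # A trained DRL network would score actions here; a stub
        # Q-table stands in for it in this sketch.
        self.q = {}

    def act(self, local_obs):
        """Pick a flying action using only this UAV's local observation."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(ACTIONS)  # explore
        qs = self.q.get(local_obs, {a: 0.0 for a in ACTIONS})
        return max(qs, key=qs.get)  # exploit the best known action

def joint_step(agents, observations):
    """Each agent acts independently; no agent sees another's observation."""
    return {ag.agent_id: ag.act(observations[ag.agent_id]) for ag in agents}

# Three SPs, each controlling one UAV with its own local observation.
agents = [UAVAgent(i) for i in range(3)]
obs = {i: ("cell", i) for i in range(3)}  # toy per-UAV local observations
actions = joint_step(agents, obs)
```

During training, each agent's Q-values (or policy network) would be updated from its own experience under the Markov game formulation; at execution time only `act` is needed, which is what makes the control distributed.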
Published in: IEEE Transactions on Mobile Computing ( Volume: 23, Issue: 5, May 2024)
Page(s): 5818 - 5834
Date of Publication: 05 September 2023


