Abstract:
This paper considers the cooperative search for stationary targets by multiple unmanned aerial vehicles (UAVs) with limited sensing range and communication ability in a dynamic, threatening environment. The main purpose is to use multiple UAVs to find more unknown targets as quickly as possible, increase the coverage rate of the mission area, and, more importantly, guide UAVs away from threats. However, traditional search methods are mostly unscalable and perform poorly in dynamic environments. A new multi-agent deep reinforcement learning (MADRL) method, DNQMIX, is proposed in this study to solve the multi-UAV cooperative target search (MCTS) problem. A reward function is also newly designed for the MCTS problem to guide UAVs to explore and exploit the environment information more efficiently. Moreover, this paper proposes a digital twin (DT) driven training framework, "centralized training, decentralized execution, and continuous evolution" (CTDECE). It facilitates the continuous evolution of MADRL models and addresses the tradeoff between training speed and environment fidelity when MADRL is applied to real-world multi-UAV systems. Simulation results show that DNQMIX outperforms state-of-the-art methods in terms of search rate and coverage rate.
Published in: IEEE Transactions on Vehicular Technology (Volume: 72, Issue: 7, July 2023)