Abstract:
The dynamic deployment of aerial vehicles in urban delivery scenarios demands precise route planning, reliable data links, and efficient use of network infrastructure. Although prior efforts have explored various aspects of unmanned navigation or communication optimization, existing approaches often overlook the combined impact of three-dimensional path selection and robust connectivity maintenance. To address this gap, this paper proposes a deep reinforcement learning framework employing Independent Proximal Policy Optimization (IPPO) to steer the flight paths of unmanned vehicles while curtailing needless interactions with base stations (BSs). By incorporating a communication-specific metric, Reference Signal Received Power (RSRP), the proposed system adapts to urban conditions and upholds resilient network links. Comprehensive performance evaluations reveal the potential of the framework to reduce excessive BS connections by up to 82.24%, preserve RSRP levels above -80 dBm, and balance handover frequencies across flight trajectories. These findings affirm the scalability and effectiveness of the method for achieving efficient aerial navigation and consistent communication in urban delivery services.
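The abstract reports an RSRP target of -80 dBm together with a penalty on excessive BS connections and handovers. A minimal sketch of how such objectives could be combined into a per-step reward is shown below; the function name, weights, and exact reward shape are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not taken from the paper): a per-step reward that
# favors keeping RSRP above the -80 dBm target cited in the abstract while
# penalizing each BS handover, in the spirit of the IPPO objective described.

RSRP_THRESHOLD_DBM = -80.0  # connectivity target reported in the abstract
HANDOVER_PENALTY = 1.0      # assumed weight; the paper's actual value is not given

def step_reward(rsrp_dbm: float, handover_occurred: bool) -> float:
    """Return +1 when the link stays at or above the RSRP threshold,
    otherwise a shortfall penalty scaled by the dBm deficit; each
    handover subtracts a fixed cost."""
    if rsrp_dbm >= RSRP_THRESHOLD_DBM:
        link_term = 1.0
    else:
        link_term = (rsrp_dbm - RSRP_THRESHOLD_DBM) / 10.0  # negative shortfall
    return link_term - (HANDOVER_PENALTY if handover_occurred else 0.0)
```

For example, a step at -75 dBm with no handover yields a reward of 1.0, while a step at -90 dBm that also triggers a handover yields -2.0, so the agent is pushed toward trajectories that hold the link and avoid needless BS switching.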
Published in: 2025 19th International Conference on Ubiquitous Information Management and Communication (IMCOM)
Date of Conference: 03-05 January 2025
Date Added to IEEE Xplore: 04 February 2025