Abstract:
The lifetime of a UAV-assisted wireless network is determined by the amount of energy consumed by the UAVs during flight, data collection, and transmission to the ground station. Routing protocols are commonly used for data transmission in a communication network. However, because of the mobility of UAVs, using a routing protocol with a single communication technology results in higher delay and greater energy consumption in a UAV-assisted wireless network. To overcome this, we propose two reinforcement learning (RL) algorithms, Q-learning and deep Q-network (DQN), for energy-efficient data transmission over a hybrid BLE/LTE/Wi-Fi/LoRa UAV-assisted wireless network. We consider BLE, LTE, Wi-Fi, and LoRa for communication over a UAV-GS link. The RL algorithms take any random network as input and learn the best policy to output a network configuration with lower energy consumption. The reward/penalty is chosen such that the configuration with the highest energy consumption is penalized and the one with the lowest is rewarded, thereby minimizing total network energy consumption. Based on this learning, the algorithm creates a hybrid BLE/LTE/Wi-Fi/LoRa UAV-assisted wireless network by assigning the best communication technology to each UAV-GS link. Further, we compare the performance of the proposed RL algorithms with a rule-based algorithm and a random hybrid scheme. In addition, we propose a theoretical framework for constructing the hybrid network under both the free-space and free-space multipath path loss models. We demonstrate the performance of the proposed work against the conventional shortest path routing algorithm in terms of network energy consumption and average network delay through extensive results. Finally, the effect of the velocity of the UAV and the number of packets on the performance of the proposed framework is analyzed.
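To make the idea concrete, the per-link technology assignment described above can be sketched as a tabular Q-learning loop. This is an illustrative sketch only: the paper's actual state/action spaces, energy model, and hyperparameters are not given in the abstract. Here we assume each UAV-GS link is a state, the action is the choice of communication technology, and the reward is the negative of a hypothetical per-technology energy cost, so lower-energy choices are reinforced.

```python
import random

TECHS = ["BLE", "LTE", "Wi-Fi", "LoRa"]
# Hypothetical per-packet energy cost (J) for each technology; placeholder
# values, not taken from the paper.
ENERGY = {"BLE": 0.2, "LTE": 1.5, "Wi-Fi": 0.8, "LoRa": 0.4}

def q_learning(num_links=3, episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q[link][tech]: estimated value of using tech on that UAV-GS link.
    Q = [{t: 0.0 for t in TECHS} for _ in range(num_links)]
    for _ in range(episodes):
        for link in range(num_links):
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                tech = rng.choice(TECHS)
            else:
                tech = max(Q[link], key=Q[link].get)
            reward = -ENERGY[tech]  # penalize high energy consumption
            # one-step (bandit-style) Q update; no next-state term, since
            # each link's assignment is treated independently in this sketch
            Q[link][tech] += alpha * (reward - Q[link][tech])
    # greedy policy: best technology per link
    return [max(Q[link], key=Q[link].get) for link in range(num_links)]
```

Under these placeholder costs the learned policy converges to the lowest-energy technology on every link; in the paper's setting the reward would instead reflect the full flight, collection, and transmission energy model, so different links can end up with different technologies.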
Published in: IEEE/ACM Transactions on Networking (Volume: 32, Issue: 3, June 2024)