Abstract:
In the Internet of Autonomous Vehicles (IoAV), task offloading is a method for handling computationally intensive tasks and ensuring the safe operation of vehicles. However, under extreme weather conditions, the number of such tasks increases significantly, posing higher risks and challenges. To mitigate these risks and keep vehicles operating safely, it is therefore crucial to make quick and effective decisions during the task offloading process. Most current methods in this domain rely on deep reinforcement learning (DRL), but the large number of parameters in deep networks leads to long decision times and heavy consumption of computational resources. To address this problem, this article proposes a task offloading scheme named dynamic pricing driven double broad reinforcement learning (DP-DBRL), which utilizes double broad reinforcement learning (DBRL) to reduce model memory consumption and decision time. In addition, it accounts for the high-speed mobility of vehicles and the variability of resources to devise a more efficient dynamic pricing scheme that minimizes the overall task processing delay for vehicles. To validate the proposed scheme, we conduct simulations on the VISSIM platform and emulate task offloading under extreme weather conditions by randomly degrading factors such as the transmission rate and task execution efficiency of roadside infrastructure and vehicles. Finally, we deploy the proposed scheme both locally and on a Huawei Atlas 500 device to demonstrate its effectiveness and lightweight nature.
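The memory and decision-time advantage claimed for DBRL stems from the broad learning system's wide, shallow structure, in which the input-side weights are fixed at random and only a single output layer is trained in closed form. The Python sketch below illustrates this idea with a broad-learning-style Q-value approximator for discrete offloading actions; it is a minimal illustration under assumed details (the class name BroadQNetwork, node counts, and ridge-regression update are illustrative), not the authors' DP-DBRL implementation.

import numpy as np

rng = np.random.default_rng(0)

class BroadQNetwork:
    def __init__(self, state_dim, n_actions, n_feature=64, n_enhance=128, lam=1e-2):
        # Input-side weights are random and fixed; only the output weights
        # are trained, which keeps parameter count and inference time small.
        self.Wf = rng.standard_normal((state_dim, n_feature))   # feature nodes
        self.We = rng.standard_normal((n_feature, n_enhance))   # enhancement nodes
        self.lam = lam
        self.Wout = np.zeros((n_feature + n_enhance, n_actions))

    def _expand(self, states):
        # Broad expansion: linear feature nodes plus nonlinear enhancement nodes.
        Z = states @ self.Wf
        H = np.tanh(Z @ self.We)
        return np.hstack([Z, H])

    def q_values(self, states):
        return self._expand(states) @ self.Wout

    def fit(self, states, q_targets):
        # Closed-form ridge regression for the output layer: no
        # backpropagation, hence far cheaper than training a deep network.
        A = self._expand(states)
        G = A.T @ A + self.lam * np.eye(A.shape[1])
        self.Wout = np.linalg.solve(G, A.T @ q_targets)

# Usage: fit Q-values for three hypothetical offloading choices
# (e.g., local execution, roadside unit, remote server).
net = BroadQNetwork(state_dim=8, n_actions=3)
states = rng.standard_normal((256, 8))
targets = rng.standard_normal((256, 3))   # placeholder TD targets
net.fit(states, targets)
action = int(np.argmax(net.q_values(states[:1])))

In a double-learning setup such as DBRL, one would maintain two such approximators and use one to select actions and the other to evaluate them; that pairing, like the rest of the sketch, is outlined here only to make the abstract's lightweight-model claim concrete.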
Published in: IEEE Internet of Things Journal (Volume 11, Issue 10, 15 May 2024)