
Deep Reinforcement Learning Based on Parked Vehicles-Assisted for Task Offloading in Vehicle Edge Computing



Abstract:

With the rapid development of on-board applications, vehicles may produce large amounts of data that is time-sensitive and computationally intensive. Because of the limited computing power and battery capacity of vehicles, this data cannot be processed in time. Vehicle edge computing (VEC), with its greater computing power, is increasingly used to solve this problem. This paper proposes a VEC task offloading model based on an improved Q-learning algorithm. First, we establish the system model. Because the system model environment is complex, we adopt a reinforcement learning (RL) algorithm; however, the state space and action space in RL are large, which leads to slow convergence. We therefore combine RL with deep learning (DL) to improve convergence efficiency: instead of maintaining a value-function table, we use deep reinforcement learning (DRL), training a neural network to approximate the value function. The solution procedure of the improved Q-learning algorithm is then introduced in detail. The simulation results demonstrate that as the number of training iterations increases, the training loss of the improved Q-learning algorithm gradually declines and converges toward zero, confirming that our proposed approach evaluates the system cost well. As the number of vehicles on the road increases, the time cost of the system also increases. We also compare the improved Q-learning strategy against the traditional Q-learning algorithm; the simulation results show that the improved algorithm converges much faster than the conventional method.
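The core idea described in the abstract, replacing a tabular Q-function with a trained function approximator for the offloading decision, can be illustrated with a minimal sketch. This is not the paper's algorithm: it uses a linear approximator in place of a deep network, one-step episodes, and purely illustrative cost parameters (LOCAL_CPU, EDGE_CPU, TX_RATE, and a fixed transmission setup overhead are all assumptions).

```python
import random

LOCAL_CPU = 1.0      # vehicle CPU speed (assumed, arbitrary units)
EDGE_CPU = 5.0       # edge-server CPU speed (assumed)
TX_RATE = 2.0        # uplink transmission rate (assumed)
TX_SETUP = 0.5       # fixed overhead for starting an offload (assumed)
ACTIONS = (0, 1)     # 0 = compute locally, 1 = offload to the edge

def cost(task_size, action):
    """Time cost of a task: local execution vs. transmit-then-execute."""
    if action == 0:
        return task_size / LOCAL_CPU
    return TX_SETUP + task_size / TX_RATE + task_size / EDGE_CPU

def features(task_size, action):
    """Hand-crafted features phi(s, a) standing in for a neural network."""
    return [1.0, task_size, float(action), task_size * action]

def q_value(w, task_size, action):
    """Parameterized Q(s, a) = w . phi(s, a), replacing the Q-table."""
    return sum(wi * fi for wi, fi in zip(w, features(task_size, action)))

def train(episodes=20000, alpha=0.01, seed=0):
    """Q-learning with function approximation on one-step episodes."""
    rng = random.Random(seed)
    w = [0.0] * 4
    for _ in range(episodes):
        s = rng.uniform(0.1, 4.0)        # random task size (state)
        a = rng.choice(ACTIONS)          # explore uniformly
        r = -cost(s, a)                  # reward = negative delay
        # One-step episode, so the TD target is just the reward.
        td_error = r - q_value(w, s, a)
        w = [wi + alpha * td_error * fi
             for wi, fi in zip(w, features(s, a))]
    return w

def policy(w, task_size):
    """Greedy offloading decision under the learned Q-function."""
    return max(ACTIONS, key=lambda a: q_value(w, task_size, a))
```

With these assumed costs, the learned policy keeps small tasks local (the transmission setup overhead dominates) and offloads large ones, which is the qualitative behavior the offloading model is designed to capture; updating a shared weight vector instead of one table entry per state is what lets the approach scale to the large state spaces the abstract mentions.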
Date of Conference: 19-23 June 2023
Date Added to IEEE Xplore: 21 July 2023
Conference Location: Marrakesh, Morocco
