DOI: 10.1145/3573942.3573987

Research on Task Offloading Based on Deep Reinforcement Learning for Internet of Vehicles

Published: 16 May 2023

Abstract

Mobile Edge Computing (MEC) is a promising technology for computational offloading and resource allocation in the Internet of Vehicles (IoV). When a mobile device cannot meet its own data-processing demands, its tasks can be offloaded to an MEC server, which relieves network pressure, satisfies multi-task computing requirements, and ensures quality of service (QoS). For a scenario with multiple users and multiple MEC servers, this paper proposes a Q-learning task offloading strategy based on an improved deep reinforcement learning policy (IDRLP) to obtain an optimal strategy for task offloading and resource allocation. Simulation results show that, compared with other benchmark schemes, the proposed algorithm achieves better performance in terms of delay, energy consumption and weighted system cost under varying numbers of tasks and users and different data sizes.
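The paper's IDRLP algorithm is not detailed in this abstract, so the sketch below is only a rough illustration of the kind of decision problem it addresses: choosing, per task, between local execution and offloading to one of several MEC servers so as to minimize a weighted sum of delay and energy. It uses plain tabular Q-learning rather than deep reinforcement learning, and every parameter (task sizes, CPU frequencies, uplink rate, energy coefficients, delay/energy weights) is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

# Hypothetical system parameters (illustrative only, not from the paper).
rng = np.random.default_rng(0)
NUM_TASKS = 5                                   # tasks to schedule
TASK_BITS = rng.uniform(1e6, 5e6, NUM_TASKS)    # task data sizes (bits)
CYCLES_PER_BIT = 500.0                          # CPU cycles required per bit
F_LOCAL = 1e9                                   # vehicle CPU frequency (Hz)
F_EDGE = np.array([4e9, 8e9])                   # MEC server CPU frequencies (Hz)
RATE_UP = 10e6                                  # uplink rate to the MEC servers (bit/s)
KAPPA = 1e-27                                   # effective switched-capacitance coefficient
P_TX = 0.5                                      # transmit power (W)
W_DELAY, W_ENERGY = 0.5, 0.5                    # weights of delay and energy in the cost

def weighted_cost(task, action):
    """Weighted delay + energy cost of running `task` locally (action 0)
    or offloading it to MEC server `action - 1` (uplink transfer + edge compute)."""
    bits = TASK_BITS[task]
    cycles = bits * CYCLES_PER_BIT
    if action == 0:                              # local execution on the vehicle
        delay = cycles / F_LOCAL
        energy = KAPPA * F_LOCAL**2 * cycles
    else:                                        # offload to MEC server (action - 1)
        t_up = bits / RATE_UP
        delay = t_up + cycles / F_EDGE[action - 1]
        energy = P_TX * t_up                     # only transmission energy counted here
    return W_DELAY * delay + W_ENERGY * energy

# Tabular Q-learning: state = task index, action = 0 (local) or 1..M (MEC server m).
Q = np.zeros((NUM_TASKS, len(F_EDGE) + 1))
alpha, gamma, eps = 0.1, 0.9, 0.1
for episode in range(2000):
    for s in range(NUM_TASKS):
        # epsilon-greedy action selection
        a = int(rng.integers(Q.shape[1])) if rng.random() < eps else int(Q[s].argmax())
        r = -weighted_cost(s, a)                 # reward = negative weighted cost
        s_next = (s + 1) % NUM_TASKS             # tasks handled in a fixed cycle
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print("Greedy offloading decision per task (0 = local, m = server m):", Q.argmax(axis=1))
```

After training, the row-wise argmax of the Q-table gives a per-task offloading decision; a deep reinforcement learning variant such as the paper's IDRLP would replace the table with a neural network approximator and a richer state (for example channel conditions and server load).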



    Published In

    AIPR '22: Proceedings of the 2022 5th International Conference on Artificial Intelligence and Pattern Recognition
    September 2022
    1221 pages
    ISBN: 9781450396899
    DOI: 10.1145/3573942

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    AIPR 2022

