
Joint computation offloading and resource allocation based on deep reinforcement learning in C-V2X edge computing

Abstract

The integration of Cellular Vehicle-to-Everything (C-V2X) and Mobile Edge Computing (MEC) is critical for satisfying the demanding requirements of vehicular applications, which are characterized by ultra-low latency and ultra-high reliability. In this paper, we address the challenge of jointly optimizing computation offloading and resource allocation in C-V2X networks. To this end, we propose a hierarchical MEC/C-V2X network that accounts for the dynamics of the vehicular network and the diversity of computation offloading patterns, and we establish a collaborative computation offloading model that supports multiple offloading patterns. We formulate the dynamic computation offloading and resource allocation problem as a sequential decision problem based on a Markov decision process. To enable automated and intelligent decision-making, we propose a deep reinforcement learning algorithm called ORAD, built on the deep deterministic policy gradient algorithm, to maximize the offloading success rate in real time. Numerical results demonstrate that the proposed algorithm effectively learns the optimal policy, improving the offloading success rate of vehicular tasks by 2.73%, to 95.51%.
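
The full text is not reproduced on this page, but the abstract indicates that ORAD follows the deep deterministic policy gradient (DDPG) template: an actor network maps the observed vehicular/MEC state to a continuous offloading and resource-allocation action, and a critic network scores that action so the actor can be improved by gradient ascent on the estimated value, with target networks updated softly for stability. The sketch below is a minimal, hypothetical PyTorch illustration of that actor-critic loop; the state and action dimensions, network widths, hyperparameters, and the reward signal (treated here as a stand-in for offloading success) are assumptions for illustration, not the authors' implementation.

```python
# Minimal DDPG-style actor-critic sketch (PyTorch) of the kind of agent the
# abstract describes. All dimensions, hyperparameters, and the reward signal
# are illustrative assumptions, not values taken from the paper.
import torch
import torch.nn as nn

STATE_DIM = 8    # assumed state: task size, deadline, channel gain, free MEC CPU, ...
ACTION_DIM = 2   # assumed action: offloading ratio and allocated resource fraction

class Actor(nn.Module):
    """Maps the observed network state to a continuous action in [0, 1]^ACTION_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Estimates Q(state, action) for the actor's continuous action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def soft_update(target, source, tau=0.005):
    """Polyak averaging of target-network parameters, as in standard DDPG."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)

def ddpg_step(batch, actor, critic, actor_t, critic_t, opt_a, opt_c, gamma=0.99):
    """One training step on a mini-batch of (state, action, reward, next_state)."""
    state, action, reward, next_state = batch  # reward ~ per-step offloading success (assumed)

    # Critic update: regress Q(s, a) toward the bootstrapped target value.
    with torch.no_grad():
        target_q = reward + gamma * critic_t(next_state, actor_t(next_state))
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor update: ascend the critic's value of the actor's own action.
    actor_loss = -critic(state, actor(state)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    soft_update(critic_t, critic)
    soft_update(actor_t, actor)

# Example wiring with random data, just to show the pieces fit together.
actor, critic, actor_t, critic_t = Actor(), Critic(), Actor(), Critic()
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
batch = (torch.rand(32, STATE_DIM), torch.rand(32, ACTION_DIM),
         torch.rand(32, 1), torch.rand(32, STATE_DIM))
ddpg_step(batch, actor, critic, actor_t, critic_t, opt_a, opt_c)
```

A deterministic actor with a sigmoid output layer is a common way to produce bounded continuous controls such as offloading ratios and resource fractions; the actual state, action, and reward design used by ORAD should be taken from the full paper.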

Data Availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

Acknowledgements

This work was supported by the National Key Research and Development Program of China (Grant 2021YFC3300600), the National Natural Science Foundation of China (Grants 92046024, 92146002, and 61873309), and the Shanghai Science and Technology Project (Grant 22510761000).

Author information

Corresponding author

Correspondence to Xiaohan Jiang.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Hou, P., Jiang, X., Lu, Z. et al. Joint computation offloading and resource allocation based on deep reinforcement learning in C-V2X edge computing. Appl Intell 53, 22446–22466 (2023). https://doi.org/10.1007/s10489-023-04637-x
