
A federated multi-agent deep reinforcement learning for vehicular fog computing


Abstract

Vehicular fog computing is an emerging paradigm for delay-sensitive computations. In this highly dynamic resource-sharing environment, making optimal offloading decisions for effective resource utilization is a challenging task. In recent years, deep reinforcement learning has emerged as an effective approach to resource allocation problems because of its self-adapting nature in large state spaces. However, high mobility and rapid changes in network topology cause a fluctuating task arrival rate. Similarly, data sharing between vehicles and fog nodes raises a variety of security and privacy concerns. The proposed system therefore combines local and global model training. In this paper, we propose a federated multi-agent deep reinforcement learning solution that efficiently learns task-offloading decisions at multiple tiers, i.e., locally and globally. The proposed approach converges quickly because of its collaborative learning model among vehicles and fog servers: the local model runs at the vehicular nodes, and the global model runs at the fog servers. Because the models are trained locally, only limited information is shared across the network, which reduces communication overhead and improves the privacy of the agents. The proposed system is compared with greedy and stochastic approaches in terms of residence time, cost, delivery rate, and utilization ratio. We observe that the proposed approach significantly reduces task residence time, end-to-end delay, and overall system cost.
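The preview does not specify the paper's exact network architecture or aggregation rule, so the following Python sketch only illustrates the general pattern the abstract describes: each vehicle trains a small Q-network on its own offloading experience, and a fog server periodically averages the local weights into a global model that is broadcast back. The names LocalQNetwork, local_update, and federated_average are illustrative assumptions, not identifiers from the paper.

```python
# Illustrative sketch only: the paper's exact model and update rule are not
# given in this preview. Assumes DQN-style local agents at the vehicles and
# FedAvg-style parameter averaging at the fog server.
import copy

import torch
import torch.nn as nn


class LocalQNetwork(nn.Module):
    """Per-vehicle Q-network mapping a local state (e.g., queue length,
    channel quality) to offloading actions (e.g., local, fog, neighbor)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def local_update(agent, optimizer, batch, gamma=0.99):
    """One DQN-style update on a vehicle's private transitions.
    Raw task data never leaves the vehicle; bootstrapping from the online
    network (no separate target network) is a simplification."""
    states, actions, rewards, next_states = batch
    q = agent(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * agent(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def federated_average(agents):
    """Fog-server aggregation: average the local weights and broadcast the
    result back, so only model parameters cross the network."""
    global_state = copy.deepcopy(agents[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [a.state_dict()[key].float() for a in agents]).mean(dim=0)
    for a in agents:
        a.load_state_dict(global_state)
    return global_state
```

Exchanging only the state_dict tensors, rather than replay transitions, is what bounds the communication overhead and keeps each vehicle's task traces private, which is the trade-off the abstract highlights.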


Data Availability

No data are associated with this work.


Funding

No funding was received for conducting this study.

Author information


Corresponding author

Correspondence to Asad Waqar Malik.

Ethics declarations

Conflict of interest

All authors certify that they have no affiliation with, or involvement in, any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shabir, B., Rahman, A.U., Malik, A.W. et al. A federated multi-agent deep reinforcement learning for vehicular fog computing. J Supercomput 79, 6141–6167 (2023). https://doi.org/10.1007/s11227-022-04911-8

