Abstract
Vehicular fog computing is an emerging paradigm for delay-sensitive computation. In this highly dynamic resource-sharing environment, making optimal offloading decisions for effective resource utilization is a challenging task. In recent years, deep reinforcement learning has emerged as an effective approach to resource allocation problems because of its ability to self-adapt in large state spaces. However, high mobility and rapid changes in network topology cause fluctuating task arrival rates. Moreover, data sharing between vehicles and fog nodes raises a variety of security and privacy concerns. The proposed system therefore combines local and global model training. In this paper, we propose a federated multi-agent deep reinforcement learning solution that efficiently learns task-offloading decisions at multiple tiers, i.e., locally and globally. The proposed approach converges quickly because of its collaborative learning model between vehicles and fog servers: the local model runs on the vehicular nodes, and the global model runs on the fog servers. Because the models are trained locally, only limited information is shared across the network, which reduces communication overhead and improves the privacy of the agents. The proposed system is compared with greedy and stochastic approaches in terms of residence time, cost, delivery rate, and utilization ratio. We observe that the proposed approach significantly reduces task residence time, end-to-end delay, and overall system cost.
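The abstract does not detail how fog servers combine the vehicles' locally trained models into a global model. As a rough illustration only, under the common FedAvg assumption (weighted averaging of local network parameters, with names such as `federated_average`, `v1`, and `v2` being hypothetical, not from the paper), the aggregation step at a fog server might look like this:

```python
import numpy as np

def federated_average(local_models, weights=None):
    """Aggregate local model parameters into a global model (FedAvg-style).

    local_models: list of dicts mapping parameter name -> np.ndarray.
    weights: optional per-vehicle weights (e.g. local sample counts);
             defaults to a uniform average.
    """
    n = len(local_models)
    if weights is None:
        weights = np.ones(n) / n
    else:
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
    # Weighted element-wise average of each parameter tensor.
    return {
        name: sum(w * m[name] for w, m in zip(weights, local_models))
        for name in local_models[0]
    }

# Two hypothetical vehicle agents with tiny single-layer Q-networks.
v1 = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
v2 = {"w": np.array([3.0, 4.0]), "b": np.array([1.0])}
g = federated_average([v1, v2])  # uniform average of v1 and v2
```

Only these parameter tensors, never raw observations, would cross the network in such a scheme, which is consistent with the privacy and communication-overhead claims above.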










Data availability
There is no data associated with this work.
Funding
No funding was received for conducting this study.
Ethics declarations
Conflict of interest
All authors certify that they have no affiliations with, or involvement in, any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Shabir, B., Rahman, A.U., Malik, A.W. et al. A federated multi-agent deep reinforcement learning for vehicular fog computing. J Supercomput 79, 6141–6167 (2023). https://doi.org/10.1007/s11227-022-04911-8