
ARLO: An asynchronous update reinforcement learning-based offloading algorithm for mobile edge computing

Published in Peer-to-Peer Networking and Applications

Abstract

Processing large volumes of data places unprecedented demands on device computing power, and resource-constrained mobile devices clearly struggle to meet this need. As a distributed computing paradigm, edge computing can relieve mobile devices of computation-intensive tasks, reducing their load and improving processing efficiency. Traditional offloading methods adapt poorly and fail in some harsh settings. In this work we reduce the problem to binary offloading decisions and propose a new Asynchronous Update Reinforcement Learning-based Offloading (ARLO) algorithm. The method employs a distributed learning strategy with five sub-networks and a central public network. Each sub-network has the same structure; each interacts with its own environment to learn and to update the public network, and periodically pulls the public network's parameters. Each sub-network also maintains an experience pool, which reduces data correlation and is particularly effective at preventing the model from falling into a local optimum. The main reason for using asynchronous multithreading is that multiple threads learn the policy simultaneously, which speeds up training; moreover, once the model is trained, the five threads can run in parallel and handle tasks from different users. Simulation results show that the algorithm is adaptive and makes well-optimized offloading decisions in a timely manner, even in a time-varying network environment, with a significant gain in computational efficiency over traditional methods and other reinforcement learning methods.
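The asynchronous update scheme the abstract describes — parallel worker sub-networks that push updates to a central public network and periodically pull its parameters — can be sketched in a few lines. The sketch below is purely illustrative: the class names, the toy gradient signal, and the update rule are assumptions for demonstration, not the authors' implementation or network architecture.

```python
import threading
import random

class PublicNetwork:
    """Central shared parameters, updated by all workers under a lock."""
    def __init__(self, n_params=4):
        self.params = [0.0] * n_params
        self.lock = threading.Lock()

    def apply_gradient(self, grad, lr=0.1):
        # Asynchronous update: whichever worker arrives first applies its step.
        with self.lock:
            self.params = [p - lr * g for p, g in zip(self.params, grad)]

    def snapshot(self):
        with self.lock:
            return list(self.params)

class Worker:
    """One sub-network: interacts with its environment, keeps an
    experience pool, and periodically pulls the public parameters."""
    def __init__(self, public, sync_every=5, steps=20):
        self.public = public
        self.local = public.snapshot()
        self.experience = []          # per-worker experience pool
        self.sync_every = sync_every
        self.steps = steps

    def run(self):
        for step in range(1, self.steps + 1):
            # Stand-in for environment interaction: a random gradient signal.
            grad = [random.uniform(-1, 1) for _ in self.local]
            self.experience.append(grad)
            self.public.apply_gradient(grad)
            if step % self.sync_every == 0:
                # Pull the central network's parameters "every once in a while".
                self.local = self.public.snapshot()

# Five workers, mirroring the five sub-networks in the paper.
public = PublicNetwork()
workers = [Worker(public) for _ in range(5)]
threads = [threading.Thread(target=w.run) for w in workers]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each worker explores its own trajectory and samples from its own experience pool, the parameter updates arriving at the public network are decorrelated, which is the property the abstract credits with avoiding local optima.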


Data availability

Some or all data, models, or codes generated or used during the study are available from the corresponding author by request.


Acknowledgements

The authors would like to sincerely thank the editor and the anonymous reviewers for their valuable suggestions to improve the quality of this work.


Funding

This research was funded by the National Natural Science Foundation of China (Grant no. 60971088) and the Natural Science Foundation of Shandong Province (Grant no. ZR2021MF013).

Author information


Contributions

Zhibin Liu and Yuhan Liu contributed to the conception of the study; Yuhan Liu and Zhenyou Zhou performed the experiment; Zhibin Liu, Yuhan Liu, and Xinshui Wang contributed significantly to the analysis and manuscript preparation; Zhibin Liu, Yuhan Liu, and Xinshui Wang performed the data analyses and wrote the manuscript; Yuxia Lei helped perform the analysis with constructive discussions.

Corresponding author

Correspondence to Zhibin Liu.

Ethics declarations

Ethics approval

The submitted works are original and have not been published elsewhere in any form or language (partially or in full), nor are they under consideration by another publisher. This manuscript has no plagiarism, fabrication, falsification, or inappropriate manipulation (including image-based manipulation).

Consent to publish

All the authors agree to publication in Peer-to-Peer Networking and Applications.

Conflict of interest

As the corresponding author, I confirm that this manuscript has been read and approved for submission by all the named authors. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The authors are responsible for the correctness of the statements provided in the manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liu, Z., Liu, Y., Lei, Y. et al. ARLO: An asynchronous update reinforcement learning-based offloading algorithm for mobile edge computing. Peer-to-Peer Netw. Appl. 16, 1468–1480 (2023). https://doi.org/10.1007/s12083-023-01490-0

