
Meta-MADDPG: Achieving Transfer-Enhanced MEC Scheduling via Meta Reinforcement Learning

  • Conference paper
Wireless Algorithms, Systems, and Applications (WASA 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13473)


Abstract

With the assistance of mobile edge computing (MEC), mobile devices (MDs) can optionally offload computationally heavy local tasks to edge servers that are typically deployed at the network edge. In this way, both task latency and the energy consumption of MDs can be reduced, significantly improving mobile users’ quality of experience. Although researchers have designed numerous MEC scheduling algorithms, most are trained to solve specific tasks, leaving their performance in other MEC environments in doubt. To address this issue, this paper first formulates an optimization problem that minimizes both task delay and energy consumption, and then transforms it into a Markov decision process that is solved using the state-of-the-art multi-agent deep reinforcement learning method MADDPG. Furthermore, to improve overall performance across diverse MEC environments, we integrate MADDPG with meta-learning and propose Meta-MADDPG, which is carefully designed with dedicated reward functions. Evaluation results show that Meta-MADDPG outperforms state-of-the-art algorithms when confronting new environments.
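The abstract describes wrapping MADDPG training in a meta-learning outer loop so the resulting policy adapts quickly to new MEC environments. The paper's exact update rules and reward functions are not given in this excerpt; the sketch below illustrates the general pattern only, using a Reptile-style meta-update around a stand-in inner loop. The environment definitions, step sizes, and iteration counts are all illustrative assumptions, and `inner_update` stands in for per-environment MADDPG gradient steps.

```python
# Illustrative sketch of a meta-learning outer loop over multiple
# environments (Reptile-style), wrapped around per-environment policy
# updates. NOT the paper's Meta-MADDPG: `inner_update` is a toy stand-in
# for MADDPG training, and each "environment" is a quadratic pseudo-loss
# standing in for a distinct MEC scheduling task.
import random

def inner_update(params, env_grad_fn, steps=5, lr=0.1):
    """Stand-in for MADDPG training inside one MEC environment."""
    p = list(params)
    for _ in range(steps):
        g = env_grad_fn(p)                 # pseudo-gradient from this env
        p = [pi - lr * gi for pi, gi in zip(p, g)]
    return p

def meta_train(envs, dim=4, meta_iters=100, meta_lr=0.5, seed=0):
    """Outer loop: nudge meta-parameters toward each env's adapted params."""
    random.seed(seed)
    meta = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    for _ in range(meta_iters):
        env_grad_fn = random.choice(envs)  # sample an MEC task/environment
        adapted = inner_update(meta, env_grad_fn)
        # Reptile meta-update: move meta-parameters toward adapted parameters
        meta = [m + meta_lr * (a - m) for m, a in zip(meta, adapted)]
    return meta

def make_env(target):
    """Quadratic pseudo-loss with its own optimum (a toy 'task')."""
    return lambda p: [2.0 * (pi - ti) for pi, ti in zip(p, target)]

envs = [make_env([0.5] * 4), make_env([0.7] * 4)]
meta_params = meta_train(envs)
```

After meta-training, `meta_params` sits between the two tasks' optima, so a few inner-loop steps suffice to specialize it to either one — the same fast-adaptation property Meta-MADDPG targets when a scheduler faces a new MEC environment.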




Author information

Correspondence to Tao Ren.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yao, Y., Ren, T., Cui, M., Liu, D., Niu, J. (2022). Meta-MADDPG: Achieving Transfer-Enhanced MEC Scheduling via Meta Reinforcement Learning. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13473. Springer, Cham. https://doi.org/10.1007/978-3-031-19211-1_47


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19210-4

  • Online ISBN: 978-3-031-19211-1

  • eBook Packages: Computer Science (R0)
