PeersimGym: An Environment for Solving the Task Offloading Problem with Reinforcement Learning

  • Conference paper
  • First Online:
Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track (ECML PKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14949)

Abstract

Task offloading, crucial for balancing computational loads across devices in networks such as the Internet of Things, poses significant optimization challenges, including minimizing latency and energy usage under strict communication and storage constraints. While traditional optimization falls short on scalability and heuristic approaches fail to achieve optimal outcomes, Reinforcement Learning (RL) offers a promising avenue by enabling the learning of optimal offloading strategies through iterative interactions. However, the efficacy of RL hinges on access to rich datasets and custom-tailored, realistic training environments. To address this, we introduce PeersimGym, an open-source, customizable simulation environment tailored for developing and optimizing task offloading strategies within computational networks. PeersimGym supports a wide range of network topologies and computational constraints and integrates a PettingZoo-based interface for RL agent deployment in both single- and multi-agent setups. Furthermore, we demonstrate the utility of the environment through experiments with Deep Reinforcement Learning agents, showcasing the potential of RL-based approaches to significantly enhance offloading strategies in distributed computing settings. PeersimGym thus bridges the gap between theoretical RL models and their practical applications, paving the way for advancements in efficient task offloading methodologies.
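
For context on the interface mentioned above: a PettingZoo-based environment is driven agent by agent through the standard AEC loop. The snippet below is a minimal sketch of such a loop with a random policy standing in for a trained offloading agent; the exact constructor for the PeersimGym environment (referred to here only as env) and the 5-tuple returned by last() depend on the package and PettingZoo version, so treat the names and signatures as assumptions and consult the repositories listed in the Notes.

    # Minimal sketch: one episode over a PettingZoo-style AEC environment.
    # Assumption: the environment follows the standard PettingZoo AEC API;
    # how the PeersimGym environment object is actually constructed is not shown here.

    def run_episode(env, seed=42):
        """Step each agent in turn until the episode ends (PettingZoo AEC convention)."""
        env.reset(seed=seed)
        for agent in env.agent_iter():
            # Recent PettingZoo versions return (observation, reward, terminated, truncated, info).
            obs, reward, terminated, truncated, info = env.last()
            if terminated or truncated:
                action = None  # PettingZoo expects None once an agent is done
            else:
                # A trained DRL policy would map `obs` to an offloading decision here;
                # sampling uniformly at random is purely illustrative.
                action = env.action_space(agent).sample()
            env.step(action)
        env.close()

Under these assumptions the same loop drives both single- and multi-agent setups; a Deep Q-Network or actor-critic policy would simply replace the random action sampling.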

Notes

  1. https://github.com/FredericoMetelo/peersim-environment.

  2. https://github.com/FredericoMetelo/TaskOffloadingAgentLibrary.

  3. We provide the in-depth configurations for the environment in the agent repository.

Acknowledgments

This work was partially funded by FCT IP, through NOVA LINCS (UIDB/04516/2020), by Project “Artificial Intelligence Fights Space Debris” No. C626449889-0046305, co-funded by the Recovery and Resilience Plan and NextGeneration EU Funds (www.recuperarportugal.gov.pt), and by the European Union (TARDIS, 101093006). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

Author information

Corresponding author

Correspondence to Frederico Metelo.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 125 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Metelo, F., Soares, C., Racković, S., Costa, P.Á. (2024). PeersimGym: An Environment for Solving the Task Offloading Problem with Reinforcement Learning. In: Bifet, A., Krilavičius, T., Miliou, I., Nowaczyk, S. (eds) Machine Learning and Knowledge Discovery in Databases. Applied Data Science Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14949. Springer, Cham. https://doi.org/10.1007/978-3-031-70378-2_3

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-70378-2_3

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70377-5

  • Online ISBN: 978-3-031-70378-2

  • eBook Packages: Computer Science, Computer Science (R0)
