
An Integrated Approach of Efficient Edge Task Offloading Using Deep RL, Attention and MDS Techniques

  • Original Research
  • Published:
SN Computer Science

Abstract

Efficient task offloading and cost reduction remain major challenges in Distributed Computation Optimization (DCO) networks, where clients distribute computational jobs among heterogeneous helpers with differing capacities and pricing models. To tackle this problem, this study presents a method based on Deep Reinforcement Learning (DRL). Because the DRL algorithm adapts to the dynamic and stochastic nature of DCO environments, clients can independently identify the best course of action without prior knowledge of network dynamics. Combining Maximum Distance Separable (MDS) coding with DRL yields a stable framework for task offloading in distributed computing environments. The proposed method learns the optimal policy for helper selection and task offloading by combining environment modeling, action selection, reward estimation, and iterative learning. In experimental evaluations of energy consumption, density, and total reward, the DRL algorithm outperforms traditional techniques such as Q-learning and random selection. Integrating DQN with an attention mechanism improves performance further, underscoring the potential of the approach to transform the efficiency of DCO networks. This work demonstrates the significant effect of DRL techniques on optimizing network operations and offers insight into future developments in distributed computation paradigms.
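As a rough illustration of the helper-selection loop the abstract describes (not the authors' implementation), the sketch below uses tabular Q-learning, the baseline the paper compares against, to pick among helpers with differing speeds and prices. All helper parameters and reward weights are invented for the demo.

```python
import random

# Hypothetical helpers: speed (tasks/sec) and price (cost/task) are
# illustrative numbers, not values from the paper.
HELPER_SPEED = [2.0, 5.0, 9.0]
HELPER_PRICE = [1.0, 2.0, 6.0]

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = [0.0] * len(HELPER_SPEED)          # single-state Q-table over helper actions

def reward(h):
    # Lower latency is good, higher price is bad; the 0.2 weight is an assumption.
    latency = 1.0 / HELPER_SPEED[h]
    return -(latency + 0.2 * HELPER_PRICE[h])

random.seed(0)
for step in range(2000):
    # Epsilon-greedy action selection over helpers.
    if random.random() < EPS:
        a = random.randrange(len(Q))
    else:
        a = max(range(len(Q)), key=Q.__getitem__)
    r = reward(a)
    # Standard Q-learning update toward the observed reward.
    Q[a] += ALPHA * (r + GAMMA * max(Q) - Q[a])

best = max(range(len(Q)), key=Q.__getitem__)
print("learned best helper:", best)
```

With these toy numbers the policy settles on the mid-tier helper, which balances latency against price; the paper's DRL approach replaces the Q-table with a deep network (DQN) and an attention mechanism.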
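The MDS coding component can be sketched as follows: an (n, k) MDS code turns a task's data into n coded chunks such that any k of them suffice for recovery, so stragglers among the helpers can simply be ignored. This demo uses a real-valued Vandermonde generator for clarity; the parameters are invented, and a production system would work over a finite field.

```python
import numpy as np

n, k = 5, 3                                # n coded chunks, any k recover
data = np.array([3.0, 1.0, 4.0])           # k "sub-task" values (illustrative)
nodes = np.arange(1, n + 1, dtype=float)   # distinct evaluation points
G = np.vander(nodes, k, increasing=True)   # n x k Vandermonde generator matrix
chunks = G @ data                          # one coded chunk per helper

# Suppose helpers 0 and 3 straggle: any k surviving rows of a Vandermonde
# matrix with distinct nodes are invertible, so recovery still succeeds.
alive = [1, 2, 4]
recovered = np.linalg.solve(G[alive], chunks[alive])
print(np.allclose(recovered, data))        # True: data recovered from k chunks
```

This is the straggler-mitigation property that makes MDS coding a natural complement to the learned offloading policy: the policy chooses which helpers to use, and the code guarantees that any k responses are enough.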


[Figures 1–6 and Algorithm 1: see full article]


Availability of supporting data

The data supporting this study are available from the corresponding author on reasonable request.


Funding

Not applicable.

Author information


Contributions

Priyadarshni: conceptualization, methodology, experimentation, writing. Praveen Kumar, Dhruvan Kadawala, and Shivani Tripathi: conceptualization, experimentation, editing. Rajiv Misra: conceptualization, review, supervision.

Corresponding author

Correspondence to Priyadarshni.

Ethics declarations

Conflict of interest

Not applicable.

Ethical approval

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Priyadarshni, Kumar, P., Kadavala, D. et al. An Integrated Approach of Efficient Edge Task Offloading Using Deep RL, Attention and MDS Techniques. SN COMPUT. SCI. 5, 681 (2024). https://doi.org/10.1007/s42979-024-03018-6
