
Energy-efficient UAV-enabled computation offloading for industrial internet of things: a deep reinforcement learning approach

Published in Wireless Networks

Abstract

The Industrial Internet of Things (IIoT) has been envisioned as a killer application of 5G and beyond. However, due to their limited computation ability and battery capacity, it is challenging for IIoT devices to process latency-sensitive and resource-intensive tasks. Mobile Edge Computing (MEC), a promising paradigm for handling tasks with high quality of service (QoS) requirements on energy-constrained IIoT devices, allows IIoT devices to offload their tasks to MEC servers, which can significantly reduce task processing delay and energy consumption. However, the deployment of MEC servers relies heavily on communication infrastructure, which greatly reduces flexibility. To this end, in this paper we consider multiple Unmanned Aerial Vehicles (UAVs) equipped with transceivers as aerial MEC servers that, owing to their high controllability, provide IIoT devices with computation offloading opportunities. Each IIoT device can offload its tasks to UAVs through air-ground links, offload them to the remote cloud center through the ground cellular network, or process them locally. We formulate the multi-UAV-enabled computation offloading problem as a mixed integer non-linear programming (MINLP) problem and prove its NP-hardness. To obtain an energy-efficient and low-complexity solution, we propose an intelligent computation offloading algorithm called multi-agent deep Q-learning with stochastic prioritized replay (MDSPR). Numerical results show that the proposed MDSPR converges fast and outperforms the benchmark algorithms, including the random method, deep Q-learning, and double deep Q-learning, in terms of energy efficiency and task success rate.
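For readers unfamiliar with the building blocks named in the abstract, the sketch below illustrates deep Q-learning combined with a stochastic (proportional) prioritized replay buffer, the two ingredients underlying MDSPR. It is a minimal illustration under our own assumptions, not the authors' implementation: the class names, network sizes, hyper-parameters, and the encoding of the offloading action space (local / UAV m / remote cloud) are placeholders chosen for clarity.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class PrioritizedReplay:
    """Proportional prioritized replay: transitions are sampled with
    probability proportional to their TD-error-based priority."""

    def __init__(self, capacity=10000, alpha=0.6):
        self.buffer = deque(maxlen=capacity)
        self.priorities = deque(maxlen=capacity)
        self.alpha = alpha

    def push(self, transition, td_error=1.0):
        # New transitions get a priority derived from their TD error.
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-5) ** self.alpha)

    def sample(self, batch_size):
        probs = np.array(self.priorities) / sum(self.priorities)
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        return [self.buffer[i] for i in idx]


class QNet(nn.Module):
    """Small MLP mapping a device's local observation to one Q-value per
    offloading decision (process locally, offload to UAV m, or to the cloud)."""

    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions))

    def forward(self, x):
        return self.net(x)


def train_step(q_net, target_net, replay, optimizer, batch_size=32, gamma=0.99):
    """One Q-learning update on a prioritized minibatch of
    (obs, action, reward, next_obs) transitions."""
    batch = replay.sample(batch_size)
    obs, act, rew, next_obs = map(np.array, zip(*batch))
    obs = torch.tensor(obs, dtype=torch.float32)
    act = torch.tensor(act, dtype=torch.int64).unsqueeze(1)
    rew = torch.tensor(rew, dtype=torch.float32)
    next_obs = torch.tensor(next_obs, dtype=torch.float32)

    # Q(s, a) for the actions actually taken.
    q = q_net(obs).gather(1, act).squeeze(1)
    # Bootstrapped target from a slowly-updated target network.
    with torch.no_grad():
        target = rew + gamma * target_net(next_obs).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a multi-agent setting of the kind the abstract describes, one would typically instantiate a separate QNet, target copy, and PrioritizedReplay per IIoT device, with the reward shaped around energy consumption and task-completion deadlines; the target network is refreshed from the online network every few hundred steps.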



Author information

Corresponding author

Correspondence to Shuo Shi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This paper was presented in part at EAI AICON 2020: the 2nd EAI International Conference on Artificial Intelligence for Communications and Networks, December 19-20, 2020, Cyberspace. Compared with the conference paper, we have made the abstract more precise. To make the introduction more substantial, we refer to more related papers and provide an in-depth survey of recent work, and the contributions of the paper are stated more clearly. Furthermore, the detailed algorithm is given, and simulation results are added to further demonstrate the effectiveness of the proposed scheme. Finally, we add a new discussion section on potential applications of the proposed algorithm in the field of wireless communication.


About this article

Cite this article

Shi, S., Wang, M., Gu, S. et al. Energy-efficient UAV-enabled computation offloading for industrial internet of things: a deep reinforcement learning approach. Wireless Netw 30, 3921–3934 (2024). https://doi.org/10.1007/s11276-021-02789-7
