
New Two-Stage Deep Reinforcement Learning for Task Admission and Channel Allocation of Wireless-Powered Mobile Edge Computing

Publisher: IEEE

Abstract:

This paper presents a new two-stage deep Q-network (DQN), referred to as "TS-DQN", for online optimization of wireless power transfer (WPT)-powered mobile edge computing (MEC) systems, where the WPT, offloading schedule, channel allocation, and the CPU configurations of the edge server and devices are jointly optimized to minimize the long-term average energy requirement of the systems. The key idea is to design a DQN to learn the channel allocation and task admission, while the WPT, offloading time and CPU configurations are efficiently optimized to precisely evaluate the reward of the DQN and substantially reduce its action space. A new action generation method is developed to expand and diversify the actions of the DQN, hence further accelerating its convergence. Simulation shows that the gain of the TS-DQN in energy saving is nearly 60% compared to its potential alternatives.
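The two-stage idea in the abstract can be sketched in miniature: an outer DQN-like agent chooses only the discrete action (task admission and channel allocation), while the continuous variables (the WPT/offloading time split) are optimized inside the reward evaluation, which both evaluates the reward precisely and shrinks the action space the agent must explore. The toy energy model, the grid search (a stand-in for the paper's convex optimization), and the random action sampler (a stand-in for a trained DQN) below are all illustrative assumptions, not details from the paper.

```python
import random

N_DEVICES = 3    # hypothetical number of wireless-powered devices
N_CHANNELS = 2   # hypothetical number of offloading channels

def energy(action, time_split):
    """Toy energy model: an admitted task offloads on its channel and
    costs less when that channel gets a larger time fraction; a
    rejected task runs locally at a fixed penalty."""
    admit, channel = action
    cost = 0.0
    for i in range(N_DEVICES):
        if admit[i]:
            cost += 1.0 / (0.1 + time_split[channel[i]])
        else:
            cost += 5.0  # local-execution penalty (illustrative)
    return cost

def evaluate_reward(action, grid=11):
    """Stage 2: for a fixed discrete action, optimize the continuous
    time split by grid search and return the negated minimum energy
    as the reward fed back to the stage-1 agent."""
    best = float("inf")
    for k in range(grid):
        t0 = k / (grid - 1)
        best = min(best, energy(action, [t0, 1.0 - t0]))
    return -best

def random_action(rng):
    """Stage 1 placeholder: a trained DQN would rank these discrete
    admission/channel actions instead of sampling them uniformly."""
    admit = [rng.random() < 0.5 for _ in range(N_DEVICES)]
    channel = [rng.randrange(N_CHANNELS) for _ in range(N_DEVICES)]
    return admit, channel

rng = random.Random(0)
best_action = max((random_action(rng) for _ in range(50)),
                  key=evaluate_reward)
print(best_action, evaluate_reward(best_action))
```

Because the inner optimization is folded into `evaluate_reward`, the agent's action space contains only the discrete admission and channel choices, which is the action-space reduction the abstract highlights.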
Date of Conference: 16-20 May 2022
Date Added to IEEE Xplore: 11 August 2022
Conference Location: Seoul, Korea, Republic of
