
Energy Efficiency Optimization in Downlink NOMA-Enabled Fog Radio Access Network Based on Deep Reinforcement Learning


Abstract:

In this era of the Internet of Everything (IoE) and the rapid development of mobile wireless communication technology, low time delay and high spectrum efficiency are difficult to achieve. Fog computing (FC) is a form of edge computing that is well suited to the densely connected Internet of Things (IoT). Non-orthogonal multiple access (NOMA), a promising multiple access technology, is considered here in combination with FC. In this paper, we study how to adopt deep reinforcement learning (DRL) algorithms to optimize energy efficiency (EE) in the multi-server, multi-user scenario of a downlink fog radio access network (F-RAN) based on NOMA. To solve this nonconvex problem, it is divided into two subproblems: subchannel allocation and power allocation. The former adopts a deep Q network (DQN) and the latter adopts deep deterministic policy gradient (DDPG) to obtain the best allocation strategy. DRL is well suited to high-dimensional data, so it can be effectively applied in a dynamic communication environment. The simulation results show that the combined "DQN-DDPG" scheme achieves faster convergence and a better final system EE than the other schemes.
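The EE objective underlying this abstract is the ratio of the NOMA sum rate to the total consumed power on a subchannel, where superposition-coded users apply successive interference cancellation (SIC) in channel-gain order. The sketch below illustrates that objective for a single downlink subchannel; the bandwidth, noise, and circuit-power values are illustrative assumptions, not figures from the paper, and the function name `noma_ee` is hypothetical.

```python
import numpy as np

def noma_ee(gains, powers, bandwidth=1.0, noise=1e-3, circuit_power=0.1):
    """Sum-rate / total-power energy efficiency for one downlink NOMA subchannel.

    Users are sorted by ascending channel gain. With SIC, user k cancels the
    signals intended for weaker users and treats the signals of stronger users
    as interference. All default parameter values are illustrative assumptions.
    """
    order = np.argsort(gains)
    g = np.asarray(gains, dtype=float)[order]
    p = np.asarray(powers, dtype=float)[order]
    rates = []
    for k in range(len(g)):
        # Residual interference: stronger users' signals cannot be cancelled.
        interference = g[k] * p[k + 1:].sum()
        sinr = g[k] * p[k] / (interference + noise)
        rates.append(bandwidth * np.log2(1.0 + sinr))
    # EE = achievable sum rate divided by transmit plus circuit power.
    return sum(rates) / (p.sum() + circuit_power)
```

In the paper's decomposition, a DQN agent would pick the discrete user-to-subchannel assignment while a DDPG agent tunes the continuous power vector `powers`, with a quantity like this EE serving as the reward signal.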
Date of Conference: 26-28 November 2021
Date Added to IEEE Xplore: 30 December 2021
Conference Location: Shenzhen, China

