Abstract
In this chapter, we describe the design of Deep Learning-based control schemes for energy self-sustainable mobile networks. The goal is to enable intelligent energy management that allows base stations to operate mostly off-grid using renewable energy. To achieve this goal, we formulate an online grid-energy and network-throughput optimization problem, considering both centralized and distributed Deep Reinforcement Learning implementations. We provide an exhaustive discussion of the reference scenario, the techniques adopted, the achieved performance, and the complexity and feasibility of the proposed models, together with the energy and cost savings attained. Results demonstrate that Deep Q-Learning-based algorithms represent a viable and economically convenient solution for enabling energy self-sustainability of mobile networks grouped in micro-grids.
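For context, the Deep Q-Learning (DQL) schemes the abstract refers to build on the standard, textbook Q-learning update, which the deep variant approximates with a neural network trained by stochastic gradient descent. Stated for reference in the chapter's notation (learning rate \(\alpha\), discount factor \(\gamma\), reward \(r_t\); see the list below), and not as a formula quoted from the chapter, the update reads:

\[
Q(\boldsymbol{X}^t, \boldsymbol{A}^t) \leftarrow Q(\boldsymbol{X}^t, \boldsymbol{A}^t) + \alpha \left[ r_t + \gamma \max_{\boldsymbol{A}'} Q(\boldsymbol{X}^{t+1}, \boldsymbol{A}') - Q(\boldsymbol{X}^t, \boldsymbol{A}^t) \right]
\]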
Abbreviations
- ANN: Artificial neural network
- BB: Baseband
- BS: Base station
- DRL: Deep reinforcement learning
- DDRL: Distributed deep reinforcement learning
- DP: Dynamic programming
- DQL: Deep Q-learning
- EH: Energy harvesting
- FQL: Fuzzy Q-learning
- MBS: Macro base station
- MDP: Markov decision process
- MEC: Multi-access edge computing
- MRL: Multi-agent reinforcement learning
- NFV: Network function virtualization
- RAN: Radio access network
- RL: Reinforcement learning
- SBS: Small base station
- SDN: Software defined networking
- SGD: Stochastic gradient descent
- vBS: Virtual small base station
- \(\boldsymbol{A}^t\): Operative states (control actions) of the SBSs in slot t
- \(\boldsymbol{B}^t\): Energy stored in the batteries at the beginning of slot t
- \(\boldsymbol{H}^t\): Energy harvested by the SBSs in slot t
- \({h}^t\): Hour of the day in slot t
- \({m}^t\): Month in slot t
- \(\boldsymbol{L}^t\): Traffic load generated inside the coverage of the vBSs in slot t
- \(r_t\): Scalar reward signal
- \(\boldsymbol{X}^t\): State of the vBSs in slot t
- \(\alpha\): Learning rate
- \(\varepsilon\): Exploration parameter
- \(\gamma\): Discount factor
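To make the roles of these symbols concrete, the following is a minimal, self-contained sketch of the kind of ε-greedy Deep Q-Learning agent the chapter studies, written here in PyTorch. Everything in it (state and action dimensions, network size, hyperparameter values, the name q_net, and the placeholder environment loop) is an illustrative assumption rather than the chapter's actual code.

```python
# Sketch of an ε-greedy Deep Q-Learning agent built around the notation
# above. Illustrative only: dimensions, hyperparameters, architecture and
# the placeholder environment loop are all assumptions, not the chapter's
# implementation.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 5   # assumed encoding of X^t, e.g. (B^t, H^t, L^t, h^t, m^t)
N_ACTIONS = 3   # assumed number of operative states (control actions) in A^t
ALPHA = 1e-3    # learning rate α
EPSILON = 0.1   # exploration parameter ε
GAMMA = 0.95    # discount factor γ

# The Q-network maps a state vector to one Q-value per control action.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=ALPHA)
replay = deque(maxlen=10_000)  # experience replay buffer

def select_action(state: torch.Tensor) -> int:
    """ε-greedy policy: random action with probability ε, greedy otherwise."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size: int = 32) -> None:
    """One SGD step toward the target r_t + γ · max_a Q(X^{t+1}, a).

    A full DQN would also keep a separate, slowly updated target network;
    that refinement is omitted here for brevity.
    """
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch])
    s_next = torch.stack([b[3] for b in batch])
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Placeholder interaction loop; a real environment would supply the harvested
# energy H^t, the traffic load L^t, and a grid-energy/throughput reward r_t.
state = torch.rand(STATE_DIM)
for t in range(200):
    action = select_action(state)
    reward = random.random()            # stand-in reward signal
    next_state = torch.rand(STATE_DIM)  # stand-in next state X^{t+1}
    replay.append((state, action, reward, next_state))
    train_step()
    state = next_state
```

The sketch corresponds to a single, centralized learner; in the distributed (DDRL) variant the chapter considers, each vBS would run such an agent locally.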
Acknowledgements
This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 675891 (SCAVENGE) and from the Spanish MINECO grant TEC2017-88373-R (5G-REFINE).
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Miozzo, M., Piovesan, N., Temesgene, D.A., Dini, P. (2021). Deep Reinforcement Learning for Autonomous Mobile Networks in Micro-grids. In: Koubaa, A., Azar, A.T. (eds) Deep Learning for Unmanned Systems. Studies in Computational Intelligence, vol 984. Springer, Cham. https://doi.org/10.1007/978-3-030-77939-9_8
DOI: https://doi.org/10.1007/978-3-030-77939-9_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-77938-2
Online ISBN: 978-3-030-77939-9
eBook Packages: Intelligent Technologies and Robotics, Intelligent Technologies and Robotics (R0)