Abstract
Wireless body area networks (WBANs) face a tough energy-efficiency challenge driven by several practical factors: the growing scale of network deployments, the rising demands of healthcare applications, and the limited manufacturing technology of sensor devices. In this work, we address the energy-saving problem in WBANs. We consider a layered network framework and hybrid channels with multiple in vivo media. A distributed power controller based on deep Q-learning is developed to mitigate the effect of inter-network interference. The proposed controller uses distributed coordinators that learn from the WBAN environment and optimize the transmit power of sensors during communication. Simulation results demonstrate that our power controller achieves higher energy efficiency than two baseline power controllers, and that a properly configured controller at the coordinators sustains this performance gain as the network scale increases.
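To make the deep Q-learning approach concrete, the sketch below shows how a single coordinator could learn a discrete transmit-power policy from local observations. It is a minimal, illustrative toy rather than the paper's implementation: the state features (own SINR, sensed inter-network interference, previous power level), the discrete power set `POWER_LEVELS_DBM`, the network size, and the energy-efficiency reward are all hypothetical choices introduced for illustration.

```python
# Minimal sketch of a deep Q-learning power controller for one WBAN coordinator.
# Illustrative only: state, reward, and power levels are assumptions, not the paper's design.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

POWER_LEVELS_DBM = [-10.0, -5.0, 0.0, 5.0]  # hypothetical discrete transmit-power actions
STATE_DIM = 3                               # e.g. [own SINR, sensed interference, last power]


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to one Q-value per power level."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 32), nn.ReLU(),
            nn.Linear(32, len(POWER_LEVELS_DBM)),
        )

    def forward(self, x):
        return self.net(x)


class CoordinatorAgent:
    """One coordinator learns its own policy from local observations (distributed setting)."""

    def __init__(self, gamma=0.9, eps=0.1, lr=1e-3):
        self.q = QNet()
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps
        self.replay = deque(maxlen=5000)

    def act(self, state):
        # Epsilon-greedy selection of a power-level index for the next frame.
        if random.random() < self.eps:
            return random.randrange(len(POWER_LEVELS_DBM))
        with torch.no_grad():
            return int(self.q(torch.tensor(state)).argmax())

    def remember(self, state, action, reward, next_state):
        self.replay.append((state, action, reward, next_state))

    def train_step(self, batch_size=32):
        # One TD update on a random minibatch from the replay buffer.
        if len(self.replay) < batch_size:
            return
        batch = random.sample(list(self.replay), batch_size)
        s, a, r, s2 = map(lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.q(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In a multi-WBAN deployment, each coordinator would run its own `CoordinatorAgent`, feed it locally measured states and an energy-efficiency reward after every frame, and apply the chosen power level to its sensors, which keeps the control loop fully distributed.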
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (Grants 61901070, 61871062, 61771082, and 61801065), in part by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grants KJQN201900611, KJQN201900604, and KJQN201900609), and in part by the Program for Innovation Team Building at Institutions of Higher Education in Chongqing (Grant CXTDX201601020).
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
He, P., Liu, M., Lan, C. et al. Distributed Power Controller of Massive Wireless Body Area Networks based on Deep Reinforcement Learning. Mobile Netw Appl 26, 1347–1358 (2021). https://doi.org/10.1007/s11036-021-01751-3