Abstract
Dynamic topology, the lack of fixed infrastructure, and limited energy make mobile ad-hoc networks (MANETs) a challenging operational environment. MANET routing protocols should therefore track dynamic network changes (e.g., link qualities and nodes' residual energy) and adapt to them to handle traffic flows efficiently. In this paper, we consider an energy-harvesting MANET in which the nodes have recharging capability, so their residual energy levels vary randomly over time. We present a bi-objective intelligent routing protocol that aims to minimize an expected long-run cost function composed of end-to-end delay and path energy cost. We formulate the routing problem as a Markov decision process that captures both the link-state dynamics due to node mobility and the energy-state dynamics due to the nodes' rechargeable energy sources. We propose a multi-agent reinforcement learning (RL) algorithm to approximate the optimal routing policy in the absence of a priori knowledge of the system statistics. The proposed algorithm is built on the principles of model-based RL: we model each node's cost function by deriving an expression for the expected end-to-end cost, and we estimate the transition probabilities online using a tabular maximum-likelihood method. Simulation results show that our model-based scheme outperforms its model-free counterpart and operates close to standard value iteration, which assumes perfect statistics.
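The two ingredients the abstract combines — tabular maximum-likelihood estimation of transition probabilities and dynamic-programming planning on the estimated model — can be sketched as follows. This is an illustrative toy, not the paper's protocol: the state/action sizes, costs, and sampling loop are hypothetical placeholders standing in for the link/energy states and routing decisions of the actual MDP.

```python
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(0)

# Hypothetical ground-truth MDP, used only to generate sample transitions.
P_true = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Tabular maximum-likelihood estimation: count observed transitions
# and normalize the counts into empirical transition probabilities.
counts = np.zeros((n_states, n_actions, n_states))
for _ in range(20000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.choice(n_states, p=P_true[s, a])
    counts[s, a, s_next] += 1
P_hat = counts / counts.sum(axis=2, keepdims=True)

# Value iteration on a (possibly estimated) model, minimizing
# expected long-run discounted cost.
def value_iteration(P, c, gamma=0.9, tol=1e-8):
    V = np.zeros(n_states)
    while True:
        Q = c + gamma * P @ V          # Q[s, a] = c(s,a) + gamma * E[V(s')]
        V_new = Q.min(axis=1)          # greedy over actions (cost minimization)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)
        V = V_new

V_hat, policy = value_iteration(P_hat, cost)   # planning on learned model
V_star, _ = value_iteration(P_true, cost)      # planning with perfect statistics
print("max value error vs. perfect statistics:", np.max(np.abs(V_hat - V_star)))
```

With enough samples per state-action pair, the value function computed from the learned model approaches the one computed with perfect statistics, which mirrors the comparison against standard value iteration reported in the abstract.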
Cite this article
Maleki, M., Hakami, V. & Dehghan, M. A Model-Based Reinforcement Learning Algorithm for Routing in Energy Harvesting Mobile Ad-Hoc Networks. Wireless Pers Commun 95, 3119–3139 (2017). https://doi.org/10.1007/s11277-017-3987-8