Abstract:
Target coverage and connectivity are two of the most crucial issues in operating wireless sensor networks. However, maintaining these two factors is challenging due to the energy constraints of sensors. To this end, wireless charging has emerged as a promising solution to prolong the sensors' lifetime. In a wireless charging sensor network, a mobile charger moves around the network, stops at several charging locations, and charges the sensors via electromagnetic waves. In this study, we investigate the problem of optimizing the charging locations and charging times of the mobile charger to ensure the target coverage and connectivity of the network. Our main idea is to leverage the Deep Reinforcement Learning approach. Specifically, the mobile charger acts as an agent that receives a state containing the energy information of the sensors. The mobile charger then decides the next charging location and charging time using the state information and the knowledge learned in the past. Experimental results show that our algorithm can extend the network lifetime (i.e., the time until target coverage or connectivity is no longer guaranteed) by up to 245.9 times compared to existing algorithms.
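To make the described formulation concrete, the sketch below illustrates one plausible way to cast the problem as a reinforcement-learning loop: the state is the sensors' residual energies plus the charger's location, and the action is a (charging location, charging time) pair. All names (ChargingEnv, num_sensors, the drain/gain constants, the random placeholder policy) are illustrative assumptions and are not the paper's actual environment, reward design, or learned agent.

```python
import numpy as np

class ChargingEnv:
    """Toy environment: state = sensors' residual energies + charger location (normalized).

    This is a hypothetical sketch of the setting in the abstract, not the authors' model.
    """
    def __init__(self, num_sensors=10, num_locations=5, capacity=100.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.num_sensors, self.num_locations, self.capacity = num_sensors, num_locations, capacity
        self.energy = np.full(num_sensors, capacity)
        self.location = 0

    def state(self):
        # Energy information of all sensors plus the charger's current stop.
        return np.concatenate([self.energy / self.capacity, [self.location / self.num_locations]])

    def step(self, location, charge_time):
        # Sensors "near" the chosen location are recharged; every sensor drains while the charger works.
        served = self.rng.choice(self.num_sensors, size=3, replace=False)  # stand-in for a charging-range model
        gain = np.zeros(self.num_sensors)
        gain[served] = 2.0 * charge_time
        self.energy = np.clip(self.energy - 0.5 * charge_time + gain, 0.0, self.capacity)
        self.location = location
        alive = bool(np.all(self.energy > 0))  # crude proxy for coverage and connectivity being preserved
        return self.state(), (1.0 if alive else -10.0), not alive

# Placeholder random policy standing in for the learned DRL agent:
# at each step it picks the next charging location and a dwell (charging) time.
env = ChargingEnv()
state, done, lifetime = env.state(), False, 0
while not done and lifetime < 1000:
    location = int(np.random.randint(env.num_locations))
    charge_time = float(np.random.uniform(1.0, 5.0))
    state, reward, done = env.step(location, charge_time)
    lifetime += 1
print("episode length (proxy for network lifetime):", lifetime)
```

In the paper's approach, the random policy above would be replaced by a deep RL agent that maps the energy-state vector to the next charging location and charging time; the sketch only shows where such an agent plugs into the decision loop.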
Published in: 2022 IEEE 33rd Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)
Date of Conference: 12-15 September 2022
Date Added to IEEE Xplore: 20 December 2022