Abstract:
We study the charging routing problem faced by a smart electric vehicle (EV) that looks for an EV charging station (EVCS) to fulfill its battery charging demand. Leveraging real-time information from both the power system and the intelligent transportation system, the EV seeks to minimize the sum of the travel cost (to a selected EVCS) and the charging cost (a weighted sum of the electricity cost and the waiting-time cost at the EVCS). We formulate the problem as a Markov decision process with unknown dynamics of the system uncertainties (in traffic conditions, charging prices, and waiting times). To mitigate the curse of dimensionality, we reformulate the deterministic charging routing problem (a mixed-integer program) as a two-level shortest-path (SP)-based optimization problem that can be solved in polynomial time. Its low-dimensional solution is fed into a state-of-the-art deep reinforcement learning (DRL) algorithm, the advantage actor–critic (A2C) method, to make efficient online routing decisions. Numerical results on a real-world transportation network demonstrate that the proposed SP-based A2C approach outperforms the classical A2C method and two alternative SP-based DRL methods.
Published in: IEEE Internet of Things Journal (Volume: 9, Issue: 22, 15 November 2022)
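
As a rough illustration of the lower, shortest-path layer described in the abstract, the sketch below scores each candidate EVCS by its Dijkstra travel cost plus a weighted charging cost (electricity price times demand, plus a waiting-time penalty) and selects the cheapest station. The graph, the weights alpha and beta, the station data, and the networkx usage are all assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the deterministic lower layer: combine shortest-path
# travel cost with a weighted charging cost and pick the cheapest EVCS.
import networkx as nx


def pick_station(road_graph, origin, stations, demand_kwh,
                 price_per_kwh, wait_min, alpha=1.0, beta=0.5):
    """Return (best_station, total_cost) under a snapshot of current data."""
    # Travel cost from the origin to every reachable node (edge attribute 'cost').
    travel = nx.single_source_dijkstra_path_length(road_graph, origin, weight="cost")
    best, best_cost = None, float("inf")
    for s in stations:
        if s not in travel:
            continue  # station unreachable in the current network snapshot
        # Charging cost: weighted electricity cost plus weighted waiting time.
        charging = alpha * price_per_kwh[s] * demand_kwh + beta * wait_min[s]
        total = travel[s] + charging
        if total < best_cost:
            best, best_cost = s, total
    return best, best_cost


# Toy usage with made-up data.
G = nx.Graph()
G.add_weighted_edges_from(
    [("o", "a", 2.0), ("a", "s1", 1.5), ("o", "s2", 5.0)], weight="cost"
)
print(pick_station(G, "o", ["s1", "s2"], demand_kwh=20,
                   price_per_kwh={"s1": 0.30, "s2": 0.25},
                   wait_min={"s1": 10, "s2": 2}))
```

In the paper's setting, the low-dimensional output of this kind of shortest-path layer (rather than the raw network state) would serve as the input to the A2C policy, which handles the unknown dynamics of prices, traffic, and waiting times online.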