Shortest-Path-Based Deep Reinforcement Learning for EV Charging Routing Under Stochastic Traffic Condition and Electricity Prices


Abstract:

We study the charging routing problem faced by a smart electric vehicle (EV) that looks for an EV charging station (EVCS) to fulfill its battery charging demand. Leveraging the real-time information from both the power and intelligent transportation systems, the EV seeks to minimize the sum of the travel cost (to a selected EVCS) and the charging cost (a weighted sum of electricity cost and waiting time cost at the EVCS). We formulate the problem as a Markov decision process with unknown dynamics of system uncertainties (in traffic conditions, charging prices, and waiting time). To mitigate the curse of dimensionality, we reformulate the deterministic charging routing problem (a mixed-integer program) as a two-level shortest-path (SP)-based optimization problem that can be solved in polynomial time. Its low dimensional solution is input into a state-of-the-art deep reinforcement learning (DRL) algorithm, the advantage actor–critic (A2C) method, to make efficient online routing decisions. Numerical results (on a real-world transportation network) demonstrate that the proposed SP-based A2C approach outperforms the classical A2C method and two alternative SP-based DRL methods.
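The two-level decomposition described above can be illustrated with a minimal sketch: a lower level that computes shortest-path travel costs from the EV's origin to every node (here via a plain Dijkstra search, which runs in polynomial time on non-negative edge costs), and an upper level that adds each candidate EVCS's charging cost and picks the minimizer. The function names, the unit-weight combination of electricity and waiting costs, and the dictionary-based graph encoding are illustrative assumptions, not the paper's actual formulation.

```python
import heapq


def dijkstra(adj, source):
    """Lower level: shortest travel cost from source to every reachable node.

    adj maps a node to a list of (neighbor, edge_cost) pairs; edge costs
    must be non-negative for Dijkstra's algorithm to be correct.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist


def best_evcs(adj, origin, stations):
    """Upper level: minimize travel cost plus charging cost over all EVCS.

    stations maps an EVCS node to (electricity_cost, waiting_time_cost).
    Costs are simply summed here; the paper uses a weighted sum, so real
    trade-off weights would multiply each term (assumed 1.0 below).
    """
    dist = dijkstra(adj, origin)
    best, best_total = None, float("inf")
    for node, (elec, wait) in stations.items():
        total = dist.get(node, float("inf")) + elec + wait
        if total < best_total:
            best, best_total = node, total
    return best, best_total


# Toy network: node 0 is the EV's origin; EVCSs sit at nodes 2 and 3.
adj = {0: [(1, 1.0), (2, 4.0)], 1: [(3, 2.0)], 2: [(3, 1.0)]}
stations = {3: (2.0, 1.0), 2: (1.0, 0.5)}
choice, total_cost = best_evcs(adj, 0, stations)
```

In the deterministic setting this enumeration over stations is the whole decision; the paper's contribution is feeding such low-dimensional SP solutions into an A2C agent so the choice adapts online to stochastic traffic, prices, and waiting times.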
Published in: IEEE Internet of Things Journal ( Volume: 9, Issue: 22, 15 November 2022)
Page(s): 22571 - 22581
Date of Publication: 09 June 2022
