Abstract:
Electric Vehicle (EV) charging coordination is gaining interest as a means of better integrating EV depots into the electrical grid. Several solutions exist, ranging from rule-based control to, more recently, Reinforcement Learning (RL) techniques. Batch RL approaches are particularly interesting for their ability to use past data to train the controller. However, such solutions typically rely on action-space discretization, which does not leverage the continuous charging set-points of Electric Vehicle Supply Equipment (EVSE), hindering the scalability of these solutions. In this paper, we leverage the dual annealing global optimization algorithm to pick continuous actions from a neural-network RL agent trained with fitted Q-iteration on synthetic data from a custom depot simulator. Results for a one-year simulation of a depot with 10 EVSEs are reported and compare favorably with a random policy over several criteria.
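The core idea described in the abstract, selecting continuous charging set-points by running a global optimizer over a learned Q-function, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random-weight network stands in for a fitted Q-iteration model, and the EVSE count, power bound, and function names are assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Hypothetical stand-in for a trained fitted-Q network mapping
# (state, action) -> Q value; weights are random for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(20, 32))  # state (10 dims) + action (10 set-points)
W2 = rng.normal(size=(32, 1))

def q_value(state, action):
    """Toy Q(s, a): one hidden tanh layer, scalar output."""
    x = np.concatenate([state, action])
    return float(np.tanh(x @ W1) @ W2)

def select_action(state, n_evse=10, p_max=11.0):
    """Pick continuous per-EVSE set-points (kW) that maximize Q(s, a)
    by minimizing -Q with SciPy's dual annealing global optimizer."""
    bounds = [(0.0, p_max)] * n_evse  # one charging set-point per EVSE
    result = dual_annealing(
        lambda a: -q_value(state, a),  # maximize Q <=> minimize -Q
        bounds=bounds,
        maxiter=100,
    )
    return result.x

state = rng.normal(size=10)
action = select_action(state)  # array of 10 set-points in [0, 11] kW
```

Because dual annealing only needs function evaluations, the Q-network never has to be differentiable with respect to the action, which is what makes this pairing with batch-trained fitted Q-iteration convenient.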
Date of Conference: 23-26 October 2023
Date Added to IEEE Xplore: 30 January 2024