Abstract
Scheduling when, where, and under what conditions to recharge an electric vehicle poses unique challenges absent in internal combustion vehicles. Time- and cost-efficient charging scheduling depends on many variables in a dynamic environment, such as the time-of-use electricity price and the availability of charging piles at a charging station. This paper presents an adaptive charging scheduling strategy that accounts for uncertainty in both the charging price and the availability of charging stations. We develop a Multiagent Rainbow Deep Q Network with Imparting Preference, in which one agent selects a charging station and the other determines the charging quantity. The imparting preference technique lets the two agents share experience and jointly learn a charging scheduling strategy for the vehicle en route. Real-world data is used to simulate the vehicle and to train the scheduler. The model is compared against two reinforcement learning-based benchmarks and a human-imitative charging scheduling strategy across four scenarios. Results indicate that the proposed model outperforms the existing approaches in terms of charging time, cost, and state-of-charge reserve assurance indices.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Lee, XL., Yang, HT., Tang, W., Toosi, A.N., Lam, E. (2021). An Adaptive Charging Scheduling for Electric Vehicles Using Multiagent Reinforcement Learning. In: Hacid, H., Kao, O., Mecella, M., Moha, N., Paik, Hy. (eds) Service-Oriented Computing. ICSOC 2021. Lecture Notes in Computer Science(), vol 13121. Springer, Cham. https://doi.org/10.1007/978-3-030-91431-8_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91430-1
Online ISBN: 978-3-030-91431-8
eBook Packages: Computer Science (R0)