
An Adaptive Charging Scheduling for Electric Vehicles Using Multiagent Reinforcement Learning

  • Conference paper
  • Published in: Service-Oriented Computing (ICSOC 2021)

Abstract

Scheduling when, where, and under what conditions to recharge an electric vehicle poses unique challenges absent in internal combustion vehicles. Time- and cost-efficient charging scheduling depends on many variables in a dynamic environment, such as the time-of-use price and the availability of charging piles at a charging station. This paper presents an adaptive charging scheduling strategy that accounts for the uncertainty in both the charging price and the availability of charging stations. We develop a Multiagent Rainbow Deep Q Network with Imparting Preference, in which one agent selects a charging station and the other determines the charging quantity. The imparting-preference technique lets the two agents share experience and learn a charging scheduling strategy for the vehicle en route. Real-world data is used to simulate the vehicle and to train the scheduler. The model is compared against two reinforcement learning-based benchmarks and a human-imitative charging scheduling strategy across four scenarios. Results indicate that the proposed model outperforms the existing approaches in terms of charging time, cost, and state-of-charge reserve assurance indices.
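The two-agent decomposition described above (one agent choosing a charging station, another choosing a charging quantity) can be illustrated with a minimal sketch. Note this is not the paper's model: the authors use a Multiagent Rainbow Deep Q Network with an imparting-preference mechanism, whereas the sketch below substitutes simple tabular epsilon-greedy Q-learning, and all station prices, wait times, and reward shaping here are made-up values for illustration only.

```python
import random

# Illustrative sketch only: two independent tabular Q-learners in the spirit
# of the abstract's decomposition (agent 1 picks a charging station, agent 2
# picks a charging quantity). States, prices, and waits are hypothetical.

STATIONS = [0, 1, 2]          # candidate charging stations
QUANTITIES = [10, 20, 30]     # candidate charging amounts (kWh)
PRICES = [0.12, 0.09, 0.20]   # made-up time-of-use price per station ($/kWh)
WAIT = [5, 30, 2]             # made-up expected queue time per station (min)

# Independent Q-tables keyed by (state, action); state 0/1 could stand for
# a low/high state-of-charge bucket.
q_station = {(s, a): 0.0 for s in range(2) for a in STATIONS}
q_amount = {(s, a): 0.0 for s in range(2) for a in QUANTITIES}

def greedy(q, state, actions, eps=0.1):
    """Epsilon-greedy action selection over a Q-table."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

random.seed(0)
alpha = 0.5
for episode in range(500):
    state = random.randint(0, 1)
    st = greedy(q_station, state, STATIONS)
    qty = greedy(q_amount, state, QUANTITIES)
    # Toy reward: value the energy gained, penalize cost and waiting.
    reward = qty * 0.5 - PRICES[st] * qty - WAIT[st] * 0.1
    # One-step (bandit-style) updates; a DQN would use full TD targets.
    q_station[(state, st)] += alpha * (reward - q_station[(state, st)])
    q_amount[(state, qty)] += alpha * (reward - q_amount[(state, qty)])

best_station = max(STATIONS, key=lambda a: q_station[(0, a)])
best_qty = max(QUANTITIES, key=lambda a: q_amount[(0, a)])
print("learned choice for state 0:", best_station, best_qty)
```

With these toy numbers the agents settle on the cheap low-wait station and the largest charge, since energy is valued above its price everywhere; the paper's imparting-preference mechanism additionally shares experience between the two agents, which this independent-learner sketch omits.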


Notes

  1. http://web.mta.info/developers/MTA-Bus-Time-historical-data.html
  2. https://forum.abetterrouteplanner.com/blogs/entry/22-tesla-model-3-performance-vs-rwd-consumption-real-driving-data-from-233-cars/
  3. https://data.dundeecity.gov.uk/dataset/ev-charging-data
  4. https://openchargemap.org/site
  5. https://www.taipower.com.tw/en/page.aspx?mid=317


Author information

Correspondence to Hong-Tzer Yang.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Lee, XL., Yang, HT., Tang, W., Toosi, A.N., Lam, E. (2021). An Adaptive Charging Scheduling for Electric Vehicles Using Multiagent Reinforcement Learning. In: Hacid, H., Kao, O., Mecella, M., Moha, N., Paik, Hy. (eds) Service-Oriented Computing. ICSOC 2021. Lecture Notes in Computer Science, vol. 13121. Springer, Cham. https://doi.org/10.1007/978-3-030-91431-8_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-91431-8_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91430-1

  • Online ISBN: 978-3-030-91431-8

  • eBook Packages: Computer Science, Computer Science (R0)
