
Benchmarking Deep Reinforcement Learning Based Energy Management Systems for Hybrid Electric Vehicles

  • Conference paper

Artificial Intelligence (CICAI 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13605)

Abstract

Energy management strategies (EMSs) are important for improving the fuel economy of hybrid electric vehicles (HEVs). Deep reinforcement learning (DRL) has seen a great surge of interest, with promising methods developed for HEV EMSs. As the field grows, it becomes critical to identify key architectures and to validate new ideas on new vehicle types and more complex EMS tasks. Unfortunately, reproducing results for state-of-the-art DRL-based EMSs is not an easy task. Without standard benchmarks and tighter standards of experimental reporting, it is difficult to determine whether an improvement is meaningful. This paper conducts an in-depth comparison of numerous deep reinforcement learning algorithms for EMSs. Two types of hybrid electric vehicles are considered: an HEV that uses a planetary gear set for power split, and a plug-in HEV. The main criteria for performance comparison are fuel consumption, battery state of charge, and overall system efficiency. Moreover, robustness, generality, and modeling difficulty, which are critical for machine learning-based models, are thoroughly evaluated and compared through elaborately devised experiments. Finally, we summarize state-of-the-art learning-based EMSs from various perspectives and highlight problems that remain open.
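To make the comparison criteria concrete, the following is a minimal, purely illustrative sketch (not the paper's implementation) of an HEV energy-management task framed as a reinforcement learning environment: the agent picks the fraction of power demand supplied by the engine, and the reward penalizes fuel use and deviation of battery state of charge (SOC) from a target. All dynamics, coefficients, and the rule-based baseline policy here are hypothetical placeholders.

```python
import random


class ToyHEVEnv:
    """Toy power-split HEV environment with fictitious dynamics."""

    def __init__(self, target_soc=0.6, seed=0):
        self.target_soc = target_soc
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.soc = 0.6  # battery state of charge in [0, 1]
        self.t = 0
        return self._obs()

    def _obs(self):
        # Normalized power demand drawn from a toy drive-cycle model.
        self.demand = 0.5 + 0.4 * self.rng.random()
        return (self.soc, self.demand)

    def step(self, engine_ratio):
        # engine_ratio in [0, 1]: fraction of demand met by the engine;
        # the battery supplies (or absorbs) the remainder.
        engine_ratio = min(max(engine_ratio, 0.0), 1.0)
        engine_power = engine_ratio * self.demand
        battery_power = self.demand - engine_power
        fuel = 0.8 * engine_power  # fictitious fuel-rate model
        self.soc = min(max(self.soc - 0.05 * battery_power, 0.0), 1.0)
        # Reward: minimize fuel while keeping SOC near its target,
        # mirroring the fuel-consumption and SOC comparison criteria.
        reward = -fuel - 2.0 * (self.soc - self.target_soc) ** 2
        self.t += 1
        done = self.t >= 100
        return self._obs(), reward, done


env = ToyHEVEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    soc, demand = obs
    # Naive rule-based baseline: lean on the engine when SOC is low.
    action = 0.8 if soc < env.target_soc else 0.4
    obs, r, done = env.step(action)
    total_reward += r
print(f"episode return of rule-based baseline: {total_reward:.2f}")
```

A DRL agent (e.g., DQN for a discretized action set, or DDPG/SAC/PPO for the continuous one) would replace the rule-based policy above, and different algorithms could then be benchmarked on the same environment by their episode returns, final SOC, and fuel totals.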



Author information

Corresponding author: Wu Yuankai.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yuankai, W., Renzong, L., Yong, W., Yi, L. (2022). Benchmarking Deep Reinforcement Learning Based Energy Management Systems for Hybrid Electric Vehicles. In: Fang, L., Povey, D., Zhai, G., Mei, T., Wang, R. (eds) Artificial Intelligence. CICAI 2022. Lecture Notes in Computer Science(), vol 13605. Springer, Cham. https://doi.org/10.1007/978-3-031-20500-2_50


  • DOI: https://doi.org/10.1007/978-3-031-20500-2_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20499-9

  • Online ISBN: 978-3-031-20500-2

  • eBook Packages: Computer Science (R0)
