DOI: 10.1145/3468891.3468898

A Deep Reinforcement Learning Method for Freight Train Driving Based on Domain Knowledge and Mass Estimation Network

Published: 06 September 2021

ABSTRACT

During train marshalling, the mass of a freight train changes dynamically over a wide range, which is the main difficulty in realizing automatic driving. This paper proposes a deep reinforcement learning method that combines domain knowledge with a mass estimation network (MEN). The domain knowledge of excellent drivers is utilized to accelerate the convergence of the algorithm and to improve driving performance. Furthermore, the MEN is introduced to estimate the mass of the entire train during driving. Finally, the deep reinforcement learning algorithm selects the output gear based on the estimated mass. Simulation results show that the proposed method significantly improves driving performance: it reduces the parking error, improves marshalling efficiency, optimizes the coupler force, and reduces jerk.
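To make the described architecture concrete, the sketch below shows one way the two components could be wired together: a mass estimation network that regresses total train mass from a window of recent driving measurements, feeding a Q-network that scores discrete gears on a mass-augmented state. This is not the authors' implementation; the layer sizes, input features, and number of gears are illustrative assumptions.

```python
# Minimal sketch (assumed structure, not the paper's code): an MEN that
# infers train mass from recent speed/acceleration/gear history, and a DQN
# whose state is augmented with the estimated mass so the agent selects a
# discrete traction/brake gear.
import torch
import torch.nn as nn


class MassEstimationNetwork(nn.Module):
    """Regresses total train mass from a window of driving measurements."""

    def __init__(self, history_dim: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # estimated mass (e.g. in tonnes)
        )

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        return self.net(history)


class GearQNetwork(nn.Module):
    """Q-network over discrete gears; input = driving state + mass estimate."""

    def __init__(self, state_dim: int = 4, n_gears: int = 11, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),  # +1 for mass estimate
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_gears),
        )

    def forward(self, state: torch.Tensor, mass_est: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, mass_est], dim=-1))


if __name__ == "__main__":
    men = MassEstimationNetwork()
    qnet = GearQNetwork()
    history = torch.randn(1, 12)   # recent speed/acceleration/gear samples
    state = torch.randn(1, 4)      # e.g. position, speed, gradient, speed limit
    mass_est = men(history)        # MEN output feeds the gear-selection policy
    gear = qnet(state, mass_est).argmax(dim=-1)
    print("selected gear index:", gear.item())
```

Decoupling mass estimation from gear selection in this way lets the policy condition on an explicit mass feature rather than having to infer it implicitly, which matches the paper's motivation that mass varies widely during marshalling.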


Published in

    ICMLT '21: Proceedings of the 2021 6th International Conference on Machine Learning Technologies
    April 2021
    183 pages
    ISBN: 9781450389402
    DOI: 10.1145/3468891

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • research-article
    • Research
    • Refereed limited
