DOI: 10.1145/3357384.3358079
Short Paper · Public Access

Learning Traffic Signal Control from Demonstrations

Published: 03 November 2019

Abstract

Reinforcement learning (RL) has recently become a promising approach for a variety of decision-making tasks, and traffic signal control is one area where it has achieved notable breakthroughs. However, RL methods for this task often suffer from a pronounced exploration problem and can even fail to converge. To resolve this issue, we draw an analogy between agents and humans: just as people master a skill by drawing on expert knowledge, agents can learn from demonstrations generated by traditional traffic signal control methods. We therefore propose DemoLight, which, for the first time, leverages demonstrations collected from classic methods to accelerate learning. Building on the state-of-the-art deep RL method Advantage Actor-Critic (A2C), both the actor and the critic are first trained on demonstrations, and reinforcement learning then follows for further improvement. Results on real-world datasets show that DemoLight explores more efficiently and outperforms existing baselines with faster convergence and better performance.
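
As described in the abstract, DemoLight trains in two stages: the A2C actor and critic are first fit to demonstrations from a classic controller, and standard actor-critic updates follow. The sketch below illustrates one way such a scheme could be wired up in PyTorch; the network sizes, loss weights, and the demo-loader and environment interfaces are illustrative assumptions rather than the authors' reference implementation.

```python
# Minimal sketch of a demonstration-pretrained A2C loop (illustrative only).
# Assumed interfaces: demo_loader yields (state, expert_action, return) batches
# from rollouts of a classic traffic signal controller; expert_action is a
# LongTensor of phase indices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    def __init__(self, state_dim, num_phases, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, num_phases)   # logits over signal phases
        self.critic = nn.Linear(hidden, 1)           # state-value estimate

    def forward(self, state):
        h = self.body(state)
        return self.actor(h), self.critic(h).squeeze(-1)

def pretrain_on_demos(model, demo_loader, optimizer, epochs=10):
    """Phase 1: imitate demonstration actions and regress demonstration returns."""
    for _ in range(epochs):
        for states, expert_actions, returns in demo_loader:
            logits, values = model(states)
            actor_loss = F.cross_entropy(logits, expert_actions)   # behavior cloning
            critic_loss = F.mse_loss(values, returns)               # value regression
            optimizer.zero_grad()
            (actor_loss + critic_loss).backward()
            optimizer.step()

def a2c_update(model, optimizer, states, actions, returns, entropy_coef=0.01):
    """Phase 2: standard advantage actor-critic update on environment rollouts."""
    logits, values = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    advantages = returns - values.detach()
    policy_loss = -(dist.log_prob(actions) * advantages).mean()
    value_loss = F.mse_loss(values, returns)
    entropy_bonus = dist.entropy().mean()
    loss = policy_loss + 0.5 * value_loss - entropy_coef * entropy_bonus
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, pre-training gives the actor a sensible initial policy over signal phases and the critic a reasonable value scale before any environment interaction, so the subsequent RL phase starts from informed rather than random exploration.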

Published In

CIKM '19: Proceedings of the 28th ACM International Conference on Information and Knowledge Management
November 2019
3373 pages
ISBN: 9781450369763
DOI: 10.1145/3357384

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. reinforcement learning
  2. traffic signal control

Qualifiers

  • Short-paper

Conference

CIKM '19

Acceptance Rates

CIKM '19 Paper Acceptance Rate: 202 of 1,031 submissions (20%)
Overall Acceptance Rate: 1,861 of 8,427 submissions (22%)

Article Metrics

  • Downloads (Last 12 months): 275
  • Downloads (Last 6 weeks): 22
Reflects downloads up to 10 Feb 2025

Cited By

  • (2024) Enhancing the Robustness of Traffic Signal Control with StageLight: A Multiscale Learning Approach. Eng 5(1), 104-115. DOI: 10.3390/eng5010007. Online publication date: 8-Jan-2024
  • (2024) GeoGail: A Model-Based Imitation Learning Framework for Human Trajectory Synthesizing. ACM Transactions on Knowledge Discovery from Data 19(1), 1-23. DOI: 10.1145/3699961. Online publication date: 14-Oct-2024
  • (2024) UniTSA: A Universal Reinforcement Learning Framework for V2X Traffic Signal Control. IEEE Transactions on Vehicular Technology 73(10), 14354-14369. DOI: 10.1109/TVT.2024.3403879. Online publication date: Oct-2024
  • (2024) SpeedAdv: Enabling Green Light Optimized Speed Advisory for Diverse Traffic Lights. IEEE Transactions on Mobile Computing, 1-14. DOI: 10.1109/TMC.2023.3319697. Online publication date: 2024
  • (2024) FELight: Fairness-Aware Traffic Signal Control via Sample-Efficient Reinforcement Learning. IEEE Transactions on Knowledge and Data Engineering 36(9), 4678-4692. DOI: 10.1109/TKDE.2024.3376745. Online publication date: Sep-2024
  • (2024) Reinforcement Learning for Traffic Signal Control in Hybrid Action Space. IEEE Transactions on Intelligent Transportation Systems 25(6), 5225-5241. DOI: 10.1109/TITS.2023.3344585. Online publication date: Jun-2024
  • (2024) MixLight: Mixed-Agent Cooperative Reinforcement Learning for Traffic Light Control. IEEE Transactions on Industrial Informatics 20(2), 2653-2661. DOI: 10.1109/TII.2023.3296910. Online publication date: Feb-2024
  • (2024) Model-Based Graph Reinforcement Learning for Inductive Traffic Signal Control. IEEE Open Journal of Intelligent Transportation Systems 5, 238-250. DOI: 10.1109/OJITS.2024.3376583. Online publication date: 2024
  • (2024) Demonstration-guided deep reinforcement learning for coordinated ramp metering and perimeter control in large scale networks. Transportation Research Part C: Emerging Technologies 159, 104461. DOI: 10.1016/j.trc.2023.104461. Online publication date: Feb-2024
  • (2024) Network multiscale urban traffic control with mixed traffic flow. Transportation Research Part B: Methodological 185, 102963. DOI: 10.1016/j.trb.2024.102963. Online publication date: Jul-2024
