Learning Cooperative Max-Pressure Control by Leveraging Downstream Intersections Information for Traffic Signal Control

  • Conference paper

Web and Big Data (APWeb-WAIM 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12859)

Abstract

Traffic signal control at urban intersections is a critical problem. Recently, deep reinforcement learning has demonstrated impressive performance in controlling traffic signals. However, the design of its state and reward functions is often heuristic, which leads to highly vulnerable performance. To address this problem, some studies introduce transportation theory, e.g., max-pressure control, into deep reinforcement learning to guide the design of the reward function, and these approaches have yielded promising performance. We argue that the constantly changing pressure of an intersection can be better represented by taking downstream neighboring intersections into account. In this paper, we propose CMPLight, a deep reinforcement learning traffic signal control approach with a novel cooperative max-pressure-based reward function that leverages the vehicle queue information of neighboring intersections. The approach employs cooperative max-pressure to guide the design of the reward function in deep reinforcement learning. We theoretically prove that it is stabilizing when the average traffic demand is admissible and the traffic flow in the road network is stable. The state of deep reinforcement learning is enhanced with neighboring information, which helps the agent learn a more detailed representation of the traffic environment. Extensive experiments are conducted on synthetic and real-world datasets. The results demonstrate that our approach outperforms traditional heuristic transportation control approaches and state-of-the-art learning-based approaches in terms of the average travel time of all vehicles in the road network.
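The abstract does not give the exact reward formulation, so the following is a minimal Python sketch of what a cooperative max-pressure-style reward could look like: the classic pressure of an intersection (incoming minus outgoing queue lengths, summed over movements) augmented with a discounted term for the pressure of downstream neighboring intersections, with the reward defined as the negative of that quantity. The names (`cooperative_pressure`) and the coefficient `alpha` are illustrative assumptions, not the paper's notation.

```python
# Sketch of a cooperative max-pressure reward, assuming a simple additive
# coupling with downstream neighbours; the paper's actual equations may differ.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Movement:
    """One traffic movement: an incoming lane feeding an outgoing lane."""
    in_lane: str
    out_lane: str


@dataclass
class Intersection:
    movements: List[Movement]
    # Queue length (number of waiting vehicles) per lane id.
    queues: Dict[str, int] = field(default_factory=dict)


def pressure(inter: Intersection) -> float:
    """Classic max-pressure: sum over movements of (incoming - outgoing) queue length."""
    return sum(
        inter.queues.get(m.in_lane, 0) - inter.queues.get(m.out_lane, 0)
        for m in inter.movements
    )


def cooperative_pressure(inter: Intersection,
                         downstream: List[Intersection],
                         alpha: float = 0.5) -> float:
    """Local pressure plus a discounted sum of downstream neighbours' pressures.

    `alpha` is a hypothetical discount factor used only for illustration.
    """
    return pressure(inter) + alpha * sum(pressure(d) for d in downstream)


def reward(inter: Intersection, downstream: List[Intersection]) -> float:
    """RL reward: negative cooperative pressure, so reducing both the local and
    the downstream queue imbalance is encouraged."""
    return -cooperative_pressure(inter, downstream)
```

As a usage example, an agent controlling one intersection would call `reward(inter, downstream)` after each signal phase, where `downstream` holds the neighboring intersections fed by its outgoing lanes; under this sketch, clearing local queues into already congested neighbors yields a smaller reward than doing so when the neighbors are free-flowing.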


Notes

  1. Open source, https://traffic-signal-control.github.io.

  2. https://github.com/wingsweihua/presslight.

  3. https://cityflow.readthedocs.io/en/latest/index.html.


Author information

Corresponding author

Correspondence to Lin Li.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Peng, Y., Li, L., Xie, Q., Tao, X. (2021). Learning Cooperative Max-Pressure Control by Leveraging Downstream Intersections Information for Traffic Signal Control. In: U, L.H., Spaniol, M., Sakurai, Y., Chen, J. (eds) Web and Big Data. APWeb-WAIM 2021. Lecture Notes in Computer Science(), vol 12859. Springer, Cham. https://doi.org/10.1007/978-3-030-85899-5_29

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-85899-5_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85898-8

  • Online ISBN: 978-3-030-85899-5

  • eBook Packages: Computer Science (R0)
