
Large-Scale Traffic Grid Signal Control Using Decentralized Fuzzy Reinforcement Learning

  • Conference paper
  • In: Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016 (IntelliSys 2016)
  • Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 15)

Abstract

Rapid urbanization around the world has brought a significant increase in traffic congestion, with serious adverse effects both on quality of daily life at the individual level and on nations’ economic growth. The importance of traffic congestion management is therefore well recognized, and adaptive real-time traffic signal control is an effective tool for it. In particular, adaptive control with reinforcement learning (RL) is a promising technique that has recently been introduced in the field. Most prior studies on traffic signal control have used centralized reinforcement learning, whose computational inefficiency prevents it from scaling to large traffic networks. In this paper, we propose a computationally cost-effective distributed algorithm, namely a decentralized fuzzy reinforcement learning approach, to address the exponentially growing number of possible states and actions in RL models for large-scale traffic signal control networks. More specifically, the traffic density at each intersection is first mapped to four fuzzy sets (low, medium, high, and extremely high). Next, two kinds of algorithms, greedy and neighborhood approximate Q-learning (NAQL), are adaptively selected based on the real-time fuzzified congestion levels. To further reduce computational costs and the number of state-action pairs in the RL model, coordination and communication for NAQL are confined to a single neighborhood, i.e., the controlled intersection together with its immediate neighbor intersections. Finally, we conduct several numerical experiments to verify the efficiency and effectiveness of our approach.
The results demonstrate that the decentralized fuzzy reinforcement learning algorithm achieves results comparable to traditional heuristic-based algorithms while generating control rules that adapt better to the underlying dynamics of large-scale traffic networks. The proposed approach thus sheds new light on how to further improve networked traffic signal control systems for real-time traffic congestion.
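The fuzzification and algorithm-selection steps described in the abstract can be sketched in code. This is an illustrative sketch, not the authors' implementation: the membership-function breakpoints, the normalization of density to [0, 1], and the rule that light traffic triggers greedy control while heavy traffic triggers NAQL are all assumptions made for illustration; the paper's exact functions and switching rule may differ.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed fuzzy sets over a normalized traffic density in [0, 1].
# Breakpoints are hypothetical; the paper only names the four sets.
FUZZY_SETS = {
    "low":            lambda d: max(0.0, 1.0 - d / 0.3),
    "medium":         lambda d: triangular(d, 0.1, 0.35, 0.6),
    "high":           lambda d: triangular(d, 0.4, 0.65, 0.9),
    "extremely_high": lambda d: min(1.0, max(0.0, (d - 0.6) / 0.3)),
}

def fuzzify(density):
    """Map one intersection's normalized density to membership degrees
    in each of the four fuzzy congestion sets."""
    return {name: mu(density) for name, mu in FUZZY_SETS.items()}

def select_algorithm(density):
    """Adaptively pick a controller from the fuzzified congestion level:
    greedy for light traffic, NAQL for heavy traffic (an assumed rule)."""
    memberships = fuzzify(density)
    dominant = max(memberships, key=memberships.get)
    return "greedy" if dominant in ("low", "medium") else "NAQL"

print(select_algorithm(0.2))  # light traffic  -> greedy
print(select_algorithm(0.8))  # heavy traffic  -> NAQL
```

Under this sketch, each intersection decides locally which controller to run, which matches the decentralized design: no global state is consulted, and only the NAQL branch would additionally read its immediate neighbors' observations.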



Author information

Corresponding author: Tian Tan


Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Tan, T., Chu, T., Peng, B., Wang, J. (2018). Large-Scale Traffic Grid Signal Control Using Decentralized Fuzzy Reinforcement Learning. In: Bi, Y., Kapoor, S., Bhatia, R. (eds) Proceedings of SAI Intelligent Systems Conference (IntelliSys) 2016. IntelliSys 2016. Lecture Notes in Networks and Systems, vol 15. Springer, Cham. https://doi.org/10.1007/978-3-319-56994-9_44

  • DOI: https://doi.org/10.1007/978-3-319-56994-9_44

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-56993-2

  • Online ISBN: 978-3-319-56994-9

  • eBook Packages: Engineering (R0)
