Abstract
This paper explores advances in intelligent traffic light management, focusing on the control of intersection opening (green) times. The decision-making process is influenced by factors such as traffic density, and the information for these decisions is gathered from sensors placed on the streets, whose accuracy can vary. The collected data are processed to aid control agents in decision-making. The paper proposes an intersection control algorithm that operates under the assumption that sensorisation may be lacking. To balance raw sensor data, control nodes implement a reinforcement learning algorithm that selects the most suitable combination of sensors to improve traffic parameters. The paper also introduces a method for calculating traffic density by combining data from imprecise sensors. This research contributes to intelligent traffic management by providing a novel approach to intersection control and traffic density calculation.
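As a rough illustration of the approach summarised above, the sketch below assumes a tabular Q-learning agent whose actions are candidate sensor combinations and whose reward reflects an improvement in a measured traffic parameter (e.g. a shorter queue). The sensor names, state discretisation, and density-fusion rule are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch, not the authors' algorithm: a tabular Q-learning agent
# (Watkins & Dayan) that picks which combination of imprecise sensors to
# trust, and a simple weighted average standing in for the density fusion.
import random
from collections import defaultdict
from itertools import combinations

SENSORS = ["loop_north", "loop_south", "camera_east", "radar_west"]  # hypothetical names

# Actions: every non-empty combination of sensors the control node may select.
ACTIONS = [c for r in range(1, len(SENSORS) + 1) for c in combinations(SENSORS, r)]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters
Q = defaultdict(float)                 # Q[(state, action_index)]

def fuse_density(readings, weights):
    """Combine imprecise sensor readings (dicts keyed by sensor name)
    into one density estimate via a weight-normalised average."""
    total_w = sum(weights[s] for s in readings)
    return sum(readings[s] * weights[s] for s in readings) / total_w

def choose_action(state):
    """Epsilon-greedy choice of a sensor combination for the given state."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example step with made-up values: the state could be a discretised density level,
# and the reward the (negative) queue length observed after applying the choice.
state = "low_density"
a = choose_action(state)
update(state, a, reward=-3.0, next_state="medium_density")
```

In this sketch the selected combination only determines which readings feed the fusion step; how the fused density then drives the green-time decision is left out, since the abstract does not specify it.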
Acknowledgements
Work supported by the Spanish Ministry of Science and Innovation (MICINN), project CICYT PRECON-I4: “Predictable and reliable information systems for Industry 4.0” (TIN2017-86520-C3-1-R).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Uribe-Chavert, P., López-Cuerva, L., Posadas-Yagüe, J.L., Poza-Lujan, J.L. (2025). Adaptive Traffic Light Control Through Reinforcement Learning Based on Sensor Integration. In: Chinthaginjala, R., Sitek, P., Min-Allah, N., Matsui, K., Ossowski, S., Rodríguez, S. (eds) Distributed Computing and Artificial Intelligence, 21st International Conference. DCAI 2024. Lecture Notes in Networks and Systems, vol 1259. Springer, Cham. https://doi.org/10.1007/978-3-031-82073-1_1
Print ISBN: 978-3-031-82072-4
Online ISBN: 978-3-031-82073-1
eBook Packages: Intelligent Technologies and Robotics (R0)