Abstract
The increasing demand for mobility in our society poses various challenges to traffic engineering, computer science in general, and artificial intelligence and multiagent systems in particular. As is often the case, it is not possible to provide additional capacity, so a more efficient use of the available transportation infrastructure is necessary. This relates closely to multiagent systems, as many problems in traffic management and control are inherently distributed. Moreover, many actors in a transportation system fit the concept of an autonomous agent very well: the driver, the pedestrian, the traffic expert; in some cases, the intersection and the traffic signal controller can also be regarded as autonomous agents. However, the “agentification” of a transportation system raises some challenging issues: the number of agents is high; agents are typically highly adaptive; they react to changes in the environment at the individual level but produce unpredictable collective patterns; and they act in a highly coupled environment. This domain therefore poses many challenges for standard techniques from multiagent systems, such as coordination and learning. This paper has two main objectives: (i) to present problems, methods, approaches and practices in traffic engineering (especially regarding traffic signal control); and (ii) to highlight open problems and challenges so that future research in multiagent systems can address them.
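As a minimal illustration of the kind of learning technique the paper surveys (not the author's own formulation), a signal-controller agent could, for instance, apply the standard Q-learning update of Watkins and Dayan, where the state s might encode queue lengths at the intersection and the action a the signal plan to run next (state and action definitions here are assumptions for illustration only):

$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

In a multiagent setting, each intersection updating such a table independently is exactly where the coordination and non-stationarity issues mentioned above arise, since the other controllers keep changing their policies.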
Cite this article
Bazzan, A.L.C. Opportunities for multiagent systems and multiagent reinforcement learning in traffic control. Auton Agent Multi-Agent Syst 18, 342–375 (2009). https://doi.org/10.1007/s10458-008-9062-9