Opportunities for multiagent systems and multiagent reinforcement learning in traffic control

Abstract

The increasing demand for mobility in our society poses various challenges to traffic engineering, to computer science in general, and to artificial intelligence and multiagent systems in particular. As is often the case, additional capacity cannot simply be provided, so the available transportation infrastructure must be used more efficiently. This relates closely to multiagent systems, as many problems in traffic management and control are inherently distributed. Moreover, many actors in a transportation system fit the concept of an autonomous agent very well: the driver, the pedestrian, the traffic expert and, in some cases, even the intersection and the traffic signal controller. However, this “agentification” of a transportation system raises challenging issues: the number of agents is high; agents are typically highly adaptive; they react to changes in the environment at the individual level yet produce unpredictable collective patterns; and they act in a highly coupled environment. The domain therefore poses many challenges for standard multiagent techniques such as coordination and learning. This paper has two main objectives: (i) to present problems, methods, approaches, and practices in traffic engineering (especially regarding traffic signal control); and (ii) to highlight open problems and challenges so that future research in multiagent systems can address them.
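
To make the multiagent reinforcement learning framing concrete, the sketch below shows one common, deliberately simplified instantiation of the idea of treating each traffic signal controller as a learning agent: an independent tabular Q-learning agent per intersection that chooses which signal phase receives green, observes discretized queue lengths, and is rewarded with the negative total queue. This is an illustrative toy under assumptions of my own (the class names, the state discretization, and the single-intersection simulation are hypothetical), not the specific algorithm of the paper or of any cited work.

```python
# Minimal sketch (assumptions, not the paper's method): one intersection modeled
# as an independent tabular Q-learning agent over signal phases.
import random
from collections import defaultdict

class SignalAgent:
    """One intersection controller learning with tabular Q-learning."""
    def __init__(self, n_phases, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.n_phases = n_phases
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_phases)  # state -> action values

    def act(self, state):
        # epsilon-greedy selection over signal phases
        if random.random() < self.epsilon:
            return random.randrange(self.n_phases)
        values = self.q[state]
        return values.index(max(values))

    def learn(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

def discretize(queues, bins=(2, 5)):
    # map raw queue lengths to coarse levels (0: low, 1: medium, 2: high)
    return tuple(sum(q > b for b in bins) for q in queues)

def step(queues, green_phase, arrival_rate=0.4, service=3):
    # toy single-intersection dynamics: random arrivals, the green approach is served
    queues = list(queues)
    for i in range(len(queues)):
        queues[i] += 1 if random.random() < arrival_rate else 0
    queues[green_phase] = max(0, queues[green_phase] - service)
    reward = -sum(queues)  # fewer waiting vehicles is better
    return queues, reward

if __name__ == "__main__":
    random.seed(0)
    agent = SignalAgent(n_phases=2)
    queues = [0, 0]
    state = discretize(queues)
    for t in range(10000):
        action = agent.act(state)
        queues, reward = step(queues, action)
        next_state = discretize(queues)
        agent.learn(state, action, reward, next_state)
        state = next_state
    print("learned Q-table size:", len(agent.q))
```

In a network of such agents, each intersection would run this loop independently; the coordination and non-stationarity problems highlighted in the abstract arise precisely because every agent's environment then includes the other adapting agents.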


Cite this article

Bazzan, A.L.C. Opportunities for multiagent systems and multiagent reinforcement learning in traffic control. Auton Agent Multi-Agent Syst 18, 342–375 (2009). https://doi.org/10.1007/s10458-008-9062-9
