
Evolution of coordination as a metaphor for learning in multi-agent systems

  • Learning, Cooperation and Competition
  • Conference paper
Distributed Artificial Intelligence Meets Machine Learning: Learning in Multi-Agent Environments (LDAIS 1996, LIOME 1996)

Abstract

In societies of individually motivated agents where communication costs are prohibitive, other mechanisms are needed to allow coordination, for instance choosing an equilibrium point. Such a point has the property that, once all agents choose it, no agent can increase its utility by unilaterally deviating from the joint decision. Hence, if this fact is common knowledge among the agents, an introspective reasoning process leads them to coordinate with low communication requirements. Game theory offers a mathematical formalism for modelling such interactions among agents. Classical game theory assumes that agents, or players, are always rational, which is a strong assumption for bounded-rational agents; moreover, agents do not profit from the results of past interactions. In the evolutionary approach to game theory, on the other hand, agents need not have full knowledge of the rules of the game. Instead, they are engaged in a learning process through active experimentation, in which they may reach an equilibrium by playing repeatedly with their neighbors. This evolutionary approach is used here to model the interactions in a network of autonomous agents in the domain of traffic-signal control. The game is one of pure coordination with incomplete information. The dynamics of the game are determined by stochastic events which affect the traffic patterns, forcing agents to adapt by altering their current strategies. Owing to such perturbations, the equilibrium of the system eventually changes, as does the behavior of the agents. If the game lasts long enough, agents can asymptotically learn how to re-coordinate their strategies and reach the global goal.
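The kind of dynamics the abstract describes can be sketched with a minimal simulation: agents on a grid play a two-action pure coordination game with their neighbors, imitate the best-scoring local player, and occasionally explore. This is a hypothetical illustration only, not the paper's actual model; the grid topology, payoff, imitation rule, and exploration rate `epsilon` are all illustrative assumptions.

```python
import random

def simulate(n=10, rounds=200, epsilon=0.02, seed=42):
    """Illustrative sketch (not the paper's model): agents on an n x n
    torus each hold one of two strategies, earn one point per neighbor
    using the same strategy (pure coordination), and imitate the
    best-scoring player among themselves and their four neighbors.
    epsilon is a small exploration (mutation) rate."""
    rng = random.Random(seed)
    grid = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

    def neighbors(i, j):
        # Four nearest neighbors with wrap-around (torus).
        return [((i - 1) % n, j), ((i + 1) % n, j),
                (i, (j - 1) % n), (i, (j + 1) % n)]

    def payoff(i, j):
        # Coordination payoff: count of neighbors playing the same strategy.
        return sum(grid[x][y] == grid[i][j] for x, y in neighbors(i, j))

    for _ in range(rounds):
        scores = [[payoff(i, j) for j in range(n)] for i in range(n)]
        new = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # Adopt the strategy of the best-scoring local player
                # (ties favor keeping one's own strategy), then mutate
                # with small probability to model stochastic perturbation.
                best = max([(i, j)] + neighbors(i, j),
                           key=lambda p: scores[p[0]][p[1]])
                s = grid[best[0]][best[1]]
                new[i][j] = 1 - s if rng.random() < epsilon else s
        grid = new

    ones = sum(sum(row) for row in grid)
    # Share of agents playing the majority strategy (0.5 = no
    # coordination, 1.0 = full coordination).
    return max(ones, n * n - ones) / (n * n)

print(simulate())
```

With the exploration noise switched off the imitation dynamics freeze once local clusters form; with it, the population keeps drifting and can re-coordinate after perturbations, which is the qualitative point the abstract makes.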

The author is supported by Conselho Nacional de Pesquisa Científica e Tecnológica (CNPq), Brasil.





Editor information

Gerhard Weiß


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bazzan, A.L.C. (1997). Evolution of coordination as a metaphor for learning in multi-agent systems. In: Weiß, G. (ed.) Distributed Artificial Intelligence Meets Machine Learning: Learning in Multi-Agent Environments. LDAIS/LIOME 1996. Lecture Notes in Computer Science, vol 1221. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62934-3_45


  • DOI: https://doi.org/10.1007/3-540-62934-3_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62934-4

  • Online ISBN: 978-3-540-69050-4

  • eBook Packages: Springer Book Archive
