
Mutually supervised learning in multiagent systems

  • Workshop Contributions
  • Conference paper
Adaption and Learning in Multi-Agent Systems (IJCAI 1995)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1042)


Abstract

Learning in a multiagent environment can help agents improve their performance. In meeting with other agents, an agent can learn about its partner's knowledge and strategic behavior. Agents that operate in dynamic environments can then react to unexpected events by generalizing what they learned during a training stage.

In this paper, we propose several learning rules for agents in a multiagent environment. Each agent acts as the teacher of its partner. The agents are trained by receiving examples from a sample space; they then go through a generalization step during which they have to apply the concept they have learned from their instructor.
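This train-then-generalize protocol can be sketched minimally. In the sketch below, each agent's concept is a hypothetical one-dimensional threshold rule, chosen purely for illustration (the agent and method names, and the threshold representation, are assumptions, not the paper's actual learning rules): each agent labels sample-space examples as the teacher of its partner, and each learner then applies its learned concept to unseen examples.

```python
import random

class Agent:
    """An agent that teaches its own concept and learns its partner's.

    The 'concept' is a hypothetical 1-D threshold rule used only for
    illustration; the paper's learning rules are more general.
    """
    def __init__(self, threshold):
        self.threshold = threshold  # the concept this agent teaches
        self.learned = None         # estimate of the partner's concept

    def label(self, x):
        """As teacher: classify an example from the sample space."""
        return x >= self.threshold

    def train_from(self, teacher, samples):
        """As learner: fit a threshold consistent with the teacher's labels."""
        positives = [x for x in samples if teacher.label(x)]
        self.learned = min(positives) if positives else 1.0

    def generalize(self, x):
        """Generalization step: apply the learned concept to an unseen example."""
        return x >= self.learned

random.seed(0)
a, b = Agent(0.3), Agent(0.7)
train = [random.random() for _ in range(200)]

# Mutual supervision: each agent acts as the teacher of its partner.
a.train_from(b, train)
b.train_from(a, train)

# Generalization step on fresh examples, scored against the teacher's concept.
test = [random.random() for _ in range(100)]
acc_a = sum(a.generalize(x) == b.label(x) for x in test) / len(test)
acc_b = sum(b.generalize(x) == a.label(x) for x in test) / len(test)
print(acc_a, acc_b)
```

With enough training examples, each learner's threshold lands close to its teacher's, so generalization accuracy on unseen examples is high.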

Agents that learn from each other can sometimes avoid coordinating their actions from scratch each time a similar problem arises; by applying learned coordination concepts, they may be able to avoid communication at run time altogether.
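One simple reading of this reuse idea, assuming coordination problems can be keyed by a recognizable problem class (the cache, the `negotiate` stub, and the class names below are all hypothetical illustrations, not the paper's mechanism), is a cache of learned coordination concepts consulted before any run-time negotiation:

```python
# Hypothetical store of learned coordination concepts: once the agents have
# learned which joint action a class of problems requires, similar problems
# can be resolved without run-time communication.
coordination_cache = {}

def coordinate(problem_class, negotiate):
    """Return a joint action, negotiating only on a cache miss."""
    if problem_class in coordination_cache:
        return coordination_cache[problem_class]      # no communication needed
    joint_action = negotiate(problem_class)           # costly run-time coordination
    coordination_cache[problem_class] = joint_action  # remember for similar problems
    return joint_action

calls = []
def negotiate(pc):
    calls.append(pc)  # stand-in for a message exchange between the agents
    return f"plan-for-{pc}"

coordinate("narrow-corridor", negotiate)
coordinate("narrow-corridor", negotiate)  # second time: resolved from the cache
print(len(calls))  # → 1
```

Only the first occurrence of the problem class triggers communication; subsequent similar problems are resolved from the learned concept.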



Editor information

Gerhard Weiß, Sandip Sen


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Goldman, C.V., Rosenschein, J.S. (1996). Mutually supervised learning in multiagent systems. In: Weiß, G., Sen, S. (eds) Adaption and Learning in Multi-Agent Systems. IJCAI 1995. Lecture Notes in Computer Science, vol 1042. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60923-7_20


  • DOI: https://doi.org/10.1007/3-540-60923-7_20


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60923-0

  • Online ISBN: 978-3-540-49726-4

  • eBook Packages: Springer Book Archive
