Multi-agent Learning: How to Interact to Improve Collective Results

  • Conference paper
Progress in Artificial Intelligence (EPIA 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4874)

Abstract

The evolution from individual to collective learning opens a new dimension of solutions for problems that call for gradual adaptation in dynamic and unpredictable environments. A team of individuals has the potential to outperform any sum of isolated efforts, and that potential is realized when a good system of interaction is in place. In this paper, we describe two forms of cooperation that enable multi-agent learning: the sharing of partial results obtained during the learning activity, and social adaptation to the stages of collective learning. We consider different ways of sharing information and different options for social reconfiguration, and apply them to the same learning problem. The results show the effects of cooperation and help to put important properties of the collective learning activity into perspective.
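As a minimal illustrative sketch (not the authors' implementation), the first form of cooperation, sharing partial results during learning, could be realized by Q-learning agents that periodically adopt each other's most-practiced value estimates. The class name, parameters, and merge rule below are assumptions made for illustration only.

    import random
    from collections import defaultdict

    class SharingQAgent:
        """Q-learning agent that can exchange partial results with peers.
        Hypothetical sketch: names, parameters, and the merge rule are assumptions,
        not the method evaluated in the paper."""

        def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)      # (state, action) -> value estimate
            self.visits = defaultdict(int)   # (state, action) -> number of updates
            self.actions = list(actions)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def act(self, state):
            # epsilon-greedy choice over the agent's own estimates
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward, next_state):
            # standard one-step Q-learning update from the agent's own experience
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
            self.visits[(state, action)] += 1

        def share_with(self, peers):
            # one possible form of "sharing partial results": adopt a peer's
            # estimate for any state-action pair the peer has updated more often
            for peer in peers:
                for key, value in peer.q.items():
                    if peer.visits[key] > self.visits[key]:
                        self.q[key] = value
                        self.visits[key] = peer.visits[key]

Under this sketch, agents would call share_with() every few episodes; the paper instead compares several such information-sharing schemes and options for social reconfiguration empirically on a common learning problem.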



Editor information

José Neves, Manuel Filipe Santos, José Manuel Machado

Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rafael, P., Neto, J.P. (2007). Multi-agent Learning: How to Interact to Improve Collective Results. In: Neves, J., Santos, M.F., Machado, J.M. (eds) Progress in Artificial Intelligence. EPIA 2007. Lecture Notes in Computer Science (LNAI), vol 4874. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-77002-2_48

  • DOI: https://doi.org/10.1007/978-3-540-77002-2_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-77000-8

  • Online ISBN: 978-3-540-77002-2

  • eBook Packages: Computer Science, Computer Science (R0)
