
Improving Evolutionary Learning of Cooperative Behavior by Including Accountability of Strategy Components

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 2831))

Abstract

We present an improvement to evolutionary learning of cooperative behavior that incorporates an accountability measure for strategy components into the evolutionary learning process. Our evolutionary approach is based on evolving sets of prototypical situation-action pairs (strategies) that, together with the nearest-neighbor rule, represent the decision making of our agents. The basic idea of our improvement is to collect data for each pair recording the results of its applications. When constructing new sets of pairs for our strategies, we then choose those pairs in the parent strategies that produced positive results.
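The representation and selection scheme described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: all names, the Euclidean situation metric, and the 0.5 success-rate threshold are our assumptions.

```python
import random

class SAPair:
    """One prototypical situation-action pair of a strategy (hypothetical class)."""
    def __init__(self, situation, action):
        self.situation = situation   # feature vector describing a situation
        self.action = action         # action to apply in similar situations
        self.successes = 0           # accountability data: applications with positive results
        self.uses = 0                # accountability data: total applications

    def record(self, success):
        # Collect accountability data each time the pair is applied.
        self.uses += 1
        if success:
            self.successes += 1

def distance(a, b):
    # Euclidean distance between situation vectors (an assumption; the
    # approach only requires some nearest-neighbor metric).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decide(strategy, situation):
    # Nearest-neighbor rule: act according to the closest prototype.
    return min(strategy, key=lambda p: distance(p.situation, situation))

def recombine(parent_a, parent_b, size):
    # Accountability-aware construction of a child strategy: prefer pairs
    # from the parents whose recorded results were positive (simplified;
    # the 0.5 threshold is an illustrative assumption).
    pool = parent_a + parent_b
    good = [p for p in pool if p.uses > 0 and p.successes / p.uses > 0.5]
    rest = [p for p in pool if p not in good]
    chosen = good[:size]
    while len(chosen) < size and rest:
        chosen.append(rest.pop(random.randrange(len(rest))))
    return [SAPair(p.situation, p.action) for p in chosen]
```

Tracking per-pair outcomes this way lets recombination favor components that demonstrably contributed to success, rather than treating a whole parent strategy as uniformly good or bad.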

Our experiments within the OLEMAS system show that incorporating accountability yields substantial improvements in both on- and off-line learning compared to the basic evolutionary approach. In nearly all experiments, the agent teams either required less learning time or found better strategies; in many cases, both.





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Denzinger, J., Ennis, S. (2003). Improving Evolutionary Learning of Cooperative Behavior by Including Accountability of Strategy Components. In: Schillo, M., Klusch, M., Müller, J., Tianfield, H. (eds) Multiagent System Technologies. MATES 2003. Lecture Notes in Computer Science(), vol 2831. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39869-1_18

  • DOI: https://doi.org/10.1007/978-3-540-39869-1_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20124-3

  • Online ISBN: 978-3-540-39869-1
