DOI: 10.1145/1160633.1160764
Article

Learning the required number of agents for complex tasks

Published: 08 May 2006

Abstract

Coordinating agents in a complex environment is a hard problem, and it becomes even harder when certain characteristics of the tasks, such as the required number of agents, are unknown. In those settings, agents not only have to coordinate themselves on the different tasks, but they also have to learn how many agents each task requires. To achieve this, we have developed a selective perception reinforcement learning algorithm that enables agents to learn the required number of agents. Even though the task description contains continuous variables, the agents were able to learn their expected reward as a function of the task description and the number of agents. The results, obtained in the RoboCupRescue simulation, show an improvement in the agents' overall performance.
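The core idea in the abstract can be illustrated with a minimal sketch. The paper's actual algorithm is a selective-perception method in the spirit of McCallum's U-Tree, which learns which distinctions over the continuous task description matter; in the sketch below, a fixed bucketing of the continuous features stands in for those learned distinctions, and a simple tabular average-reward learner estimates the expected reward of assigning n agents to a task. All names, features, and thresholds here are hypothetical illustrations, not the paper's implementation.

```python
import random
from collections import defaultdict

class SelectivePerceptionLearner:
    """Sketch: learn the expected reward of assigning n agents to a task
    described by continuous features. Fixed-width bucketing of the
    features stands in for the learned perceptual distinctions of a
    selective-perception (U-Tree-style) algorithm."""

    def __init__(self, bucket_size, max_agents):
        self.bucket_size = bucket_size
        self.max_agents = max_agents
        self.sum_r = defaultdict(float)   # cumulative reward per (state, n)
        self.count = defaultdict(int)     # visit count per (state, n)

    def _state(self, features):
        # Coarse discretization of the continuous task description.
        return tuple(int(f // self.bucket_size) for f in features)

    def choose(self, features, epsilon=0.1):
        # Epsilon-greedy choice of the number of agents to commit.
        s = self._state(features)
        if random.random() < epsilon:
            return random.randint(1, self.max_agents)
        return max(range(1, self.max_agents + 1),
                   key=lambda n: (self.sum_r[s, n] / self.count[s, n])
                   if self.count[s, n] else 0.0)

    def update(self, features, n, reward):
        # Record the reward observed after sending n agents to this task.
        s = self._state(features)
        self.sum_r[s, n] += reward
        self.count[s, n] += 1
```

For example, on synthetic tasks where one hypothetical feature determines whether one or three agents are needed, the learner converges to requesting the right team size for each bucket after a few thousand episodes.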

References

[1]
M. Brenner, A. Kleiner, M. Exner, M. Degen, M. Metzger, T. Nussle, and I. Thon. ResQ Freiburg: Deliberative Limitation of Damage. In D. Nardi, M. Riedmiller, and C. Sammut, editors, RoboCup-2004: Robot Soccer World Cup VIII, Berlin, 2005. Springer Verlag.
[2]
C. B. Excelente-Toledo and N. R. Jennings. The Dynamic Selection of Coordination Mechanisms. Journal of Autonomous Agents and Multi-Agent Systems, 9(1--2):55--85, 2004.
[3]
A. Garland and R. Alterman. Autonomous Agents that Learn to Better Coordinate. Autonomous Agents and Multi-Agent Systems, 8(3):267--301, May 2004.
[4]
H. Kitano. RoboCup Rescue: A grand challenge for multi-agent systems. In Proceedings of ICMAS 2000, Boston, MA, 2000.
[5]
A. K. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester, Rochester, New York, 1996.
[6]
L. D. Pyeatt and A. E. Howe. Decision tree function approximation in reinforcement learning. Technical Report TR CS-98-112, Colorado State University, Fort Collins, Colorado, 1998.
[7]
J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA, 1993.
[8]
J. R. Quinlan. Combining instance-based and model-based learning. In Proceedings of the Tenth International Conference on Machine Learning, pages 236--243, Amherst, Massachusetts, 1993. Morgan Kaufmann.
[9]
O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence, 101(1--2):165--200, 1998.
[10]
W. T. B. Uther and M. M. Veloso. Tree based discretization for continuous state space reinforcement learning. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 769--774, Menlo Park, CA, 1998. AAAI-Press/MIT-Press.
[11]
P. Xuan, V. Lesser, and S. Zilberstein. Modeling Cooperative Multiagent Problem Solving as Decentralized Decision Processes. Autonomous Agents and Multi-Agent Systems, 2004.

Cited By

  • (2016) Trustworthy Stigmergic Service Composition and Adaptation in Decentralized Environments. IEEE Transactions on Services Computing 9(2):317-329, Apr 2016. DOI: 10.1109/TSC.2014.2298873
  • (2014) Effective approaches to group role assignment with a flexible formation. 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 1426-1431, Oct 2014. DOI: 10.1109/SMC.2014.6974115
  • (2010) RoboCup Rescue as multiagent task allocation among teams. Autonomous Agents and Multi-Agent Systems 20(3):421-443, May 2010. DOI: 10.1007/s10458-009-9087-8

Published In

AAMAS '06: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems
May 2006
1631 pages
ISBN: 1595933034
DOI: 10.1145/1160633

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. coordination
  2. multiagent
  3. reinforcement learning


Acceptance Rates

Overall Acceptance Rate 1,155 of 5,036 submissions, 23%

