Summary
In recent years, multi-agent Partially Observable Markov Decision Processes (POMDPs) have emerged as a popular decision-theoretic framework for modeling multi-agent teams and generating policies to control them. Teams controlled by multi-agent POMDPs can use communication to share observations and coordinate, so policies are needed that enable these teams to reason about communication. Previous work on generating communication policies for multi-agent POMDPs has focused on the question of when to communicate. In this paper, we address the question of what to communicate. We describe two paradigms for representing limitations on communication and present an algorithm that enables multi-agent teams to make execution-time decisions about how to use available communication resources effectively.
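The "what to communicate" question can be illustrated with a minimal sketch. This is an illustrative toy, not the paper's algorithm: an agent with a budget of one message picks the local observation whose sharing most improves the team's expected value under the resulting joint belief. The states, observation model, and Q-values below are all hypothetical.

```python
# Illustrative sketch (hypothetical toy problem, not the paper's algorithm):
# choosing WHAT to communicate under a one-message budget by picking the
# observation whose sharing maximizes the team's expected value.

states = ["s0", "s1"]
joint_actions = ["a", "b"]
# Q[state][action]: value of the team executing that joint action in that state.
Q = {"s0": {"a": 1.0, "b": 0.0}, "s1": {"a": 0.0, "b": 1.0}}

def expected_value(belief):
    """Value of the best joint action under a shared belief over states."""
    return max(sum(belief[s] * Q[s][act] for s in states)
               for act in joint_actions)

def belief_update(prior, obs, obs_model):
    """Bayesian belief update: P(s | obs) proportional to P(obs | s) P(s)."""
    posterior = {s: obs_model[s][obs] * prior[s] for s in states}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

def choose_observation_to_share(prior, local_obs, obs_model):
    """Pick the single local observation whose communication maximizes the
    team's expected value under the updated joint belief."""
    return max(local_obs,
               key=lambda o: expected_value(belief_update(prior, o, obs_model)))

# Hypothetical sensor model: P(observation | state).
prior = {"s0": 0.5, "s1": 0.5}
obs_model = {"s0": {"o0": 0.9, "o1": 0.1},
             "s1": {"o0": 0.2, "o1": 0.8}}
best = choose_observation_to_share(prior, ["o0", "o1"], obs_model)
# Here "o1" wins: it shifts the joint belief toward s1 more sharply than
# "o0" shifts it toward s0, yielding a higher expected team value.
```

Under resource limits, the same comparison extends naturally: the agent ranks candidate messages by expected value gain and sends only those that fit within the available communication budget.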
© 2006 Springer-Verlag Tokyo
Cite this paper
Roth, M., Simmons, R., Veloso, M. (2006). What to Communicate? Execution-Time Decision in Multi-agent POMDPs. In: Gini, M., Voyles, R. (eds) Distributed Autonomous Robotic Systems 7. Springer, Tokyo. https://doi.org/10.1007/4-431-35881-1_18
Print ISBN: 978-4-431-35878-7
Online ISBN: 978-4-431-35881-7