Abstract
Multi-agent collaboration under partial observability is a difficult task. Multi-agent reinforcement learning (MARL) algorithms that do not leverage a model of the environment struggle with tasks that require sequences of collaborative actions, while Dec-POMDP algorithms that use such models to compute near-optimal policies scale poorly. In this paper, we propose the Team-Imitate-Synchronize (TIS) approach, a heuristic, model-based method for solving such problems. Our approach begins by solving the joint team problem, assuming that observations are shared. Then, for each agent, we solve a single-agent problem designed to imitate its behavior within the team plan. Finally, we adjust the single-agent policies for better synchronization. Our experiments demonstrate that our method provides solutions comparable to those of Dec-POMDP solvers on small problems while scaling to much larger problems, and that it finds collaborative plans that MARL algorithms are unable to identify.
Supported by ISF Grants 1651/19 and 1210/18, Ministry of Science and Technology’s Grant #3-15626 and the Lynn and William Frankel Center for Computer Science.
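The abstract outlines three phases: Team, Imitate, Synchronize. The following is a minimal structural sketch of that pipeline, not the authors' implementation; the four solver callables (`solve_team`, `build_imitation`, `solve_single`, `synchronize`) are hypothetical placeholders for components the paper defines in detail.

```python
# Hypothetical skeleton of the three TIS phases described in the abstract.
# The solver callables are illustrative placeholders, not the paper's API.
def team_imitate_synchronize(dec_pomdp, agents,
                             solve_team, build_imitation,
                             solve_single, synchronize):
    # Phase 1 (Team): solve the joint problem as a single POMDP,
    # assuming all observations are shared among the agents.
    team_policy = solve_team(dec_pomdp)

    # Phase 2 (Imitate): for each agent, build and solve a single-agent
    # problem whose solution mimics that agent's role in the team plan.
    policies = {agent: solve_single(build_imitation(dec_pomdp, team_policy, agent))
                for agent in agents}

    # Phase 3 (Synchronize): adjust the individual policies so that
    # their joint execution remains coordinated.
    return synchronize(dec_pomdp, policies)
```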
Notes
1. Our implementation uses the simulation function of the SARSOP solver. We precompute the sample size using concentration bounds that ensure the sampled distribution over initial states matches the true belief state (see the sketch after these notes).
2. DICEPS, while not new, was independently recommended to us by two senior researchers as still being a state-of-the-art approximate solver.
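Note 1 does not name a specific concentration bound. As one standard choice, a Hoeffding bound combined with a union bound over the state space yields a sample size guaranteeing every empirical state frequency is within eps of its true belief probability; the sketch below is illustrative (the function name `hoeffding_sample_size` is ours, not from the paper).

```python
import math

def hoeffding_sample_size(num_states: int, eps: float, delta: float) -> int:
    """Smallest n such that, with probability at least 1 - delta, every
    empirical state frequency over n i.i.d. samples is within eps of its
    true belief probability (Hoeffding inequality + union bound)."""
    return math.ceil(math.log(2 * num_states / delta) / (2 * eps ** 2))

# Example: 50 states, per-state accuracy eps = 0.05, confidence 95%.
print(hoeffding_sample_size(50, 0.05, 0.05))  # -> 1521 samples
```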
References
Amato, C., Bernstein, D.S., Zilberstein, S.: Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs. JAAMAS 21(3), 293–320 (2010)
Bazinin, S., Shani, G.: Iterative planning for deterministic QDec-POMDPs. In: GCAI-2018, 4th Global Conference on Artificial Intelligence, vol. 55, pp. 15–28 (2018)
Bernstein, D.S., Givan, R., Immerman, N., Zilberstein, S.: The complexity of decentralized control of Markov decision processes. Math. Oper. Res. 27(4), 819–840 (2002)
Boutilier, C., Dean, T., Hanks, S.: Decision-theoretic planning: structural assumptions and computational leverage. JAIR 11(1), 1–94 (1999)
Brafman, R.I., Shani, G., Zilberstein, S.: Qualitative planning under partial observability in multi-agent domains. In: AAAI 2013 (2013)
Carlin, A., Zilberstein, S.: Value-based observation compression for Dec-POMDPs. In: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, vol. 1, pp. 501–508 (2008)
Dietterich, T.G.: Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR 13, 227–303 (2000)
Hausknecht, M.J., Stone, P.: Deep recurrent Q-learning for partially observable MDPs. In: 2015 AAAI Fall Symposium, pp. 29–37. AAAI Press (2015)
Kurniawati, H., Hsu, D., Lee, W.S.: SARSOP: efficient point-based POMDP planning by approximating optimally reachable belief spaces. In: Proceedings Robotics: Science and Systems (2008)
Nair, R., Tambe, M., Yokoo, M., Pynadath, D., Marsella, S.: Taming decentralized POMDPs: towards efficient policy computation for multiagent settings. In: IJCAI, pp. 705–711 (2003)
Oliehoek, F., Kooij, J., Vlassis, N.: The cross-entropy method for policy search in decentralized POMDPs. Informatica 32, 341–357 (2008)
Oliehoek, F.A., Spaan, M.T.J., Amato, C., Whiteson, S.: Incremental clustering and expansion for faster optimal planning in decentralized POMDPs. JAIR 46, 449–509 (2013)
Oliehoek, F.A., Spaan, M.T.J., Terwijn, B., Robbel, P., Messias, J.A.V.: The MADP toolbox: an open source library for planning and learning in (multi-)agent systems. J. Mach. Learn. Res. 18(1), 3112–3116 (2017)
Oliehoek, F.A., Spaan, M.T.J., Whiteson, S., Vlassis, N.: Exploiting locality of interaction in factored Dec-POMDPs. In: AAMAS, pp. 517–524 (2008)
Rashid, T., Farquhar, G., Peng, B., Whiteson, S.: Weighted QMIX: expanding monotonic value function factorisation for deep multi-agent reinforcement learning. Adv. Neural Inf. Process. Syst. 33, 10199–10210 (2020)
Smith, T., Simmons, R.: Point-based POMDP algorithms: improved analysis and implementation. In: UAI, pp. 542–549 (2005)
Son, K., Kim, D., Kang, W.J., Hostallero, D., Yi, Y.: QTRAN: learning to factorize with transformation for cooperative multi-agent reinforcement learning. In: ICML, pp. 5887–5896 (2019)
Wu, F., Zilberstein, S., Jennings, N.R.: Monte-Carlo expectation maximization for decentralized POMDPs. In: IJCAI, pp. 397–403 (2013)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Abdoo, E., Brafman, R.I., Shani, G., Soffair, N. (2023). Team-Imitate-Synchronize for Solving Dec-POMDPs. In: Amini, M.-R., Canu, S., Fischer, A., Guns, T., Kralj Novak, P., Tsoumakas, G. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science, vol. 13716. Springer, Cham. https://doi.org/10.1007/978-3-031-26412-2_14
DOI: https://doi.org/10.1007/978-3-031-26412-2_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26411-5
Online ISBN: 978-3-031-26412-2