Abstract
Multiagent deep reinforcement learning (MA-DRL) has received increasing attention. Most existing MA-DRL algorithms, however, remain inefficient when faced with the non-stationarity caused by agents continually changing their behavior in stochastic environments. This paper extends the weighted double estimator to multiagent domains and proposes an MA-DRL framework named Weighted Double Deep Q-Network (WDDQN). By leveraging the weighted double estimator and a deep neural network, WDDQN not only reduces estimation bias effectively but also handles scenarios with raw visual inputs. To achieve efficient cooperation in multiagent domains, we introduce a lenient reward network and a scheduled replay strategy. Empirical results show that WDDQN outperforms an existing DRL algorithm (double DQN) and an MA-DRL algorithm (lenient Q-learning) in terms of average reward and convergence speed, and is more likely to converge to the Pareto-optimal Nash equilibrium in stochastic cooperative environments.
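As a concrete illustration of the weighted double estimator that WDDQN builds on, the following is a minimal tabular sketch in the spirit of weighted double Q-learning (Zhang et al., IJCAI 2017); the function name, the two-table layout, and the constant c are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_double_q_update(q_u, q_v, s, a, r, s_next,
                             alpha=0.1, gamma=0.99, c=1.0):
    """One tabular weighted-double-estimator update (illustrative sketch).

    q_u, q_v: two independent Q-tables of shape [n_states, n_actions].
    """
    # Greedy and worst actions in the next state, judged by the table
    # being updated (q_u).
    a_star = int(np.argmax(q_u[s_next]))
    a_low = int(np.argmin(q_u[s_next]))
    # The weight beta is computed from the *other* table (q_v) so that
    # the action-selection noise of q_u does not bias the weight.
    gap = abs(q_v[s_next, a_star] - q_v[s_next, a_low])
    beta = gap / (c + gap)
    # Weighted target: beta -> 1 recovers the single (max) estimator,
    # beta -> 0 recovers the double estimator.
    target = r + gamma * (beta * q_u[s_next, a_star]
                          + (1.0 - beta) * q_v[s_next, a_star])
    q_u[s, a] += alpha * (target - q_u[s, a])
    return q_u
```

In the full algorithm the roles of the two estimators are typically swapped uniformly at random between updates; WDDQN replaces the tables with deep networks so that raw visual inputs can be handled.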
Acknowledgments
We thank our industrial research partner Netease, Inc., especially the Fuxi AI Laboratory of Leihuo Business Groups, for their discussions and support with the experiments.
Cite this article
Zheng, Y., Hao, JY., Zhang, ZZ. et al. Efficient Multiagent Policy Optimization Based on Weighted Estimators in Stochastic Cooperative Environments. J. Comput. Sci. Technol. 35, 268–280 (2020). https://doi.org/10.1007/s11390-020-9967-6