Abstract
General artificial intelligence requires an agent to understand or learn any intellectual task the way a human can. Diverse and complex real-time strategy (RTS) games are a promising stepping stone toward this goal in artificial intelligence research. Over the last decade, the strongest agents have either simplified key elements of the game, relied on expert rules encoding human knowledge, or specialized in a single environment. In this paper, we propose a unified learning model that masters various environments in an RTS game without human knowledge. We use a multi-agent reinforcement learning algorithm that trains a deep neural network on data from agents in a diverse league played across multiple maps. We evaluate our model in microRTS, a simple real-time strategy game, and show that the agent is competitive with strong benchmarks in different environments.
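The league training mentioned in the abstract can be illustrated with a common opponent-sampling scheme from the multi-agent RL literature (prioritized fictitious self-play, as popularized by AlphaStar): the learner plays more often against league members it struggles to beat. This is a minimal sketch under assumed details; the function names, the weighting exponent, and the toy league are illustrative, not the paper's actual algorithm.

```python
import random

def pfsp_weights(win_rates, p=2.0):
    """Prioritized fictitious self-play weighting: league members the
    learner beats less often get proportionally more probability mass.
    `p` controls how sharply hard opponents are favored (assumed value)."""
    raw = [(1.0 - wr) ** p for wr in win_rates]
    total = sum(raw)
    return [r / total for r in raw]

def sample_opponent(league, win_rates, rng):
    """Pick the next opponent (and, in a full system, a random map)
    using the PFSP weights above."""
    return rng.choices(league, weights=pfsp_weights(win_rates), k=1)[0]

# Toy league: the learner currently wins 90% vs A, 50% vs B, 10% vs C,
# so C should dominate the sampling distribution.
league = ["agent_A", "agent_B", "agent_C"]
win_rates = [0.9, 0.5, 0.1]
rng = random.Random(0)
print(pfsp_weights(win_rates))
print(sample_opponent(league, win_rates, rng))
```

In a full pipeline, each sampled match would be played on one of the multiple maps, and the resulting trajectories would feed a policy-gradient update of the shared network; the sketch only shows the matchmaking step.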
B. Ling and X. Liu contributed equally to this work.
Acknowledgement
This work is supported by the National Key R&D Program of China under Grant 2018AAA0101200, the Natural Science Foundation of China under Grant Nos. 62102082, 61902062, 61672154, and 61972086, the Natural Science Foundation of Jiangsu Province under Grant No. 7709009016, and the Postgraduate Research & Practice Innovation Program of Jiangsu Province of China (KYCX19_0089).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Ling, B. et al. (2022). Master Multiple Real-Time Strategy Games with a Unified Learning Model Using Multi-agent Reinforcement Learning. In: Zhang, H., et al. Neural Computing for Advanced Applications. NCAA 2022. Communications in Computer and Information Science, vol 1638. Springer, Singapore. https://doi.org/10.1007/978-981-19-6135-9_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-6134-2
Online ISBN: 978-981-19-6135-9
eBook Packages: Computer Science (R0)