Master Multiple Real-Time Strategy Games with a Unified Learning Model Using Multi-agent Reinforcement Learning

  • Conference paper
Neural Computing for Advanced Applications (NCAA 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1638)

Abstract

General artificial intelligence requires an intelligent agent to understand or learn any intellectual task that a human being can. Diverse and complex real-time strategy (RTS) games are a promising stepping stone toward this goal in artificial intelligence research. Over the last decade, the strongest agents have either simplified key elements of the game, relied on expert rules built from human knowledge, or focused on a single environment. In this paper, we propose a unified learning model that can master various environments in an RTS game without human knowledge. We use a multi-agent reinforcement learning algorithm that trains a deep neural network on data from a diverse league of agents playing on multiple maps. We evaluate our model in microRTS, a simple real-time strategy game, and the results show that the agent is competitive against strong benchmarks in different environments.
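To make the training setup described in the abstract more concrete, the sketch below shows one plausible shape of a league-based, multi-map training loop. It is an illustrative assumption only: the names (Policy, League, play_episode, update_policy, the map pool) are hypothetical and do not come from the paper or the microRTS API, and the placeholder functions stand in for the full game simulation and the multi-agent actor-critic update that the authors actually use.

import random
from dataclasses import dataclass, field

# Hypothetical map pool; the abstract describes one shared model trained across multiple maps.
MAPS = ["basesWorkers8x8", "basesWorkers16x16", "NoWhereToRun9x8"]

@dataclass
class Policy:
    params: float = 0.0  # stand-in for the weights of the shared deep neural network

@dataclass
class League:
    snapshots: list = field(default_factory=lambda: [Policy()])
    win_rate: dict = field(default_factory=dict)  # running win rate of the agent vs. each snapshot

    def sample_opponent(self) -> Policy:
        # Prefer opponents the current agent still struggles against.
        weights = [1.0 - self.win_rate.get(id(p), 0.5) + 1e-3 for p in self.snapshots]
        return random.choices(self.snapshots, weights=weights, k=1)[0]

    def record(self, opponent: Policy, won: bool) -> None:
        old = self.win_rate.get(id(opponent), 0.5)
        self.win_rate[id(opponent)] = 0.9 * old + 0.1 * float(won)

def play_episode(agent: Policy, opponent: Policy, game_map: str):
    # Placeholder for a full microRTS episode; returns a trajectory and whether the agent won.
    trajectory = []
    return trajectory, random.random() < 0.5

def update_policy(agent: Policy, trajectory: list) -> None:
    # Placeholder for a multi-agent actor-critic (e.g. PPO-style) gradient step on the shared model.
    agent.params += 0.0

def train(num_iters: int = 1000, snapshot_every: int = 100) -> Policy:
    agent, league = Policy(), League()
    for step in range(num_iters):
        game_map = random.choice(MAPS)          # the same model plays on every map
        opponent = league.sample_opponent()
        trajectory, won = play_episode(agent, opponent, game_map)
        update_policy(agent, trajectory)
        league.record(opponent, won)
        if (step + 1) % snapshot_every == 0:    # grow the league with a frozen copy of the agent
            league.snapshots.append(Policy(agent.params))
    return agent

if __name__ == "__main__":
    train(num_iters=10)

The point of the sketch is structural: a single shared model is updated from episodes gathered across all maps, the league grows with frozen snapshots, and opponents are sampled in proportion to how often they still beat the current agent.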

B. Ling and X. Liu contributed equally to this work.

Acknowledgement

This work was supported by the National Key R&D Program of China under Grant 2018AAA0101200, the Natural Science Foundation of China under Grant Nos. 62102082, 61902062, 61672154, and 61972086, the Natural Science Foundation of Jiangsu Province under Grant No. 7709009016, and the Postgraduate Research & Practice Innovation Program of Jiangsu Province of China (KYCX19_0089).

Author information

Corresponding authors

Correspondence to Jin Jiang or Xueyong Xu.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Ling, B. et al. (2022). Master Multiple Real-Time Strategy Games with a Unified Learning Model Using Multi-agent Reinforcement Learning. In: Zhang, H., et al. Neural Computing for Advanced Applications. NCAA 2022. Communications in Computer and Information Science, vol 1638. Springer, Singapore. https://doi.org/10.1007/978-981-19-6135-9_3

  • DOI: https://doi.org/10.1007/978-981-19-6135-9_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-6134-2

  • Online ISBN: 978-981-19-6135-9

  • eBook Packages: Computer Science, Computer Science (R0)
