
Search, Abstractions and Learning in Real-Time Strategy Games

A Dissertation Summary

  • Dissertation and Habilitation Abstracts

KI - Künstliche Intelligenz

Abstract

Real-Time Strategy (RTS) games’ large state and action spaces pose a significant hurdle to traditional AI techniques. We propose decomposing the game into sub-problems and integrating the partial solutions into action scripts that can be used as abstract actions by a search or machine-learning algorithm. The resulting high-level algorithm produces sound strategic choices and can then be combined with a low-level search algorithm to refine tactical choices. We show strong results in SparCraft, StarCraft: Brood War, and μRTS against state-of-the-art agents. We expect advances in RTS AI to be useful in commercial video games for playtesting and game balancing, while also having possible real-world applications.
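To make the idea of action scripts as abstract actions concrete, here is a minimal, hypothetical sketch in the spirit of the Puppet Search approach from the dissertation. All state variables, script behaviors, and numbers are invented for illustration; the real systems operate on full RTS game states, not three integers:

```python
# Hypothetical sketch: a high-level search that chooses among a handful of
# action scripts instead of branching over thousands of low-level unit
# commands. State, scripts, and evaluation are simplified stand-ins.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    my_army: int
    my_economy: int
    enemy_army: int

# Each "script" compresses a whole playing policy into one state transition.
def rush(s: State) -> State:      # attack immediately, trading enemy army down
    return State(s.my_army + 2, s.my_economy, max(0, s.enemy_army - 3))

def expand(s: State) -> State:    # grow the economy while the enemy builds up
    return State(s.my_army, s.my_economy + 3, s.enemy_army + 1)

def defend(s: State) -> State:    # hold position with modest growth
    return State(s.my_army + 1, s.my_economy + 1, s.enemy_army + 1)

SCRIPTS = {"rush": rush, "expand": expand, "defend": defend}

def evaluate(s: State) -> int:
    # Toy evaluation: material plus economic lead over the opponent.
    return s.my_army + s.my_economy - s.enemy_army

def puppet_move(s: State, depth: int) -> str:
    """Return the script name whose depth-step rollout maximizes the
    evaluation. Searching over scripts shrinks the branching factor
    from the raw action space down to len(SCRIPTS)."""
    def rollout(state: State, d: int) -> int:
        if d == 0:
            return evaluate(state)
        return max(rollout(f(state), d - 1) for f in SCRIPTS.values())
    return max(SCRIPTS, key=lambda name: rollout(SCRIPTS[name](s), depth - 1))
```

In the full approach the chosen script's strategic commitments would then be handed to a lower-level tactical search (e.g. for combat) to refine into concrete unit actions.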


References

  1. Barriga NA (2017) Search, abstractions and learning in real-time strategy games. Ph.D. thesis, University of Alberta

  2. Barriga NA, Stanescu M, Buro M (2014) Building placement optimization in real-time strategy games. In: Workshop on artificial intelligence in adversarial real-time games, AIIDE

  3. Barriga NA, Stanescu M, Buro M (2015) Puppet Search: enhancing scripted behaviour by look-ahead search with applications to real-time strategy games. In: Eleventh annual AAAI conference on artificial intelligence and interactive digital entertainment (AIIDE), pp 9–15

  4. Barriga NA, Stanescu M, Buro M (2017) Combining scripted behavior with game tree search for stronger, more robust game AI. In: Game AI Pro 3: collected wisdom of game AI professionals, chap. 14. CRC Press

  5. Barriga NA, Stanescu M, Buro M (2017) Combining strategic learning and tactical search in real-time strategy games. In: Thirteenth annual AAAI conference on artificial intelligence and interactive digital entertainment (AIIDE)

  6. Barriga NA, Stanescu M, Buro M (2017) Game tree search based on non-deterministic action scripts in real-time strategy games. IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG)

  7. Bowling M, Burch N, Johanson M, Tammelin O (2015) Heads-up limit hold’em poker is solved. Science 347(6218):145–149. https://doi.org/10.1126/science.1259433


  8. Churchill D (2013) SparCraft: open source StarCraft combat simulation. http://code.google.com/p/sparcraft/

  9. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, Graves A, Riedmiller M, Fidjeland AK, Ostrovski G et al (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533


  10. Ontañón S (2013) The combinatorial multi-armed bandit problem and its application to real-time strategy games. In: AIIDE

  11. Ontañón S (2017) Combinatorial multi-armed bandits for real-time strategy games. J Artif Intell Res 58:665–702


  12. Ontañón S, Barriga NA, Silva CR, Moraes RO, Lelis LH (2018) The first microRTS artificial intelligence competition. AI Mag 39(1):75–83


  13. Schaeffer J, Lake R, Lu P, Bryant M (1996) CHINOOK: the world man-machine Checkers champion. AI Mag 17(1):21–29


  14. Shannon C (1950) A Chess-playing machine. Sci Am 182:48–51


  15. Shannon CE (1950) Programming a computer for playing chess. In: Philosophical Magazine (First presented at the National IRE Convention, March 9, 1949, New York, USA) ser.7, vol 41, no. 314

  16. Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529(7587):484–489


  17. Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A et al (2017) Mastering the game of Go without human knowledge. Nature 550(7676):354


  18. Tesauro G (1994) TD-Gammon, a self-teaching Backgammon program, reaches master-level play. Neural Comput 6(2):215–219


  19. Vinyals O, Babuschkin I, Chung J, Mathieu M, Jaderberg M, Czarnecki WM, Dudzik A, Huang A, Georgiev P, Powell R, Ewalds T, Horgan D, Kroiss M, Danihelka I, Agapiou J, Oh J, Dalibard V, Choi D, Sifre L, Sulsky Y, Vezhnevets S, Molloy J, Cai T, Budden D, Paine T, Gulcehre C, Wang Z, Pfaff T, Pohlen T, Wu Y, Yogatama D, Cohen J, McKinney K, Smith O, Schaul T, Lillicrap T, Apps C, Kavukcuoglu K, Hassabis D, Silver D (2019) AlphaStar: mastering the real-time strategy game StarCraft II. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/


Author information

Correspondence to Nicolas A. Barriga.


Cite this article

Barriga, N.A. Search, Abstractions and Learning in Real-Time Strategy Games. Künstl Intell 34, 101–103 (2020). https://doi.org/10.1007/s13218-019-00614-0
