Abstract
Common built-in bots act on pre-written scripts to make decisions, and they sometimes acquire and exploit unfair information rather than acting flexibly like human players, who make decisions based only on the game screen. This paper studies applications of deep learning and reinforcement learning to computer game agents. The goal is to create agents that make decisions the way humans do, without relying on unfair information. A game agent is implemented based on the A3C algorithm. It takes the raw real-time game screen as input to the network and outputs the corresponding discrete actions. The agent interacts with VizDoom, reading the real-time game screen to make decisions for controlling the character. This paper improves the A3C algorithm by adding an anticipator network to the original model structure, so that the agent acts more like a human player: it generates an anticipation before making a decision and then combines the real-time game screen with the anticipation image as input to the network defined by the A3C algorithm. The results show that A3C with anticipation performs better than the original A3C algorithm.
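As a rough illustration of the input construction the abstract describes (a minimal sketch with hypothetical array shapes and function names, not the authors' code), the anticipator's predicted frame can be stacked with the real-time screen along the channel axis before being fed to the A3C network:

```python
import numpy as np

def make_agent_input(screen, anticipation):
    """Stack the real-time game screen with the anticipator's predicted
    frame along the channel axis (hypothetical layout: H x W x C float
    arrays in [0, 1]); the combined tensor is the network input."""
    assert screen.shape == anticipation.shape
    return np.concatenate([screen, anticipation], axis=-1)

# Toy 84x84 grayscale frames (a common preprocessing size in deep-RL
# work; the paper's actual resolution may differ).
screen = np.random.rand(84, 84, 1).astype(np.float32)
anticipation = np.random.rand(84, 84, 1).astype(np.float32)

x = make_agent_input(screen, anticipation)
print(x.shape)  # (84, 84, 2)
```

The design choice here mirrors frame-stacking in standard deep-RL pipelines: concatenating along channels lets the convolutional layers see the current observation and the anticipated future side by side.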
Y. Sun and A. Khan contributed equally to this work.
Acknowledgment
The authors would like to thank the School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang 150001, PR China, and NVIDIA for the donation of a powerful GPU machine. This work is partially funded by the Higher Education Department KPK, Pakistan, the MOE–Microsoft Key Laboratory of Natural Language Processing and Speech, Harbin Institute of Technology, the Major State Basic Research Development Program of China (973 Program 2015CB351804), and the National Natural Science Foundation of China under Grants No. 61572155, 61672188, and 61272386.
Ethics declarations
The authors declare that they have no conflicts of interest.
© 2019 Springer Nature Switzerland AG
Cite this paper
Sun, Y., Khan, A., Yang, K., Feng, J., Liu, S. (2019). Playing First-Person-Shooter Games with A3C-Anticipator Network Based Agents Using Reinforcement Learning. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science(), vol 11635. Springer, Cham. https://doi.org/10.1007/978-3-030-24268-8_43
DOI: https://doi.org/10.1007/978-3-030-24268-8_43
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-24267-1
Online ISBN: 978-3-030-24268-8
eBook Packages: Computer Science (R0)