Playing First-Person-Shooter Games with A3C-Anticipator Network Based Agents Using Reinforcement Learning

  • Conference paper
  • In: Artificial Intelligence and Security (ICAIS 2019)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11635)

Abstract

Common built-in bots act on pre-written scripts to make decisions and take actions, and they sometimes acquire and exploit unfair information instead of acting flexibly like human players, who make decisions based only on the game screen. This paper focuses on applying deep learning and reinforcement learning to computer game agents. The goal is to create agents that make decisions the way humans do, without relying on unfair information. A game agent is implemented following the A3C algorithm: it takes the raw real-time game screen as the network input and outputs the corresponding discrete actions, interacting with VizDoom and reading the real-time game screen to make decisions for controlling the character. This paper improves on the A3C algorithm by adding an anticipator network to the original model structure, with the goal of making the agent act more like a human player: it generates an anticipation before making a decision, then feeds the real-time game screen together with the anticipation image as input to the network defined by the A3C algorithm. The results show that A3C with anticipation performs better than the original A3C algorithm.
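To make the described architecture concrete, below is a minimal sketch of the anticipator-augmented A3C network, written in PyTorch. This is not the authors' released code: the module names, layer sizes, the 84x84 input resolution, and the action count are all illustrative assumptions. The anticipator predicts an anticipation frame from the current screen, and the actor-critic network consumes the real and anticipated frames stacked along the channel dimension, producing a discrete-action policy and a state-value estimate.

```python
import torch
import torch.nn as nn


class Anticipator(nn.Module):
    """Predicts an 'anticipation' image from the current game screen."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, screen):           # screen: (B, 1, H, W), grayscale
        return self.net(screen)          # anticipated frame, same shape


class A3CNet(nn.Module):
    """Actor-critic network over the stacked real + anticipated frames."""

    def __init__(self, n_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(256)              # infers the flattened size
        self.policy = nn.Linear(256, n_actions)   # logits over discrete actions
        self.value = nn.Linear(256, 1)            # state-value estimate

    def forward(self, screen, anticipation):
        x = torch.cat([screen, anticipation], dim=1)  # stack along channels
        h = torch.relu(self.fc(self.conv(x)))
        return self.policy(h), self.value(h)


# Toy forward pass: a random 84x84 grayscale tensor stands in for a
# preprocessed VizDoom screen buffer.
anticipator, agent = Anticipator(), A3CNet(n_actions=8)
screen = torch.rand(1, 1, 84, 84)
with torch.no_grad():
    logits, value = agent(screen, anticipator(screen))
action = torch.distributions.Categorical(logits=logits).sample()
```

In training, a standard A3C loss (policy gradient with an advantage baseline plus a value-regression term) would be applied across asynchronous workers, and the anticipator would be trained to predict future frames; those details are beyond this sketch.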

Y. Sun and A. Khan contributed equally to this work.



Acknowledgment

The authors would like to thank the School of Computer Science and Technology, Harbin Institute of Technology, Harbin, Heilongjiang 150001, PR China, and NVIDIA for the donation of a powerful GPU machine. This work is partially funded by the Higher Education Department KPK, Pakistan, the MOE–Microsoft Key Laboratory of Natural Language Processing and Speech, Harbin Institute of Technology, the Major State Basic Research Development Program of China (973 Program 2015CB351804), and the National Natural Science Foundation of China under Grants No. 61572155, 61672188, and 61272386.

Author information


Correspondence to Adil Khan or Kai Yang.


Ethics declarations

The authors declare that they have no conflicts of interest.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Sun, Y., Khan, A., Yang, K., Feng, J., Liu, S. (2019). Playing First-Person-Shooter Games with A3C-Anticipator Network Based Agents Using Reinforcement Learning. In: Sun, X., Pan, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2019. Lecture Notes in Computer Science, vol. 11635. Springer, Cham. https://doi.org/10.1007/978-3-030-24268-8_43


  • DOI: https://doi.org/10.1007/978-3-030-24268-8_43

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-24267-1

  • Online ISBN: 978-3-030-24268-8

  • eBook Packages: Computer Science, Computer Science (R0)
