
Evolving Game State Features from Raw Pixels

  • Conference paper: Genetic Programming (EuroGP 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10196)

Abstract

General video game playing is the art of designing artificial intelligence programs that are capable of playing different video games with little domain knowledge. One of the great challenges is how to capture game state features from different video games in a general way. The main contribution of this paper is to apply genetic programming to evolve game state features from raw pixels. A voting method is implemented to determine the actions of the game agent. Three different video games are used to evaluate the effectiveness of the algorithm: Missile Command, Frogger, and Space Invaders. The results show that genetic programming is able to find useful game state features for all three games.
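
The abstract only outlines the approach, so the sketch below illustrates one way the described voting scheme could combine evolved per-feature decisions into a single action. It is a minimal, hypothetical illustration: the action set, the hand-written stand-in features, and all function names are assumptions for exposition, not the GP expressions actually evolved in the paper.

```python
# Minimal sketch of the voting scheme described in the abstract.
# The two "evolved" features below are hand-written stand-ins; in the paper,
# genetic programming would evolve such expressions from raw pixels.
import numpy as np

ACTIONS = ["LEFT", "RIGHT", "FIRE", "NOOP"]  # hypothetical action set

def evolved_feature_1(frame):
    """Stand-in feature: votes based on overall frame brightness."""
    return 0 if frame.mean() > 128 else 1

def evolved_feature_2(frame):
    """Stand-in feature: votes based on the column with the most activity."""
    busiest_column = int(frame.sum(axis=0).argmax())
    return 2 if busiest_column > frame.shape[1] // 2 else 3

EVOLVED_FEATURES = [evolved_feature_1, evolved_feature_2]

def select_action(frame):
    """Each evolved feature casts one vote; the most-voted action is played."""
    votes = np.zeros(len(ACTIONS), dtype=int)
    for feature in EVOLVED_FEATURES:
        votes[feature(frame)] += 1
    return ACTIONS[int(votes.argmax())]

if __name__ == "__main__":
    # Stand-in for one raw greyscale frame (Atari-like 210x160 resolution).
    frame = np.random.randint(0, 256, size=(210, 160))
    print(select_action(frame))
```

In this reading, each GP individual plays the role of one of the stand-in features above and the agent repeats the vote at every frame; whether the paper uses exactly this aggregation is an assumption here.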


Author information

Corresponding author

Correspondence to Baozhu Jia.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Jia, B., Ebner, M. (2017). Evolving Game State Features from Raw Pixels. In: McDermott, J., Castelli, M., Sekanina, L., Haasdijk, E., García-Sánchez, P. (eds) Genetic Programming. EuroGP 2017. Lecture Notes in Computer Science, vol. 10196. Springer, Cham. https://doi.org/10.1007/978-3-319-55696-3_4

  • DOI: https://doi.org/10.1007/978-3-319-55696-3_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-55695-6

  • Online ISBN: 978-3-319-55696-3

  • eBook Packages: Computer Science, Computer Science (R0)
