Learning to Play Tetris from Examples

  • Conference paper
Computational Intelligence in Information Systems

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 331)

Abstract

We model a Tetris player using an Artificial Neural Network (ANN). In contrast to most previous works, which learned the weights of predefined heuristic functions, our model learns the actual actions, i.e., given a board state and a tetromino, where the piece should be placed on the board. The problem is formulated as a classification problem in which the model learns to associate a board state with fruitful actions. We compare the ANN player with a random player and a Best First Search (BFS) player. The random player and the BFS player provide baselines for uninformed and informed search, respectively; the ANN player learns from examples extracted from BFS runs, and its performance lies between the two baselines. We observe that generalisation is very hard due to the nature of the game itself: fitting tetromino pieces together demands an exact match, so we cannot expect similar solutions for closely related board patterns. In this paper, we explore the problem from a soft computing perspective, present our experimental design, and provide a critical discussion of the experimental results.
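The classification framing described above can be sketched in code. The following is an illustrative reconstruction, not the authors' implementation: it assumes a standard 10-wide board encoded as a binary occupancy grid, an action space of (rotation, column) pairs, and a small subset of tetromino shapes. The `encode` function shows one plausible way a (board, piece) pair could be turned into an ANN input vector.

```python
# Illustrative sketch only (not the paper's actual code): framing Tetris
# placement as a classification problem. Assumptions: a 10x20 binary
# occupancy grid and an action space of (rotation, column) pairs.
import numpy as np

BOARD_W, BOARD_H = 10, 20

# A small subset of tetromino shapes as occupancy masks; full rotation
# sets are omitted for brevity.
PIECES = {
    "O": [np.array([[1, 1], [1, 1]])],
    "I": [np.array([[1, 1, 1, 1]]), np.array([[1], [1], [1], [1]])],
    "L": [np.array([[1, 0], [1, 0], [1, 1]])],
}

def drop_row(board, shape, col):
    """Row where the piece rests if dropped in `col`, or None if it cannot fit."""
    h, w = shape.shape
    if col + w > BOARD_W:
        return None
    last_ok = None
    for row in range(BOARD_H - h + 1):
        if np.any(board[row:row + h, col:col + w] & shape):
            break  # collision: the piece rests at the previous row
        last_ok = row
    return last_ok

def enumerate_actions(board, piece):
    """All legal (rotation_index, column) actions with their resulting boards."""
    actions = []
    for r, shape in enumerate(PIECES[piece]):
        h, w = shape.shape
        for col in range(BOARD_W - w + 1):
            row = drop_row(board, shape, col)
            if row is not None:
                new = board.copy()
                new[row:row + h, col:col + w] |= shape
                actions.append(((r, col), new))
    return actions

def encode(board, piece):
    """Flatten the board and append a one-hot piece id: one ANN input vector."""
    onehot = np.zeros(len(PIECES))
    onehot[list(PIECES).index(piece)] = 1.0
    return np.concatenate([board.ravel().astype(float), onehot])

board = np.zeros((BOARD_H, BOARD_W), dtype=int)
actions = enumerate_actions(board, "O")
print(len(actions))        # 9 legal columns for the 2-wide O piece
x = encode(board, "O")
print(x.shape)             # (203,): 200 board cells + 3 piece classes
```

A classifier trained on such vectors, with labels being the (rotation, column) actions chosen by a BFS player, would mirror the learning-from-examples setup the abstract describes.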



Corresponding author

Correspondence to Somnuk Phon-Amnuaisuk.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Phon-Amnuaisuk, S. (2015). Learning to Play Tetris from Examples. In: Phon-Amnuaisuk, S., Au, T. (eds) Computational Intelligence in Information Systems. Advances in Intelligent Systems and Computing, vol 331. Springer, Cham. https://doi.org/10.1007/978-3-319-13153-5_25

  • DOI: https://doi.org/10.1007/978-3-319-13153-5_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-13152-8

  • Online ISBN: 978-3-319-13153-5

  • eBook Packages: Engineering (R0)
