Abstract
We model a Tetris player using an artificial neural network (ANN). In contrast to most previous work, which learned the weights of predefined heuristic functions, our model learns the actual actions: given a board state and a tetromino, it decides where the piece should be placed on the board. The problem is formulated as a classification problem in which the model learns to associate a board state with fruitful actions. We compare an ANN player with a random player and a Best First Search (BFS) player. The random player and the BFS player provide baselines for uninformed and informed search respectively, while the ANN player learns from examples extracted from BFS runs, and its performance lies between the two baselines. We observe that generalisation appears to be very hard due to the nature of the game itself: fitting tetromino pieces together demands an exact match, so we cannot expect similar solutions to closely related patterns. In this paper, we explore the problem from a soft computing perspective, present our experimental design, and provide a critical discussion of the experimental results.
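The classification formulation described above can be sketched as follows. This is a minimal illustration under our own assumptions (the paper's exact feature encoding and network architecture are not given here): a board state plus the current tetromino is turned into a feature vector, and each legal placement (rotation, leftmost column) becomes one class label for a classifier to choose among.

```python
# Hedged sketch of the classification formulation: board state + piece
# -> feature vector; each (rotation, column) placement is one class.
# Board dimensions, piece set, and encoding are illustrative assumptions.

BOARD_W, BOARD_H = 10, 20
PIECES = ["I", "O", "T", "S", "Z", "J", "L"]

# Width of each piece in each distinct rotation (standard tetromino shapes).
PIECE_WIDTHS = {
    "I": [4, 1], "O": [2],
    "T": [3, 2, 3, 2], "S": [3, 2], "Z": [3, 2],
    "J": [3, 2, 3, 2], "L": [3, 2, 3, 2],
}

def column_heights(board):
    """Stack height in each column; board[0] is the top row, cells are 0/1."""
    heights = []
    for c in range(BOARD_W):
        h = 0
        for r in range(BOARD_H):
            if board[r][c]:
                h = BOARD_H - r  # first filled cell from the top
                break
        heights.append(h)
    return heights

def encode_state(board, piece):
    """Feature vector: normalised column heights plus a one-hot piece id."""
    feats = [h / BOARD_H for h in column_heights(board)]
    feats += [1.0 if p == piece else 0.0 for p in PIECES]
    return feats

def action_classes(piece):
    """Enumerate legal (rotation, leftmost-column) placements as class labels."""
    labels = []
    for rot, width in enumerate(PIECE_WIDTHS[piece]):
        for col in range(BOARD_W - width + 1):
            labels.append((rot, col))
    return labels
```

An ANN classifier would then be trained on (encode_state, chosen placement) pairs harvested from BFS runs, learning to pick one label from `action_classes(piece)` for each state it sees.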
© 2015 Springer International Publishing Switzerland
Cite this paper
Phon-Amnuaisuk, S. (2015). Learning to Play Tetris from Examples. In: Phon-Amnuaisuk, S., Au, T. (eds) Computational Intelligence in Information Systems. Advances in Intelligent Systems and Computing, vol 331. Springer, Cham. https://doi.org/10.1007/978-3-319-13153-5_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-13152-8
Online ISBN: 978-3-319-13153-5