Abstract
Stockfish is a highly popular open-source chess engine known for its exceptional strength. The recent integration of a neural network called NNUE has significantly improved Stockfish’s playing ability. However, the neural network lacks the capability to explain the reasoning behind its moves. This poses a challenge for human players who seek moves that align with their playing style.
The objective of this paper is to describe modifications to Stockfish that make the engine more suitable for training human players of all skill levels. We have refactored the move search and evaluation algorithms to selectively analyze potential continuations, incorporating dynamic evaluations based on the specific nature of the position and the player’s training abilities. The engine’s strength remains very high, in some situations even exceeding that of the original. We evaluate and discuss the outcomes of these enhancements.
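The idea of selectively analyzing continuations while adapting evaluation to a training level can be illustrated with a toy sketch. This is a hypothetical Python illustration, not the authors' actual C++ patch to Stockfish: the game tree, the `level` parameter, and the coarsening rule in `evaluate` are all invented here to convey the concept of level-dependent selectivity and evaluation.

```python
def evaluate(score: int, level: int) -> int:
    # Toy evaluation: at full strength (level 20) return the exact score;
    # at lower hypothetical training levels, round it to a coarser grid,
    # mimicking a less precise, more human-like judgment of the position.
    granularity = max(1, (20 - level) // 2)
    return (score // granularity) * granularity

def selective_search(node, depth: int, level: int) -> int:
    """Search a toy tree where node = (score, children), analyzing only
    the k most promising continuations; k grows with the training level."""
    score, children = node
    if depth == 0 or not children:
        return evaluate(score, level)
    k = max(1, level // 5)  # selectivity: fewer candidate lines at low levels
    ranked = sorted(children, key=lambda c: c[0], reverse=True)[:k]
    return max(selective_search(c, depth - 1, level) for c in ranked)

# A tiny hand-built tree of (score, children) pairs.
tree = (0, [(3, [(7, []), (1, [])]),
            (5, [(2, []), (9, [])])])

print(selective_search(tree, depth=2, level=20))  # full strength: 9
print(selective_search(tree, depth=2, level=4))   # coarser evaluation: 8
```

At level 20 every child is searched and the exact best leaf (9) is found; at level 4 only one candidate line per node is analyzed and the leaf score is rounded down to 8, showing how both selectivity and evaluation degrade together with the training level.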
Acknowledgments
We thank Augusto Caruso and Kelly Kinyama for fruitful discussions on some topics dealt with in this paper. We also acknowledge the support of CN-HPC under PNRR.
Copyright information
© 2023 IFIP International Federation for Information Processing
Cite this paper
Manzo, A., Ciancarini, P. (2023). Enhancing Stockfish: A Chess Engine Tailored for Training Human Players. In: Ciancarini, P., Di Iorio, A., Hlavacs, H., Poggi, F. (eds) Entertainment Computing – ICEC 2023. ICEC 2023. Lecture Notes in Computer Science, vol 14455. Springer, Singapore. https://doi.org/10.1007/978-981-99-8248-6_23
Print ISBN: 978-981-99-8247-9
Online ISBN: 978-981-99-8248-6