Enhancing Stockfish: A Chess Engine Tailored for Training Human Players

  • Conference paper
  • First Online:
Entertainment Computing – ICEC 2023 (ICEC 2023)

Abstract

Stockfish is a highly popular open-source chess engine known for its exceptional strength. The recent integration of a neural network called NNUE has significantly improved Stockfish’s playing ability. However, the neural network lacks the capability to explain the reasoning behind its moves. This poses a challenge for human players who seek moves that align with their playing style.

The objective of this paper is to describe some modifications to Stockfish that make the engine more suitable for training human players of all skill levels. We have refactored the move search and evaluation algorithms to selectively analyze potential continuations, incorporating dynamic evaluations based on the specific nature of the position and the player's training abilities. The engine's strength remains very high; in some situations it is even better than the original. We evaluate and discuss the outcomes of these enhancements.
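To illustrate the idea sketched in the abstract, the following is a minimal, purely illustrative Python sketch (not the paper's actual code) of a selective search with a dynamic evaluation: at each node only the most promising continuations are analyzed, and the static score is biased by the nature of the position according to a hypothetical player "training profile". All names (`Node`, `TrainingProfile`, `sharpness_bias`, etc.) are assumptions introduced here for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    base_eval: float                 # static engine-style score, from the side to move
    sharpness: float = 0.0           # how tactical ("sharp") the position is
    children: List["Node"] = field(default_factory=list)

@dataclass
class TrainingProfile:
    sharpness_bias: float = 0.0      # >0 steers toward tactical lines, <0 toward quiet ones
    branching: int = 2               # how many continuations to analyze per node

def dynamic_eval(node: Node, profile: TrainingProfile) -> float:
    # Dynamic evaluation: bias the base score by the position's nature
    # according to the player's training profile.
    return node.base_eval + profile.sharpness_bias * node.sharpness

def selective_negamax(node: Node, depth: int, profile: TrainingProfile) -> float:
    if depth == 0 or not node.children:
        return dynamic_eval(node, profile)
    # Selectivity: child scores are from the opponent's perspective, so the
    # continuations most promising for us are those with the LOWEST dynamic
    # eval; keep only the top `branching` of them instead of searching all.
    ranked = sorted(node.children,
                    key=lambda c: dynamic_eval(c, profile))[:profile.branching]
    # Standard negamax over the retained continuations.
    return max(-selective_negamax(c, depth - 1, profile) for c in ranked)
```

For example, with a neutral profile the search simply picks the move minimizing the opponent's score, while a nonzero `sharpness_bias` can change which continuations are retained and therefore which lines the trainee is shown.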

Notes

  1. https://github.com/amchess/ShashChess.

  2. https://gitlab.com/amchess/shashchess.

  3. https://gitlab.com/amchess/brainlearn.

  4. See for instance: https://ccrl.chessdom.com/ccrl/4040/rating_list_all.html, http://fastgm.de/60-0.60.html, https://sites.google.com/site/computerschess/scct-cs-1-3600-elo?pli=1.


Acknowledgments

We thank Augusto Caruso and Kelly Kinyama for fruitful discussions on some topics dealt with in this paper. We also acknowledge the support of CN-HPC under the PNRR.

Author information

Correspondence to Paolo Ciancarini.

Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper

Cite this paper

Manzo, A., Ciancarini, P. (2023). Enhancing Stockfish: A Chess Engine Tailored for Training Human Players. In: Ciancarini, P., Di Iorio, A., Hlavacs, H., Poggi, F. (eds) Entertainment Computing – ICEC 2023. ICEC 2023. Lecture Notes in Computer Science, vol 14455. Springer, Singapore. https://doi.org/10.1007/978-981-99-8248-6_23

  • DOI: https://doi.org/10.1007/978-981-99-8248-6_23

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8247-9

  • Online ISBN: 978-981-99-8248-6

  • eBook Packages: Computer Science, Computer Science (R0)
