
Associating Shallow and Selective Global Tree Search with Monte Carlo for 9 × 9 Go

  • Conference paper
Computers and Games (CG 2004)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3846)


Abstract

This paper explores the association of shallow and selective global tree search with Monte Carlo evaluation in 9 × 9 Go. The exploration is based on Olga and Indigo, two experimental Monte-Carlo programs. We provide a min-max algorithm that iteratively deepens the tree until one move at the root is proved superior to the others. At each iteration, random games are started at the leaf nodes to compute mean values. The progressive-pruning rule and the min-max rule are applied to non-terminal nodes. We set up experiments demonstrating the relevance of this approach. Indigo used this algorithm at the 8th Computer Olympiad, held in Graz.
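
The abstract describes the algorithm only at a high level. The Python below is a minimal sketch of that shape, not the code of Olga or Indigo; every name (Node, progressive_prune, choose_move), the playout count, the confidence width z, and the dummy random "game" are illustrative assumptions. It only shows how iterative deepening at the root, Monte Carlo means at the leaves, min-max backup, and progressive pruning of statistically inferior moves fit together.

    import math
    import random

    # Toy sketch of the idea in the abstract, under illustrative assumptions.
    # Not the authors' Olga/Indigo code.

    class Node:
        def __init__(self, state, to_move):
            self.state = state        # opaque game state (here: a tuple of moves)
            self.to_move = to_move    # +1 = max player to move, -1 = min player
            self.children = []
            self.mean = 0.0           # backed-up mean outcome for the max player
            self.n = 0                # number of playouts supporting the mean
            self.pruned = False

    def playout(state):
        # Random game to the end; returns the outcome for the max player.
        # A real Go program would play random legal moves and score the board.
        return random.random()

    def leaf_value(node, n_playouts=32):
        # Leaf evaluation: mean of several random games.
        results = [playout(node.state) for _ in range(n_playouts)]
        node.n = n_playouts
        node.mean = sum(results) / n_playouts
        return node.mean

    def std_err(node):
        # Crude standard error for outcomes in [0, 1]; a real program would
        # track the sample variance of the playout results instead.
        return 0.5 / math.sqrt(max(node.n, 1))

    def progressive_prune(children, maximizing, z=2.0):
        # Progressive pruning: drop a sibling whose confidence interval lies
        # entirely on the wrong side of the best sibling's interval.
        alive = [c for c in children if not c.pruned]
        if len(alive) < 2:
            return
        best = max(alive, key=lambda c: c.mean if maximizing else -c.mean)
        for c in alive:
            if c is best:
                continue
            if maximizing and c.mean + z * std_err(c) < best.mean - z * std_err(best):
                c.pruned = True
            if not maximizing and c.mean - z * std_err(c) > best.mean + z * std_err(best):
                c.pruned = True

    def minmax(node, depth, expand, n_playouts=32):
        # Depth-limited min-max whose leaf values are Monte Carlo means.
        # `expand` populates node.children (idempotently) and returns False
        # for terminal positions.
        if depth == 0 or not expand(node):
            return leaf_value(node, n_playouts)
        maximizing = node.to_move == +1
        values = []
        for child in node.children:
            if not child.pruned:
                values.append(minmax(child, depth - 1, expand, n_playouts))
        progressive_prune(node.children, maximizing)
        node.n = sum(c.n for c in node.children)
        node.mean = max(values) if maximizing else min(values)
        return node.mean

    def choose_move(root, expand, max_depth=3):
        # Iterative deepening at the root: deepen until only one unpruned move
        # remains (it is then considered proved superior) or the depth limit is hit.
        for depth in range(1, max_depth + 1):
            minmax(root, depth, expand)
            alive = [c for c in root.children if not c.pruned]
            if len(alive) == 1:
                break
        maximizing = root.to_move == +1
        return max(alive, key=lambda c: c.mean if maximizing else -c.mean)

    if __name__ == "__main__":
        # Toy demonstration on a uniform tree of branching factor 3.
        def expand(node):
            if node.children:
                return True
            if len(node.state) >= 4:   # terminal in the toy game
                return False
            node.children = [Node(node.state + (i,), -node.to_move) for i in range(3)]
            return True
        root = Node(state=(), to_move=+1)
        best = choose_move(root, expand)
        print("chosen root move:", best.state, "estimated value:", round(best.mean, 3))

In the actual programs the playouts are random 9 × 9 Go games scored at the end, as the abstract states; the random number used here merely keeps the sketch self-contained.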





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bouzy, B. (2006). Associating Shallow and Selective Global Tree Search with Monte Carlo for 9 × 9 Go. In: van den Herik, H.J., Björnsson, Y., Netanyahu, N.S. (eds) Computers and Games. CG 2004. Lecture Notes in Computer Science, vol 3846. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11674399_5


  • DOI: https://doi.org/10.1007/11674399_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-32488-1

  • Online ISBN: 978-3-540-32489-8

  • eBook Packages: Computer Science, Computer Science (R0)
