Knowledge Generation for Improving Simulations in UCT for General Game Playing

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5360)

Abstract

General Game Playing (GGP) aims at developing game-playing agents that are able to play a variety of games and, in the absence of pre-programmed game-specific knowledge, become proficient players. Most GGP players have used standard tree-search techniques enhanced by automatic heuristic learning. The UCT algorithm, a simulation-based tree search, is a newer approach that has been used successfully in GGP. However, it relies heavily on random simulations to assign values to unvisited nodes and to select nodes when descending the tree, which can slow its convergence. In this paper, we discuss the generation and evolution of domain-independent knowledge based on both state and move patterns, which is then used to guide the simulations in UCT. To test the improvements, we play matches between a player using the standard UCT algorithm and one using UCT enhanced with this knowledge.
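
The abstract combines two ingredients: UCB1-style node selection inside the UCT tree, and simulations whose move choices are steered by learned state and move patterns rather than being uniformly random. The sketch below is a minimal illustration of that general idea only; the Node/State interfaces, the pattern() accessor, and the weight table are hypothetical and do not reflect the authors' implementation.

```python
import math
import random

# Minimal sketch (hypothetical interfaces, not the paper's code):
# (1) UCB1-based child selection used during the UCT tree descent, and
# (2) a playout policy whose random move choices are biased by learned
#     pattern weights instead of being uniformly random.

def ucb1_select(node, c=1.4):
    """Select the child maximising average reward plus the UCB1 exploration
    bonus; assumes every child has already been visited at least once."""
    log_n = math.log(node.visits)
    return max(
        node.children,
        key=lambda ch: ch.total_reward / ch.visits + c * math.sqrt(log_n / ch.visits),
    )

def biased_playout(state, pattern_weights):
    """Play one simulation to a terminal state. Moves whose (state, move)
    pattern carries a higher learned weight are sampled more often; unseen
    patterns default to 1.0, which degenerates to a uniform random playout."""
    while not state.is_terminal():
        moves = state.legal_moves()
        weights = [pattern_weights.get(state.pattern(m), 1.0) for m in moves]
        state = state.apply(random.choices(moves, weights=weights, k=1)[0])
    return state.reward()
```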

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sharma, S., Kobti, Z., Goodwin, S. (2008). Knowledge Generation for Improving Simulations in UCT for General Game Playing. In: Wobcke, W., Zhang, M. (eds) AI 2008: Advances in Artificial Intelligence. Lecture Notes in Computer Science (LNAI), vol 5360. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89378-3_5

  • DOI: https://doi.org/10.1007/978-3-540-89378-3_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-89377-6

  • Online ISBN: 978-3-540-89378-3

  • eBook Packages: Computer Science (R0)
