
Evolving Neural Networks to Play Go

Published in Applied Intelligence.

Abstract

Go is a difficult game for computers to master, and the best go programs are still weaker than the average human player. Since traditional game-playing techniques have proven inadequate, new approaches to computer go need to be studied. This paper presents one such approach: the SANE (Symbiotic, Adaptive Neuro-Evolution) method was used to evolve networks capable of playing go on small boards with no pre-programmed go knowledge. On a 9 × 9 board, networks able to defeat a simple computer opponent were evolved within a few hundred generations. Most significantly, the networks exhibited several aspects of general go playing, which suggests that the approach could scale up well.
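The core of SANE, as described by Moriarty and Miikkulainen, is that evolution operates on a population of individual neurons rather than complete networks: each generation, networks are assembled from random subsets of neurons, evaluated, and each neuron is credited with the average fitness of the networks it participated in. The sketch below illustrates that symbiotic credit-assignment loop in Python on a toy XOR task standing in for board evaluation; the population sizes, mutation rate, and fitness function are illustrative assumptions, not the paper's actual go setup.

```python
import random

random.seed(0)

N_INPUTS, N_HIDDEN, N_OUTPUTS = 2, 3, 1
POP_SIZE = 60        # population of individual hidden neurons, not networks
NETS_PER_GEN = 100   # networks assembled and evaluated per generation

def new_neuron():
    # A neuron genome: its input-side weights and its output-side weights.
    return {"w_in": [random.uniform(-1, 1) for _ in range(N_INPUTS)],
            "w_out": [random.uniform(-1, 1) for _ in range(N_OUTPUTS)],
            "scores": []}

def forward(neurons, x):
    # One hidden layer built from the sampled neurons (ReLU activation).
    hidden = [max(0.0, sum(w * xi for w, xi in zip(n["w_in"], x))) for n in neurons]
    return [sum(h * n["w_out"][o] for h, n in zip(hidden, neurons))
            for o in range(N_OUTPUTS)]

# Toy stand-in for board evaluation: negative squared error on XOR.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(neurons):
    return -sum((forward(neurons, x)[0] - y) ** 2 for x, y in XOR)

def neuron_score(n):
    # A neuron's fitness is the average fitness of the networks it joined.
    return sum(n["scores"]) / len(n["scores"]) if n["scores"] else float("-inf")

def evolve(generations=60):
    pop = [new_neuron() for _ in range(POP_SIZE)]
    best = float("-inf")
    for _ in range(generations):
        for n in pop:
            n["scores"].clear()
        for _ in range(NETS_PER_GEN):
            team = random.sample(pop, N_HIDDEN)  # assemble a network from random neurons
            f = fitness(team)
            best = max(best, f)
            for n in team:
                n["scores"].append(f)            # credit is shared symbiotically
        pop.sort(key=neuron_score, reverse=True)
        survivors = pop[:POP_SIZE // 2]
        # Refill the population with mutated copies of top-ranked neurons.
        while len(survivors) < POP_SIZE:
            parent = random.choice(survivors[:POP_SIZE // 4])
            survivors.append({"w_in": [w + random.gauss(0, 0.3) for w in parent["w_in"]],
                              "w_out": [w + random.gauss(0, 0.3) for w in parent["w_out"]],
                              "scores": []})
        pop = survivors
    return best

if __name__ == "__main__":
    print("best fitness found:", evolve())
```

Because no single neuron can solve the task alone, selection pressure favors neurons that cooperate well with others, which is the "symbiotic" aspect the method's name refers to.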



Cite this article

Richards, N., Moriarty, D.E. & Miikkulainen, R. Evolving Neural Networks to Play Go. Applied Intelligence 8, 85–96 (1998). https://doi.org/10.1023/A:1008224732364
