Abstract
Artificial neural networks have been used successfully to approximate value functions in decision-making tasks. In domains where the criteria for good decisions shift as the overall state changes, we hypothesize that multiple neural networks approximate the value function better than a single network. The card game Dominion was chosen as the experimental domain. This work compares neural networks trained by machine learning methods applied successfully to other games (such as in TD-Gammon) against a genetic algorithm that evolves two neural networks for different phases of the game, along with the transition point between them. The results show a greater success ratio for the hypothesized method. This suggests that future work on more complex multiple-network configurations could apply both to this game domain and to other problems.
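The core idea of the abstract, selecting between two phase-specific networks by an evolved transition point, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network structure, the weights, the `progress` feature, and the threshold value are all assumptions chosen for clarity.

```python
import math

def make_net(weights):
    """A minimal single-neuron 'network': tanh of a weighted sum of features."""
    def net(features):
        return math.tanh(sum(w * f for w, f in zip(weights, features)))
    return net

# Illustrative weights only; in the paper these would be evolved.
early_net = make_net([0.8, -0.2, 0.5])   # tuned for the opening phase
late_net = make_net([-0.3, 0.9, 0.1])    # tuned for the endgame
transition_point = 0.5  # evolved fraction of game progress (hypothetical value)

def value(features, progress):
    """Score a game state with the network matching the current phase."""
    net = early_net if progress < transition_point else late_net
    return net(features)
```

In a genetic-algorithm setting, an individual's genome would encode both weight vectors and the transition point, so selection pressure can move the phase boundary as well as the per-phase evaluations.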
© 2013 Springer-Verlag Berlin Heidelberg
Winder, R.K. (2013). Generating Artificial Neural Networks for Value Function Approximation in a Domain Requiring a Shifting Strategy. In: Esparcia-Alcázar, A.I. (eds) Applications of Evolutionary Computation. EvoApplications 2013. Lecture Notes in Computer Science, vol 7835. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37192-9_30
Print ISBN: 978-3-642-37191-2
Online ISBN: 978-3-642-37192-9