Abstract
NeuroEvolution of Augmenting Topologies (NEAT) has been successfully applied to intelligent gameplay. A key technique for further improving its effectiveness is to reuse the knowledge learned on source gameplay tasks to boost performance on target gameplay tasks. We formulate this as a Transfer Learning (TL) problem. However, the Artificial Neural Networks (ANNs) evolved by NEAT are often unnecessarily complicated, which may harm their transferability. To address this issue, in this paper we investigate the capability of Phased Searching (PS) methods to control the complexity of evolved ANNs while maintaining their effectiveness, thereby obtaining more transferable ANNs. Furthermore, we propose a new Power-Law Ranking Probability based PS (PLPS) method to more effectively control the randomness during the simplification phase. Several recent PS methods, as well as our PLPS, have been evaluated on four carefully designed TL experiments. The results clearly show that NEAT can evolve more transferable and structurally simpler ANNs with the help of PS methods, in particular PLPS.
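The abstract does not give the PLPS formula, so the following is only a minimal sketch of the general idea of power-law ranking probabilities: during the simplification phase, individuals (or structural elements to prune) are ranked, and the probability of picking rank i decays as a power law i^(-alpha). The function names and the `alpha` parameter are assumptions for illustration, not the paper's actual method.

```python
import random


def power_law_rank_probs(n, alpha=1.5):
    """Selection probability for each of n ranked candidates (rank 1 first),
    proportional to rank**(-alpha) and normalised to sum to 1.
    Larger alpha concentrates probability on the top ranks, i.e. less
    randomness; alpha near 0 approaches uniform random selection."""
    weights = [(i + 1) ** -alpha for i in range(n)]
    total = sum(weights)
    return [w / total for w in weights]


def select_by_rank(ranked_candidates, alpha=1.5, rng=random):
    """Pick one candidate from a best-first ranked list, biased towards
    top ranks by the power-law distribution above."""
    probs = power_law_rank_probs(len(ranked_candidates), alpha)
    return rng.choices(ranked_candidates, weights=probs, k=1)[0]
```

For example, `power_law_rank_probs(5)` assigns the highest probability to rank 1 and strictly decreasing probabilities to lower ranks, giving a tunable middle ground between greedy and uniform selection during simplification.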
Notes
1. All source code and detailed parameter settings can be found at https://github.com/willhs/NEAT-Mario.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Hardwick-Smith, W., Peng, Y., Chen, G., Mei, Y., Zhang, M. (2017). Evolving Transferable Artificial Neural Networks for Gameplay Tasks via NEAT with Phased Searching. In: Peng, W., Alahakoon, D., Li, X. (eds.) AI 2017: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol. 10400. Springer, Cham. https://doi.org/10.1007/978-3-319-63004-5_4
DOI: https://doi.org/10.1007/978-3-319-63004-5_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-63003-8
Online ISBN: 978-3-319-63004-5