Evolving Transferable Artificial Neural Networks for Gameplay Tasks via NEAT with Phased Searching

  • Conference paper
  • First Online:
AI 2017: Advances in Artificial Intelligence (AI 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10400)

Included in the following conference series:

Abstract

NeuroEvolution of Augmenting Topologies (NEAT) has been successfully applied to intelligent gameplay. To further improve its effectiveness, a key technique is to reuse the knowledge learned from source gameplay tasks to boost performance on target gameplay tasks. We treat this as a Transfer Learning (TL) problem. However, the Artificial Neural Networks (ANNs) evolved by NEAT are often unnecessarily complicated, which can harm their transferability. To address this issue, this paper investigates the capability of Phased Searching (PS) methods to control ANN complexity while maintaining effectiveness, thereby producing more transferable ANNs. We further propose a new Power-Law Ranking Probability based PS (PLPS) method that controls the randomness of the simplification phase more effectively. Several recent PS methods, as well as PLPS, have been evaluated on four carefully designed TL experiments. The results clearly show that, with the help of PS methods and PLPS in particular, NEAT evolves ANNs that are both structurally simpler and more transferable.
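
The exact PLPS formulation is given in the full paper and is not reproduced on this page. Purely as an illustrative sketch, the following Python snippet shows one way a power-law ranking probability can bias which connection genes are removed during a simplification phase. Every name here (power_law_rank_probabilities, sample_connection_to_prune, the importance scores, and the exponent alpha) is an assumption made for illustration, not the authors' implementation.

import random

def power_law_rank_probabilities(n, alpha=1.5):
    # Weight the item at rank r (1 = first candidate to prune) by r**(-alpha),
    # then normalise so the weights form a probability distribution.
    weights = [r ** (-alpha) for r in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_connection_to_prune(connections, alpha=1.5):
    # `connections` is a list of (connection_id, importance) pairs; the least
    # important connections are ranked first, so they are the most likely to
    # be removed during a simplification phase.
    ranked = sorted(connections, key=lambda c: c[1])
    probs = power_law_rank_probabilities(len(ranked), alpha)
    return random.choices(ranked, weights=probs, k=1)[0]

if __name__ == "__main__":
    conns = [("c1", 0.90), ("c2", 0.10), ("c3", 0.40), ("c4", 0.05)]
    print(sample_connection_to_prune(conns))  # most often "c4" or "c2"

In this sketch a larger alpha concentrates the pruning probability on the lowest-ranked connections, while an alpha near zero approaches uniform random removal; the actual ranking criterion and probability model used by PLPS should be taken from the paper and the linked source code.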

Notes

  1. All source code and detailed parameter settings can be found at https://github.com/willhs/NEAT-Mario.

Author information

Corresponding author

Correspondence to Yiming Peng.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Hardwick-Smith, W., Peng, Y., Chen, G., Mei, Y., Zhang, M. (2017). Evolving Transferable Artificial Neural Networks for Gameplay Tasks via NEAT with Phased Searching. In: Peng, W., Alahakoon, D., Li, X. (eds) AI 2017: Advances in Artificial Intelligence. AI 2017. Lecture Notes in Computer Science (LNAI), vol 10400. Springer, Cham. https://doi.org/10.1007/978-3-319-63004-5_4

  • DOI: https://doi.org/10.1007/978-3-319-63004-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63003-8

  • Online ISBN: 978-3-319-63004-5

  • eBook Packages: Computer Science, Computer Science (R0)
