
Adaptive State Space Abstraction Using Neuroevolution

  • Conference paper
Agents and Artificial Intelligence (ICAART 2009)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 67)

Abstract

In this paper, we present a new machine learning algorithm, RL-SANE, which combines neuroevolution (NE) with traditional reinforcement learning (RL) techniques to improve learning performance. RL-SANE is an innovative combination of the neuroevolutionary algorithm NEAT [9] and the RL algorithm Sarsa(λ) [12]. It exploits NEAT's ability to generate and train customized neural networks, which provide a means of reducing the size of the state space through state aggregation. Reducing the size of the state space through aggregation enables Sarsa(λ) to be applied to much more difficult problems than standard tabular approaches can handle. Previous work in this area, such as that of Whiteson and Stone [15] and Stanley and Miikkulainen [10], has shown positive results. This paper gives a brief overview of neuroevolutionary methods, introduces the RL-SANE algorithm, presents a comparative analysis of RL-SANE against other neuroevolutionary algorithms, and concludes with a discussion of enhancements that need to be made to RL-SANE.
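The core idea described above — an evolved network that compresses a raw observation into a small set of aggregate states, over which a tabular Sarsa(λ) learner then operates — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the fixed sigmoid in `abstraction_net` stands in for a NEAT-evolved network, and all names, weights, and hyperparameters here are hypothetical.

```python
import math
import random

random.seed(0)

N_BINS = 10          # number of aggregate states produced by binning
N_ACTIONS = 2
ALPHA, GAMMA, LAMBDA, EPSILON = 0.1, 0.99, 0.9, 0.1

# Stand-in for an evolved network: maps a raw observation vector to a
# single bounded output in (0, 1).  In RL-SANE this mapping is evolved
# by NEAT; here it is a fixed sigmoid over a weighted sum.
WEIGHTS = [0.7, -1.3]

def abstraction_net(obs):
    z = sum(w * x for w, x in zip(WEIGHTS, obs))
    return 1.0 / (1.0 + math.exp(-z))

def aggregate_state(obs):
    """Bin the network's bounded output into one of N_BINS aggregate states."""
    return min(int(abstraction_net(obs) * N_BINS), N_BINS - 1)

# Tabular Sarsa(lambda) over the aggregated state space: because the
# network collapses the continuous observation space to N_BINS states,
# an ordinary Q-table suffices.
Q = [[0.0] * N_ACTIONS for _ in range(N_BINS)]
E = [[0.0] * N_ACTIONS for _ in range(N_BINS)]   # eligibility traces

def epsilon_greedy(s):
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[s][a])

def sarsa_lambda_step(s, a, reward, s2, a2):
    """One on-policy Sarsa(lambda) update with replacing traces."""
    delta = reward + GAMMA * Q[s2][a2] - Q[s][a]
    E[s][a] = 1.0
    for i in range(N_BINS):
        for j in range(N_ACTIONS):
            Q[i][j] += ALPHA * delta * E[i][j]
            E[i][j] *= GAMMA * LAMBDA

# One illustrative transition through the aggregated state space.
s  = aggregate_state([0.4, -0.2])
a  = epsilon_greedy(s)
s2 = aggregate_state([0.5, -0.1])
a2 = epsilon_greedy(s2)
sarsa_lambda_step(s, a, reward=1.0, s2=s2, a2=a2)
```

In RL-SANE proper, each candidate network in the NEAT population would be evaluated by how well the Sarsa(λ) learner performs over the aggregation it induces, and that fitness drives the evolutionary search; the sketch above shows only the inner learning loop for one fixed network.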


References

  1. Boyan, J.A., Moore, A.W.: Generalization in reinforcement learning: Safely approximating the value function. In: Tesauro, G., Touretzky, D.S., Leen, T.K. (eds.) Advances in Neural Information Processing Systems, vol. 7, pp. 369–376. The MIT Press, Cambridge (1995)

  2. Carreras, M., Ridao, P., Batlle, J., Nicosebici, T., Ursulovici, Z.: Learning reactive robot behaviors with neural-Q learning. In: IEEE-TTTC International Conference on Automation, Quality and Testing, Robotics. IEEE, Los Alamitos (2002)

  3. Gomez, F.J., Miikkulainen, R.: Solving non-Markovian control tasks with neuroevolution. In: IJCAI 1999: Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pp. 1356–1361. Morgan Kaufmann Publishers Inc., San Francisco (1999)

  4. James, D., Tucker, P.: A comparative analysis of simplification and complexification in the evolution of neural network topologies. In: Proceedings of the 2004 Conference on Genetic and Evolutionary Computation (GECCO 2004) (2004)

  5. Moriarty, D.E., Miikkulainen, R.: Forming neural networks through efficient and adaptive coevolution. Evolutionary Computation 5, 373–399 (1997)

  6. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. In: Neurocomputing: Foundations of Research, pp. 696–699 (1988)

  7. Siebel, N.T., Krause, J., Sommer, G.: Efficient learning of neural networks with evolutionary algorithms. In: Hamprecht, F.A., Schnörr, C., Jähne, B. (eds.) DAGM 2007. LNCS, vol. 4713, pp. 466–475. Springer, Heidelberg (2007)

  8. Singh, S.P., Jaakkola, T., Jordan, M.I.: Reinforcement learning with soft state aggregation. In: Tesauro, G., Touretzky, D., Leen, T. (eds.) Advances in Neural Information Processing Systems, vol. 7, pp. 361–368. The MIT Press, Cambridge (1995)

  9. Stanley, K.O.: Efficient evolution of neural networks through complexification. PhD thesis, The University of Texas at Austin. Supervisor: Risto P. Miikkulainen (2004)

  10. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Tech. rep., University of Texas at Austin, Austin, TX, USA (2001)

  11. Stanley, K.O., Miikkulainen, R.: Efficient reinforcement learning through evolving neural network topologies. In: GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 569–577. Morgan Kaufmann Publishers Inc., San Francisco (2002)

  12. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). The MIT Press, Cambridge (1998)

  13. Tesauro, G.: Temporal difference learning and TD-Gammon. Commun. ACM 38(3), 58–68 (1995)

  14. Watkins, C.J.C.H., Dayan, P.: Q-learning. Machine Learning 8(3–4), 279–292 (1992)

  15. Whiteson, S., Stone, P.: Evolutionary function approximation for reinforcement learning. Journal of Machine Learning Research 7, 877–917 (2006)



Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wright, R., Gemelli, N. (2010). Adaptive State Space Abstraction Using Neuroevolution. In: Filipe, J., Fred, A., Sharp, B. (eds) Agents and Artificial Intelligence. ICAART 2009. Communications in Computer and Information Science, vol 67. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11819-7_7


  • DOI: https://doi.org/10.1007/978-3-642-11819-7_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-11818-0

  • Online ISBN: 978-3-642-11819-7

  • eBook Packages: Computer Science (R0)
