
Neural Q-learning

Published in Neural Computing & Applications.

Abstract

In this paper we introduce a novel neural reinforcement learning method. Unlike existing methods, our approach does not need a model of the system and can be trained directly from measurements of the system. We achieve this by using only one function approximator, from which the improved policy is derived. An experiment with a mobile robot shows that the method can be trained on a real system within a reasonable time.
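A minimal sketch of the idea described in the abstract: a single network approximates Q(s, a) from measured transitions, with no model of the system, and the improved policy is simply the greedy policy read off that same approximator. Everything below, including the network sizes, learning rate, and function names, is an illustrative assumption and not the authors' implementation.

```python
import numpy as np

# Illustrative sizes and hyperparameters (hypothetical, not from the paper).
N_ACTIONS, STATE_DIM, HIDDEN = 3, 4, 16
ALPHA, GAMMA = 0.01, 0.95

rng = np.random.default_rng(0)
# A single two-layer network approximates Q(s, .) for all actions at once.
W1 = rng.normal(scale=0.1, size=(HIDDEN, STATE_DIM))
W2 = rng.normal(scale=0.1, size=(N_ACTIONS, HIDDEN))

def q_values(s):
    """Forward pass: Q-values for every action, plus the hidden activations."""
    h = np.tanh(W1 @ s)
    return W2 @ h, h

def policy(s):
    """The improved policy is read off the same approximator: greedy in Q."""
    q, _ = q_values(s)
    return int(np.argmax(q))

def update(s, a, r, s_next):
    """One model-free Q-learning step from a measured transition (s, a, r, s')."""
    global W1
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    td_error = r + GAMMA * q_next.max() - q[a]
    # Backpropagate the TD error through the chosen action's output;
    # compute the hidden gradient before W2 is modified.
    dh = W2[a] * (1.0 - h ** 2)
    W2[a] += ALPHA * td_error * h
    W1 += ALPHA * td_error * np.outer(dh, s)

# One update from a single measured transition; in practice s, r, s_next
# would come from the real system's sensors, as in the paper's
# mobile-robot experiment.
s = rng.normal(size=STATE_DIM)
a = policy(s)
s_next, r = rng.normal(size=STATE_DIM), 1.0  # stand-ins for real measurements
update(s, a, r, s_next)
```

Because the greedy policy is derived directly from the learned Q-function, no second approximator for the policy and no system model are needed; the transitions themselves carry all the required information.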

Author information

Correspondence to Stephan ten Hagen.

About this article

Cite this article

ten Hagen, S., Kröse, B. Neural Q-learning. Neural Computing & Applications 12, 81–88 (2003). https://doi.org/10.1007/s00521-003-0369-9
