
Continuous-Time Q-Learning for Infinite-Horizon Discounted Cost Linear Quadratic Regulator Problems



Abstract:

This paper presents a Q-learning method for solving the discounted linear quadratic regulator (LQR) problem for continuous-time (CT) continuous-state systems. Most methods available in the existing literature for solving the LQR problem for CT systems require partial or complete knowledge of the system dynamics. Q-learning is effective for unknown dynamical systems, but has generally been well understood only for discrete-time systems. The contribution of this paper is a Q-learning methodology for CT systems that solves the LQR problem without any knowledge of the system dynamics. A natural and rigorously justified parameterization of the Q-function is given in terms of the state, the control input, and its derivatives. This parameterization allows the implementation of an online Q-learning algorithm for CT systems. Simulation results supporting the theoretical development are also presented.
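To make the idea of learning a quadratic Q-function from data concrete, the following is a minimal, self-contained sketch of the classical Q-learning scheme for a *discrete-time* discounted LQR, using the standard parameterization Q(x,u) = [x; u]^T H [x; u]. This is not the paper's continuous-time algorithm; the system matrices A and B, the cost weights, the discount factor, and all other constants below are hypothetical and are used only to generate data, which the learner consumes without ever reading the dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 1-input system (used only to simulate data;
# the learner never reads A or B).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q_cost = np.eye(2)
R_cost = np.array([[1.0]])
gamma = 0.95

n, m = 2, 1
p = n + m                        # dimension of z = [x; u]

def phi(z):
    """Quadratic features: upper-triangular entries of z z^T,
    with off-diagonal terms doubled so theta matches H entrywise."""
    outer = np.outer(z, z)
    rows, cols = np.triu_indices(p)
    scale = np.where(rows == cols, 1.0, 2.0)
    return scale * outer[rows, cols]

K = np.zeros((m, n))             # initial policy u = -K x

for it in range(20):
    # Roll out the current policy with exploration noise and record
    # Bellman-equation regressors: theta^T (phi(z) - gamma phi(z')) = cost.
    X, y = [], []
    x = rng.standard_normal(n)
    for t in range(400):
        u = -K @ x + 0.5 * rng.standard_normal(m)
        cost = x @ Q_cost @ x + u @ R_cost @ u
        x_next = A @ x + B @ u
        u_next = -K @ x_next     # action of the current policy at x'
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, u_next])
        X.append(phi(z) - gamma * phi(z_next))
        y.append(cost)
        x = x_next if np.linalg.norm(x_next) < 1e3 else rng.standard_normal(n)

    theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

    # Rebuild the symmetric kernel H of Q(x,u) = [x; u]^T H [x; u].
    H = np.zeros((p, p))
    H[np.triu_indices(p)] = theta
    H = H + H.T - np.diag(np.diag(H))

    # Policy improvement: u = -Huu^{-1} Hux x.
    Hxu, Huu = H[:n, n:], H[n:, n:]
    K = np.linalg.solve(Huu, Hxu.T)

print("Learned feedback gain K:\n", K)
```

The paper's contribution differs from this sketch in that the Q-function is parameterized for continuous-time systems in terms of the state, the control input, and its derivatives, rather than relying on a discrete-time transition model.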
Published in: IEEE Transactions on Cybernetics ( Volume: 45, Issue: 2, February 2015)
Page(s): 165 - 176
Date of Publication: 29 May 2014

PubMed ID: 24879648

