
Output Feedback Optimal Tracking Control Using Reinforcement Q-Learning



Abstract:

In this paper, we present an output feedback Q-learning scheme to solve the discrete-time optimal tracking problem for linear systems. The problem consists of finding the optimal feedback and feedforward control gains to achieve asymptotic tracking without knowledge of the system dynamics. Both policy iteration and value iteration algorithms, of which the latter does not require an initially stabilizing policy, are developed for learning the optimal feedback gain. The optimal feedforward gain is learned by an adaptive algorithm that guarantees convergence of the tracking error to zero. Only the system's relative degree and the sign of its gain are needed for adaptive tracking. The proposed technique is not affected by excitation noise and does not require a discounting factor, which has in the past been a bottleneck to guaranteeing stability. The learned control parameters are optimal and match the solution of the Riccati equation exactly. Simulation results show the effectiveness of the scheme.
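To illustrate the kind of value-iteration Q-learning the abstract refers to, the sketch below implements a model-free value iteration for the discrete-time LQR regulation problem with full state feedback. It is a minimal, assumed setup, not the authors' algorithm: the paper works with output feedback, a tracking (feedback plus feedforward) objective, and an adaptive feedforward update, none of which appear here. The example system (a discretized double integrator), the weights Qc and Rc, and the helper names quad_basis and theta_to_H are all illustrative choices.

import numpy as np

# Simulated plant, used only to generate data; the learner never reads A or B directly.
# (Hypothetical example system: a discretized double integrator.)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
n, m = A.shape[0], B.shape[1]
Qc, Rc = np.eye(n), np.eye(m)      # quadratic state and input weights

def quad_basis(z):
    """Quadratic basis so that quad_basis(z) @ theta == z @ H @ z
    when theta holds the upper-triangular entries of the symmetric kernel H."""
    outer = np.outer(z, z)
    rows, cols = np.triu_indices(len(z))
    scale = np.where(rows == cols, 1.0, 2.0)   # off-diagonal terms appear twice in z'Hz
    return scale * outer[rows, cols]

def theta_to_H(theta, d):
    """Rebuild the symmetric kernel H from its upper-triangular parameter vector."""
    H = np.zeros((d, d))
    H[np.triu_indices(d)] = theta
    return H + H.T - np.diag(np.diag(H))

d = n + m
H = np.zeros((d, d))               # value iteration starts from H = 0: no stabilizing policy needed
K = np.zeros((m, n))
rng = np.random.default_rng(0)

for _ in range(200):               # value-iteration sweeps
    # Greedy gain and state-value kernel implied by the current Q-kernel H.
    Hxx, Hxu, Hux, Huu = H[:n, :n], H[:n, n:], H[n:, :n], H[n:, n:]
    K = np.linalg.solve(Huu, Hux) if np.linalg.matrix_rank(Huu) == m else np.zeros((m, n))
    P = Hxx - Hxu @ K              # V_k(x) = x' P x = min_u Q_k(x, u)

    # Collect exploratory transitions and regress the Bellman targets onto the basis.
    Phi, targets = [], []
    x = rng.standard_normal(n)
    for _ in range(60):
        u = rng.standard_normal(m)                     # persistently exciting input
        x_next = A @ x + B @ u
        cost = x @ Qc @ x + u @ Rc @ u
        Phi.append(quad_basis(np.concatenate([x, u])))
        targets.append(cost + x_next @ P @ x_next)     # one-step Bellman backup
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(targets), rcond=None)
    H = theta_to_H(theta, d)

# Model-based reference: fixed-point iteration on the discrete-time Riccati equation.
P_star = np.eye(n)
for _ in range(500):
    G = np.linalg.solve(Rc + B.T @ P_star @ B, B.T @ P_star @ A)
    P_star = Qc + A.T @ P_star @ (A - B @ G)
print("learned gain K :", K.round(4))
print("Riccati gain K*:", G.round(4))

The sketch reflects the property the abstract highlights for value iteration: the Q-kernel is initialized at zero and the data comes from an exploratory input sequence, so no initially stabilizing policy is required, and no discounting factor is used. The learned gain K can be compared against the Riccati-based gain K* printed at the end; in this noise-free linear-quadratic setting the two coincide up to numerical precision.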
Date of Conference: 27-29 June 2018
Date Added to IEEE Xplore: 16 August 2018
Electronic ISSN: 2378-5861
Conference Location: Milwaukee, WI, USA
