
Data-Driven H∞ Optimal Output Feedback Control for Linear Discrete-Time Systems Based on Off-Policy Q-Learning



Abstract:

This article develops two novel output feedback (OPFB) Q-learning algorithms, on-policy Q-learning and off-policy Q-learning, to solve the H∞ static OPFB control problem for linear discrete-time (DT) systems. The primary contribution of the proposed algorithms lies in a newly developed OPFB control law for completely unknown systems. Under the premise that the disturbance-attenuation condition is satisfied, conditions for the existence of the optimal OPFB solution are given. The convergence of the proposed Q-learning methods is rigorously proven, as are the differences and the equivalence between the two algorithms. Moreover, whereas probing noise must be injected into the control input to guarantee the persistence of excitation (PE), the proposed off-policy Q-learning method has the advantage of being immune to this noise, thereby avoiding a biased solution. Simulation results are presented to verify the effectiveness of the proposed approaches.
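
To make the mechanism concrete, the following is a minimal sketch (not the paper's algorithm) of off-policy Q-learning for the underlying zero-sum game, written with full-state feedback for simplicity; the paper's static OPFB variant instead restricts the policy to the measured output y_k = C x_k under the existence conditions it derives. All system matrices, weights, and the attenuation level below are illustrative assumptions.

```python
import numpy as np

# Hypothetical example system x_{k+1} = A x_k + B u_k + E w_k; the matrices,
# weights, and gamma below are illustrative assumptions, not from the paper.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.1], [0.0]])
n, m, q = 2, 1, 1
Qy = np.eye(n)        # stage weight (output weight; here y = x for simplicity)
R = np.eye(m)         # control weight
gamma2 = 5.0          # squared disturbance-attenuation level gamma^2

def vec_sym(z):
    """Quadratic basis for z^T H z with symmetric H (upper-triangular terms;
    off-diagonal products are doubled so coefficients equal entries of H)."""
    iu = np.triu_indices(z.size)
    outer = (z @ z.T)[iu]
    scale = np.where(iu[0] == iu[1], 1.0, 2.0)
    return outer * scale

def gains_from_H(H):
    """Saddle-point gains from the Q-function kernel H, partitioned
    conformally with z = [x; u; w]: u = -K x, w = -L x."""
    blk = H[n:, n:]                       # [[Huu, Huw], [Hwu, Hww]]
    KL = np.linalg.solve(blk, H[n:, :n])  # stacked [K; L]
    return KL[:m, :], KL[m:, :]

# Collect data once with an arbitrary behavior policy plus probing noise.
# Off-policy evaluation means this noise does not bias the learned kernel.
rng = np.random.default_rng(0)
x = np.zeros((n, 1))
data = []
for k in range(400):
    u = -0.3 * x[1:2, :] + 0.5 * rng.standard_normal((m, 1))
    w = 0.3 * rng.standard_normal((q, 1))
    xn = A @ x + B @ u + E @ w
    data.append((x, u, w, xn))
    x = xn

# Off-policy Q-learning (policy iteration on the kernel H): least-squares
# policy evaluation via the Bellman equation, then greedy policy improvement.
K = np.zeros((m, n))
L = np.zeros((q, n))
iu = np.triu_indices(n + m + q)
for it in range(15):
    Phi, b = [], []
    for (xk, uk, wk, xn) in data:
        zk = np.vstack([xk, uk, wk])
        zn = np.vstack([xn, -K @ xn, -L @ xn])   # successor under target policy
        Phi.append(vec_sym(zk) - vec_sym(zn))
        b.append((xk.T @ Qy @ xk + uk.T @ R @ uk - gamma2 * wk.T @ wk).item())
    h = np.linalg.lstsq(np.array(Phi), np.array(b), rcond=None)[0]
    Ht = np.zeros((n + m + q, n + m + q))
    Ht[iu] = h
    H = Ht + Ht.T - np.diag(np.diag(Ht))         # rebuild symmetric kernel
    K, L = gains_from_H(H)

print("control gain K:", K)
print("worst-case disturbance gain L:", L)
```

Because each Bellman residual pairs a logged transition with a successor evaluated under the target policy, the probing noise in the behavior inputs enters only the data, not the equation being solved; this is the source of the unbiasedness noted in the abstract.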
Page(s): 3553 - 3567
Date of Publication: 18 October 2021

PubMed ID: 34662280

