
Optimal Output Feedback Tracking Control for Takagi–Sugeno Fuzzy Systems



Impact Statement:
With the rapid development of artificial intelligence, optimal control has been widely investigated and extended to practical engineering. In addition, fuzzy output feedback tracking control can not only solve the problem of immeasurable states but also ensure that the system output tracks the desired reference trajectory. It should be noted that there are no available results on optimal output feedback tracking controllers for fuzzy discrete-time systems with immeasurable states. Inspired by this consideration, we develop a fuzzy optimal output feedback tracking control method with a Q-learning algorithm by utilizing a state reconstruction methodology for T–S fuzzy discrete-time systems, which not only achieves optimal control performance but also ensures that the system output tracks the given reference signal. Finally, we apply the presented Q-learning optimal control method to a truck-trailer system, and the simulation results validate the effectiveness of the designed optimal control method.

Abstract:

In this study, an optimal output feedback tracking control approach with a Q-learning algorithm is presented for Takagi–Sugeno (T–S) fuzzy discrete-time systems with immeasurable states. First, a state reconstruction method based on measured output and input data is applied to handle the immeasurable-states problem. Then, the optimal output feedback tracking control input policy is designed and reduced to a set of algebraic Riccati equations (AREs). To obtain the solution to the AREs, a Q-learning value iteration (VI) algorithm is formulated, which directly learns each state-action value. The sufficient conditions for the convergence of the proposed optimal algorithm are derived by constructing an approximate Q-function. It is proved that the presented optimal output feedback tracking control method guarantees that the controlled system is stable and its output tracks the given reference signal. Finally, a truck-trailer system is taken as the simulation example, and the simulation results demonstrate the effectiveness of the proposed control method.
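The paper's algorithm reduces the tracking controller to AREs and solves them by Q-learning value iteration on a quadratic state-action value function. The sketch below illustrates that general idea on a plain discrete-time LQR problem, not the authors' fuzzy tracking formulation: the Q-function kernel H is iterated via the Bellman equation until the implied value kernel P satisfies the discrete ARE. The system matrices A, B and costs Qc, Rc are illustrative assumptions, and this model-based iteration omits the data-driven (model-free) aspect of the paper's method.

```python
import numpy as np

# Hypothetical 2-state, 1-input discrete-time system (illustrative only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)           # state cost weight
Rc = np.array([[1.0]])   # input cost weight
n, m = B.shape

# Quadratic Q-function kernel: Q(x, u) = [x; u]^T H [x; u].
# Initialize with the one-step cost only (P_0 = Qc on the state block).
H = np.block([[Qc, np.zeros((n, m))],
              [np.zeros((m, n)), Rc]])

for _ in range(1000):
    Hxx, Hxu = H[:n, :n], H[:n, n:]
    Hux, Huu = H[n:, :n], H[n:, n:]
    # Greedy value kernel implied by the current Q-function:
    # V(x) = min_u Q(x, u) = x^T (Hxx - Hxu Huu^{-1} Hux) x.
    P = Hxx - Hxu @ np.linalg.solve(Huu, Hux)
    # Bellman (value iteration) update of the Q-function kernel.
    H = np.block([[Qc + A.T @ P @ A, A.T @ P @ B],
                  [B.T @ P @ A, Rc + B.T @ P @ B]])

# The greedy policy u = -K x is read directly off the converged kernel.
K = np.linalg.solve(H[n:, n:], H[n:, :n])
print("feedback gain K =", K)
```

At convergence, P solves the discrete ARE, so the learned gain coincides with the LQR gain; the paper's contribution is obtaining the same fixed point from input/output data for each fuzzy rule, without the reconstruction shown here requiring the state itself.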
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 12, December 2024)
Page(s): 6320 - 6329
Date of Publication: 13 August 2024
Electronic ISSN: 2691-4581

