Abstract:
In this paper, we propose a model-free algorithm for global stabilization of linear systems subject to actuator saturation. The idea of gain-scheduled low gain feedback is applied to develop control laws that avoid saturation and achieve global stabilization. To design these control laws, we employ the framework of parameterized algebraic Riccati equations (AREs). Reinforcement learning techniques are developed to find the solution of the parameterized ARE without requiring any knowledge of the system dynamics. In particular, we present an iterative Q-learning algorithm that searches for an appropriate value of the low gain parameter and iteratively solves the parameterized ARE using the Bellman equation. It is shown that the proposed algorithm achieves model-free global stabilization under bounded controls and converges to the solution of the ARE. The proposed scheme neither requires an initially stabilizing policy nor is affected by any excitation noise bias. Simulation results are presented that confirm the effectiveness of the proposed method.
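As a rough illustration of the low gain idea described in the abstract, the sketch below implements gain-scheduled low gain feedback for an assumed discrete-time system x_{k+1} = A x_k + B sat(u_k), with the ε-parameterized ARE solved by SciPy's Riccati solver. The paper's model-free Q-learning step, which obtains this solution from data via the Bellman equation without knowledge of (A, B), is not reproduced here. The system matrices, saturation level u_max, and the simple pointwise scheduling rule for ε are all illustrative assumptions, not taken from the paper.

```python
# Minimal, model-based sketch of gain-scheduled low gain feedback.
# The paper solves the parameterized ARE model-free via Q-learning;
# here scipy's solver stands in for that step. All values are illustrative.
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical open-loop system with eigenvalues on the unit circle
# (not exponentially unstable), so global stabilization under
# bounded controls is possible.
th = 0.1
A = np.array([[np.cos(th),  np.sin(th)],
              [-np.sin(th), np.cos(th)]])   # example only
B = np.array([[0.0],
              [1.0]])
u_max = 1.0                                  # actuator saturation level

def low_gain(eps):
    """Solve the eps-parameterized ARE (Q = eps*I, R = I) and return K(eps)."""
    P = solve_discrete_are(A, B, eps * np.eye(2), np.eye(1))
    K = np.linalg.solve(np.eye(1) + B.T @ P @ B, B.T @ P @ A)
    return K

def scheduled_gain(x, eps=1.0, shrink=0.5, floor=1e-8):
    """Shrink eps until the unsaturated control fits within |u| <= u_max.
    A simplified pointwise check, not the invariant-set condition the
    low gain literature typically uses."""
    while eps > floor:
        K = low_gain(eps)
        if np.abs(K @ x).max() <= u_max:
            return K
        eps *= shrink
    return low_gain(floor)

# Closed-loop simulation from a large initial condition.
x = np.array([[50.0], [-30.0]])
for k in range(200):
    u = np.clip(-scheduled_gain(x) @ x, -u_max, u_max)
    x = A @ x + B @ u
print("final state norm:", np.linalg.norm(x))
```

As the state contracts, the scheduling rule allows ε (and hence the gain) to grow back toward its nominal value, which is the mechanism by which low gain feedback avoids saturation far from the origin without sacrificing convergence near it.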
Published in: 2018 IEEE Conference on Decision and Control (CDC)
Date of Conference: 17-19 December 2018
Date Added to IEEE Xplore: 20 January 2019