Abstract:
The performance of linear-quadratic regulators (LQRs), which possess robustness with guaranteed levels of gain and phase margin, is affected by incomplete state information. By furnishing this regulator with a linear-quadratic estimator (LQE), a recursive algorithm with an optimal feedback law can be obtained. Notwithstanding this, the optimal operation of this so-called linear-quadratic Gaussian (LQG) algorithm might be compromised in transformed stochastic descriptor systems if constraints are violated. Inspired by recent results in stochastic control, we deal with the LQG problem using a direct strategy without any transformations or regularity assumptions. First, an expected quadratic cost function is formulated using regularized least-squares. Using the states estimated by a designed LQE, we then connect the formulation to a constrained recursive minimization problem under Bellman’s principle. To accomplish this, a dynamic programming approach is applied backward over a finite horizon to develop an LQG algorithm for the original system. We conclude the study by stating the separation principle of optimal control and state estimation and verifying the results, where, despite noisy measurements, the controller effectively tracks the reference position while stabilizing the DC motor.
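The two building blocks named in the abstract, a finite-horizon LQR obtained by backward dynamic programming and an LQE (Kalman filter) supplying state estimates, can be sketched for a standard (non-descriptor) state-space system. This is a generic illustration, not the paper's algorithm: the plant matrices, weights, and noise covariances below are placeholder values, and the descriptor-system constraints and regularized least-squares cost treated in the paper are not reproduced here.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, N):
    """Backward Riccati recursion; returns gains K_0..K_{N-1} with u_k = -K_k x_k."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # stage-optimal gain
        P = Q + A.T @ P @ (A - B @ K)                       # cost-to-go update
        gains.append(K)
    return gains[::-1]  # reverse so gains[k] applies at time step k

def lqe_step(x_hat, P_est, u, y, A, B, C, W, V):
    """One predict/update cycle of the Kalman filter (LQE)."""
    x_pred = A @ x_hat + B @ u            # state prediction
    P_pred = A @ P_est @ A.T + W          # covariance prediction
    S = C @ P_pred @ C.T + V              # innovation covariance
    L = np.linalg.solve(S, C @ P_pred).T  # Kalman gain (S is symmetric)
    x_hat = x_pred + L @ (y - C @ x_pred)
    P_est = (np.eye(len(x_hat)) - L @ C) @ P_pred
    return x_hat, P_est

# Placeholder two-state discrete-time plant (hypothetical values, not from the paper)
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])
W, V = 1e-3 * np.eye(2), np.array([[1e-2]])

gains = finite_horizon_lqr(A, B, Q, R, N=50)
```

By the separation principle the abstract invokes, the certainty-equivalent controller simply feeds the LQE estimate into the LQR law, u_k = -K_k x_hat_k, with the two designs computed independently.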
Published in: 2024 IEEE 63rd Conference on Decision and Control (CDC)
Date of Conference: 16-19 December 2024
Date Added to IEEE Xplore: 26 February 2025