Abstract:
Model error and external disturbance have been separately addressed by optimizing the definite H_{\infty} performance in standard linear H_{\infty} control problems. However, handling both concurrently introduces uncertainty and nonconvexity into the H_{\infty} performance, posing a significant challenge for solving nonlinear problems. This article introduces an additional cost function in the augmented Hamilton–Jacobi–Isaacs (HJI) equation of zero-sum games to manage the model error and external disturbance simultaneously in nonlinear robust performance problems. To satisfy the Hamilton–Jacobi inequality of nonlinear robust control theory under all considered model errors, the relationship between the additional cost function and the model uncertainty is revealed. A critic online learning algorithm, which applies Lyapunov stabilizing terms and historical states to reinforce training stability and achieve persistent learning, is proposed to approximate the solution of the augmented HJI equation. By constructing a joint Lyapunov candidate involving the critic weight and the system state, both stability and convergence are proved by the second method of Lyapunov. Theoretical results also show that introducing historical data reduces the ultimate bounds of the system state and the critic error. Three numerical examples demonstrate the effectiveness of the proposed method.
Published in: IEEE Transactions on Neural Networks and Learning Systems (Volume: 36, Issue: 1, January 2025)
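As a rough illustration only, and not the paper's actual algorithm, the sketch below shows the general flavor of an online critic tuned against an HJI-type residual while replaying a buffer of historical states, so that learning persists without an external probing signal. The system model, critic features, gains, and the normalized-gradient update rule are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: critic V(x) ~= w^T phi(x), tuned by normalized
# gradient descent on the residual of an HJI-type equation, with a small
# buffer of historical states replayed alongside the current state.

def phi(x):
    """Assumed polynomial critic features."""
    x1, x2 = x
    return np.array([x1 * x1, x1 * x2, x2 * x2])

def dphi(x):
    """Jacobian of phi with respect to the state."""
    x1, x2 = x
    return np.array([[2 * x1, 0.0],
                     [x2,     x1],
                     [0.0,    2 * x2]])

def residual(w, x, f, g, k, Q, R, gamma):
    """Hamiltonian-type residual for an assumed system dx = f(x) + g(x)u + k(x)d."""
    grad_V = dphi(x).T @ w
    u = -0.5 * np.linalg.solve(R, g(x).T @ grad_V)     # approximate control policy
    d = (0.5 / gamma ** 2) * k(x).T @ grad_V           # approximate worst-case disturbance
    sigma = dphi(x) @ (f(x) + g(x) @ u + k(x) @ d)     # regressor along the closed-loop drift
    e = x @ Q @ x + u @ (R @ u) - gamma ** 2 * (d @ d) + sigma @ w
    return e, sigma

def critic_step(w, x, history, lr, sys):
    """One critic update using the current state plus replayed historical states."""
    dw = np.zeros_like(w)
    for s in [x] + list(history):
        e, sigma = residual(w, s, **sys)
        dw -= lr * e * sigma / (1.0 + sigma @ sigma) ** 2   # normalized gradient step
    return w + dw

if __name__ == "__main__":
    # Illustrative dynamics with separate control and disturbance channels.
    sys = dict(
        f=lambda x: np.array([-x[0] + x[1], -0.5 * (x[0] + x[1])]),
        g=lambda x: np.array([[0.0], [1.0]]),
        k=lambda x: np.array([[0.0], [0.5]]),
        Q=np.eye(2), R=np.eye(1), gamma=2.0,
    )
    w = np.zeros(3)
    history = [np.random.uniform(-1, 1, 2) for _ in range(10)]  # stored historical states
    x = np.array([1.0, -1.0])
    for _ in range(200):
        w = critic_step(w, x, history, lr=0.5, sys=sys)
    print("critic weights:", w)
```

Replaying the stored states plays the role attributed to historical data in the abstract: the regressors from past states keep the update informative even when the current trajectory alone would not be persistently exciting.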