Abstract:
This article focuses on the deterministic value-iteration-based Q-learning (VIQL) algorithm with adjustable convergence speed, together with its application to trajectory tracking for completely unknown nonaffine systems. Notably, the learning rate adjusts the convergence speed, and a new convergence criterion for the VIQL framework is established. The merit of the adjustable VIQL scheme is that it accelerates learning and reduces the number of iterations, thereby lowering the computational burden. To implement the model-free VIQL algorithm, offline data of system states and reference trajectories are collected to provide the reference control, the tracking error, and the tracking control, which drive the parameter updates of the adjustable VIQL algorithm via an off-policy learning scheme. Through these updates, the convergent optimal tracking policy guarantees that any initial state tracks the desired trajectory and completely eliminates the terminal tracking error. Finally, numerical simulations demonstrate the validity of the designed tracking control algorithm.
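As a rough illustration of how a learning rate can reshape value iteration, the sketch below runs a tabular VIQL-style update on a hypothetical discretized tracking-error system. The grids, `dynamics`, `utility`, and `alpha` values are all assumptions introduced here for illustration; the article itself treats continuous, completely unknown nonaffine systems through a data-driven, off-policy implementation, not a tabular one.

```python
import numpy as np

# Minimal tabular sketch of value-iteration-based Q-learning (VIQL) with a
# learning rate alpha that scales the convergence speed. Everything below
# (grids, dynamics, utility) is a hypothetical stand-in, since the paper's
# setting is model-free and continuous.

errors = np.linspace(-2.0, 2.0, 41)    # discretized tracking error e
controls = np.linspace(-1.0, 1.0, 21)  # discretized tracking control u

def dynamics(e, u):
    # hypothetical error dynamics e' = f(e, u); unknown in the paper
    return 0.8 * e + 0.5 * u

def utility(e, u):
    # quadratic stage cost U(e, u) = e^2 + u^2
    return e**2 + u**2

def nearest(grid, x):
    return int(np.argmin(np.abs(grid - x)))

def viql(alpha=1.0, tol=1e-6, max_iters=2000):
    Q = np.zeros((len(errors), len(controls)))
    for k in range(max_iters):
        Q_new = Q.copy()
        for i, e in enumerate(errors):
            for j, u in enumerate(controls):
                e_next = dynamics(e, u)
                target = utility(e, u) + Q[nearest(errors, e_next)].min()
                # alpha = 1 recovers classical value iteration; smaller
                # alpha slows the updates, larger alpha speeds them up,
                # subject to a convergence criterion like the one the
                # article establishes.
                Q_new[i, j] = (1.0 - alpha) * Q[i, j] + alpha * target
        if np.max(np.abs(Q_new - Q)) < tol:
            return Q_new, k + 1
        Q = Q_new
    return Q, max_iters

for a in (0.5, 1.0, 1.2):
    _, iters = viql(alpha=a)
    print(f"alpha={a}: converged in {iters} iterations")
```

Running the loop for several values of `alpha` makes the claimed trade-off visible: a larger admissible learning rate reaches the fixed point in fewer iterations, which is the computational saving the adjustable scheme targets.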
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems (Volume: 54, Issue: 2, February 2024)