Model-Based Actor-Critic Learning for Optimal Tracking Control of Robots With Input Saturation


Abstract:

As robots normally perform repetitive work, reinforcement learning (RL) appears to be a promising tool for designing robot control. However, the learning cycle of a control strategy tends to be long, which limits the application of RL to real robotic systems. This article proposes model-based actor-critic learning for optimal tracking control of robotic systems to address this limitation. A preconstructed critic is defined in the framework of the linear quadratic tracker, and a model-based actor update law is derived from the deterministic policy gradient algorithm to improve learning efficiency. A low-gain parameter is introduced in the critic to avoid input saturation. Compared with neural-network-based RL, the proposed method, with its preconstructed critic and actor, exhibits a rapid, steady, and reliable learning process that is friendly to physical hardware. The performance and effectiveness of the proposed method are validated on a dual-robot test rig. The experimental results show that the proposed learning algorithm can train multiple robots to learn their optimal tracking control laws within a training time of 200 s.
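The combination described in the abstract can be illustrated with a minimal sketch: a known linear model (model-based), a preconstructed quadratic critic in linear-quadratic-tracker style, a deterministic-policy-gradient update of a linear actor gain, and a low-gain parameter realized as a heavy control penalty to keep inputs away from saturation. All matrices, values, and the one-step critic surrogate below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative assumptions (not the paper's actual system or weights):
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])        # known discrete-time model (model-based RL)
B = np.array([[0.0],
              [0.1]])
Qw = np.diag([10.0, 1.0])         # tracking-error weight
eps = 0.1                          # low-gain parameter: small eps -> heavy
R = np.array([[1.0]]) / eps        # control penalty -> low-gain, unsaturated inputs

K = np.zeros((1, 2))               # linear actor: u = -K e (e = tracking error)
alpha = 1e-3                       # actor step size
rng = np.random.default_rng(0)

def dV_du(e, u):
    """Gradient w.r.t. u of a one-step preconstructed quadratic critic
    V(e, u) = e'Qw e + u'R u + (A e + B u)' Qw (A e + B u)."""
    e_next = A @ e + B @ u
    return 2.0 * (R @ u + B.T @ Qw @ e_next)

for _ in range(2000):
    e = rng.standard_normal((2, 1))      # sampled tracking error
    u = -K @ e                           # deterministic policy
    # Deterministic policy gradient chain rule: dV/dK = (dV/du)(du/dK),
    # with du/dK = -e', so gradient descent on V gives:
    K += alpha * dV_du(e, u) @ e.T

print("learned actor gain K:", K)
```

Because the critic is fixed in advance and the model is known, the actor gain contracts toward the critic's minimizer in a few thousand cheap updates, which mirrors the abstract's point that a preconstructed critic makes learning fast and steady enough for physical hardware.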
Published in: IEEE Transactions on Industrial Electronics ( Volume: 68, Issue: 6, June 2021)
Page(s): 5046 - 5056
Date of Publication: 07 May 2020


