
Residual Reinforcement Learning for Motion Control of a Bionic Exploration Robot—RoboDact


Abstract:

This article investigates a motion control method for a bionic underwater exploration robot (RoboDact). The robot is equipped with a double-joint tail fin and two undulating pectoral fins to obtain good mobility and stability, and this hybrid propulsion mode helps it perform stable and effective underwater exploration and measurement. To coordinate the two kinds of bionic propulsion fins and address the challenge of measurement noise and external disturbances during underwater exploration, a novel residual reinforcement learning method with parameter randomization (PR-RRL) is proposed. The control strategy is a weighted superposition of a feedback controller and a residual controller. The observation feedback controller, based on active disturbance rejection control (ADRC), is adopted to improve stability and convergence, while the residual controller, based on the soft actor–critic (SAC) algorithm, is adopted to improve adaptability to uncertainties and disturbances. Moreover, a parameter randomization training strategy is proposed to adapt to complicated natural scenarios by randomizing part of the robot's dynamics during the training phase. Finally, the feasibility and efficacy of the presented motion control method are validated by comprehensive simulation tests and physical experiments on the RoboDact prototype.
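The abstract describes a control law formed as a weighted superposition of an ADRC-style feedback action and a learned SAC residual action, trained with randomized dynamics parameters. The following minimal Python sketch illustrates only that structure; the `adrc_feedback`, `residual_policy`, `randomize_dynamics`, and weighting names are hypothetical placeholders and are not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-ins (assumptions, not the paper's code): an ADRC-style
# feedback controller, a trained SAC residual policy, and per-episode
# randomization of part of the robot's hydrodynamic parameters.

def adrc_feedback(observation: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Placeholder feedback action; a proportional term stands in for full ADRC."""
    return 0.5 * (target - observation)

def residual_policy(observation: np.ndarray) -> np.ndarray:
    """Placeholder SAC residual action; in practice a trained policy network."""
    return 0.1 * np.tanh(observation)

def randomize_dynamics(rng: np.random.Generator) -> dict:
    """Parameter randomization: perturb part of the dynamics for each episode."""
    return {
        "added_mass_scale": rng.uniform(0.8, 1.2),
        "drag_coeff_scale": rng.uniform(0.8, 1.2),
    }

def combined_action(observation, target, beta: float = 0.7) -> np.ndarray:
    """Weighted superposition of the feedback action and the residual action."""
    u_fb = adrc_feedback(observation, target)
    u_res = residual_policy(observation)
    return beta * u_fb + (1.0 - beta) * u_res

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs, target = np.zeros(3), np.ones(3)
    params = randomize_dynamics(rng)   # would be applied to the simulator model
    print("episode dynamics:", params)
    print("control action:", combined_action(obs, target))
```

In this sketch the residual term only corrects the feedback action rather than replacing it, which reflects the stated motivation: the feedback controller provides stability and convergence, while the learned residual adds adaptability to uncertainties and disturbances.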
Article Sequence Number: 7504313
Date of Publication: 02 June 2023
