Abstract:
Modular robots have the potential for an unmatched ability to perform versatile and robust locomotion. However, designing effective and adaptive locomotion controllers for modular robots is challenging, resulting in a number of model-based methods that typically require various forms of prior knowledge. Deep reinforcement learning (DRL) provides a promising model-free approach for learning locomotion control by trial and error. However, current DRL methods often require extensive interaction data, hindering many possible applications. In this letter, a novel two-level hierarchical locomotion framework for modular quadrupedal robots is proposed. The approach combines a low-level central pattern generator (CPG)-based controller with a high-level neural network to learn a variety of locomotion tasks using DRL. The low-level CPG controller is pre-optimized to generate stable rhythmic walking gaits, while the high-level network is trained to modulate the CPG parameters to achieve task goals based on high-dimensional inputs, including the robot states and user commands. The proposed approach is evaluated on a simulated modular quadruped. With a limited amount of prior knowledge, the proposed method is demonstrated to be capable of learning a variety of locomotion skills such as velocity tracking, path following, and navigating to a target. Simulation results show that the proposed method achieves higher sample efficiency than a model-free DRL method and is substantially more robust than the baseline methods to external disturbances and irregular terrain.
Published in: IEEE Robotics and Automation Letters ( Volume: 6, Issue: 4, October 2021)