Abstract:
Transferring skills to robots through demonstrations has been studied extensively for decades. However, most of this work focuses on individual or low-level task learning; theories and applications for learning complex sequential tasks remain under-investigated. This paper therefore presents a unified top-down framework for complex task learning. Specifically, we identify two critical objectives. First, a segmentation algorithm is needed that can decompose unstructured demonstrations into movement primitives (MPs) with minimal prior knowledge. Second, a representation model must be chosen that can jointly extract task constraints from the discovered MPs. To achieve the first goal, a change-point detection algorithm based on Bayesian inference is used, which can segment unstructured demonstrations online. We then propose to model MPs with dynamical systems approximated by Gaussian mixture models (GMMs), a flexible and powerful movement representation. Finally, the whole framework is evaluated on an open-and-place task on a real robot. Experiments show that the segmentation accuracy reaches 95.6% and that the task can be replayed successfully in new contexts.
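For intuition about the segmentation step, the sketch below implements online Bayesian change-point detection in the style of Adams and MacKay on a one-dimensional signal (e.g., one end-effector velocity component). The constant hazard rate and the Normal observation model with known noise scale are illustrative assumptions, not necessarily the paper's exact formulation, and the function name bocpd_segment is hypothetical.

import numpy as np
from scipy import stats

def bocpd_segment(signal, hazard=1/100, mu0=0.0, tau0=1.0, sigma=0.1):
    """Online Bayesian change-point detection (Adams & MacKay style).

    Returns the MAP run length at each time step; sharp drops in the
    run length mark candidate boundaries between movement primitives.
    """
    T = len(signal)
    # R[r, t] = posterior probability of run length r at time t
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    # per-run-length sufficient statistics of a Normal-Normal model
    mean = np.array([mu0])             # posterior means of the segment mean
    prec = np.array([1.0 / tau0**2])   # posterior precisions
    map_run = np.zeros(T, dtype=int)
    for t, x in enumerate(signal):
        # predictive density of x under each run-length hypothesis
        pred = stats.norm.pdf(x, mean, np.sqrt(1.0 / prec + sigma**2))
        # growth step: no change point, run length increases by one
        R[1:t+2, t+1] = R[:t+1, t] * pred * (1 - hazard)
        # change-point step: run length resets to zero
        R[0, t+1] = np.sum(R[:t+1, t] * pred * hazard)
        R[:, t+1] /= np.sum(R[:, t+1])
        # conjugate update, prepending the fresh-segment hypothesis
        new_prec = prec + 1.0 / sigma**2
        new_mean = (mean * prec + x / sigma**2) / new_prec
        mean = np.concatenate(([mu0], new_mean))
        prec = np.concatenate(([1.0 / tau0**2], new_prec))
        map_run[t] = np.argmax(R[:t+2, t+1])
    return map_run

# Demo on a synthetic signal with one regime change at t = 80.
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0.0, 0.1, 80), rng.normal(0.6, 0.1, 80)])
runs = bocpd_segment(sig)
print(np.where(np.diff(runs) < -5)[0])  # boundary candidates near t = 80

Because the run-length posterior is updated one sample at a time, this style of detector naturally supports the online segmentation the abstract describes.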
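For the representation step, a minimal sketch: fit a GMM on joint position-velocity samples and use Gaussian mixture regression (GMR) to recover a velocity field x_dot = f(x), then integrate it from a new start state. This omits the stability constraints a full dynamical-system learner (e.g., SEDS-style methods) would impose, and the demonstration data below is synthetic.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_ds_gmm(X, Xdot, n_components=4, seed=0):
    """Fit a GMM on joint (position, velocity) demonstration samples."""
    return GaussianMixture(n_components, covariance_type="full",
                           random_state=seed).fit(np.hstack([X, Xdot]))

def gmr_velocity(gmm, x):
    """Gaussian mixture regression: E[x_dot | x] under the joint GMM."""
    d = x.shape[0]
    # responsibility of each component given the position x
    h = np.array([w * multivariate_normal.pdf(x, m[:d], c[:d, :d])
                  for w, m, c in zip(gmm.weights_, gmm.means_,
                                     gmm.covariances_)])
    h /= h.sum()
    # blend the per-component conditional velocity means
    xdot = np.zeros(d)
    for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
        xdot += h[k] * (m[d:] + c[d:, :d]
                        @ np.linalg.solve(c[:d, :d], x - m[:d]))
    return xdot

# Synthetic demonstration: a noisy 2-D reach toward the origin.
rng = np.random.default_rng(0)
T, dt = 200, 0.01
X = np.linspace([1.0, 1.0], [0.0, 0.0], T) + rng.normal(0, 0.002, (T, 2))
Xdot = np.gradient(X, dt, axis=0)
gmm = fit_ds_gmm(X, Xdot)

# Reproduce the motion from an unseen start state by Euler integration.
x, traj = np.array([0.9, 1.1]), []
for _ in range(T):
    x = x + dt * gmr_velocity(gmm, x)
    traj.append(x.copy())

Encoding the MP as a state-dependent velocity field, rather than a time-indexed trajectory, is what lets the learned motion adapt to new start states and contexts, as the abstract's replay experiment requires.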
Date of Conference: 12-15 December 2018
Date Added to IEEE Xplore: 14 March 2019