Abstract:
Garment folding is a ubiquitous domestic task that is difficult to automate due to the highly deformable nature of fabrics. In this article, we propose a novel method of learning from demonstrations that enables robots to autonomously manipulate an assistive tool to fold garments. In contrast to traditional methods (which rely on low-level pixel features), our proposed solution uses a dense visual descriptor to encode the demonstration into a high-level hand-object graph (HoG) that efficiently represents the interactions between the robot and the manipulated tool. With that, we leverage a graph neural network to autonomously learn the forward dynamics model from HoGs; then, given only a single demonstration, the imitation policy is optimized with a model predictive controller to accomplish the folding task. To validate the proposed approach, we conducted a detailed experimental study on a robotic platform instrumented with vision sensors and a custom-made end-effector that interacts with the folding board.
Published in: IEEE Transactions on Industrial Informatics (Volume 20, Issue 4, April 2024)
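
To make the pipeline in the abstract concrete, below is a minimal, self-contained Python sketch of the three stages it names: an HoG represented as graph nodes (keypoints from a dense visual descriptor), a learned forward dynamics model standing in for the graph neural network, and a sampling-based model predictive controller that steers toward the demonstrated goal graph. All names, shapes, and the one-layer message-passing update are illustrative assumptions; the paper's actual HoG features, GNN architecture, and MPC cost are not specified here.

    # Hypothetical sketch of HoG + GNN dynamics + MPC (not the authors' code).
    import numpy as np

    rng = np.random.default_rng(0)

    N_NODES, DIM, ACT_DIM = 8, 3, 3          # keypoints, state dim, action dim
    EDGES = [(i, j) for i in range(N_NODES) for j in range(N_NODES) if i != j]

    def forward_dynamics(nodes, action, w_msg, w_upd):
        """One round of message passing: a stand-in for the learned GNN that
        predicts the next hand-object graph given the current one and an action."""
        msgs = np.zeros_like(nodes)
        for i, j in EDGES:                    # aggregate relative-position messages
            msgs[j] += np.tanh((nodes[i] - nodes[j]) @ w_msg)
        inp = np.concatenate([nodes, msgs, np.tile(action, (N_NODES, 1))], axis=1)
        return nodes + inp @ w_upd            # residual update of node positions

    def mpc_plan(nodes, goal, w_msg, w_upd, horizon=5, n_samples=256):
        """Sampling-based MPC: roll out random action sequences through the
        learned dynamics and return the first action of the cheapest sequence."""
        best_cost, best_action = np.inf, None
        for _ in range(n_samples):
            seq = rng.normal(scale=0.1, size=(horizon, ACT_DIM))
            cur = nodes
            for a in seq:
                cur = forward_dynamics(cur, a, w_msg, w_upd)
            cost = np.linalg.norm(cur - goal)  # distance to demonstrated goal HoG
            if cost < best_cost:
                best_cost, best_action = cost, seq[0]
        return best_action, best_cost

    # Random weights for illustration; in the paper these would be trained
    # on HoG sequences extracted from the demonstration.
    w_msg = rng.normal(scale=0.1, size=(DIM, DIM))
    w_upd = rng.normal(scale=0.01, size=(2 * DIM + ACT_DIM, DIM))

    state = rng.normal(size=(N_NODES, DIM))   # current HoG node positions
    goal = state + 0.5                        # goal HoG from the single demonstration

    action, cost = mpc_plan(state, goal, w_msg, w_upd)
    print("first MPC action:", action, "predicted cost:", cost)

In a real deployment, only the first planned action is executed and the HoG is re-extracted from the vision sensors before replanning, which is the standard receding-horizon use of such a controller.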