Abstract:
Manipulating deformable objects, such as ropes (1D), fabrics (2D), and bags (3D), poses a significant challenge in robotics research due to their high degrees of freedom in physical state and their nonlinear dynamics. Compared with one-dimensional deformable objects, multi-dimensional object manipulation is harder because the robot must correctly recognize the characteristics of the object and make accurate action decisions for deformable objects of various dimensions. Some prior methods use neural networks to rearrange deformable objects across all dimensions, but they are inaccurate in predicting the robot's motion because they only consider equivariance for the picked object. To address this problem, we present a novel Transporter Network that is encoded and decoded with equivariance, allowing it to generalize to different picking and placing positions. Additionally, we propose an equivariant goal-conditioned model that enables the robot to manipulate deformable objects into flexible configurations without relying on artificially marked visual anchors for the target position. Finally, experiments conducted in both Deformable-Ravens and the real world demonstrate that our equivariant models are more sample efficient than the traditional Transporter Network. The video is available at https://youtu.be/5_q5ff9c9FU.
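To make the Transporter-style pick-and-place idea concrete, the sketch below illustrates the generic mechanism such networks use to score place poses: features cropped around the pick pixel are rotated through a set of discrete angles and cross-correlated with the scene feature map. This is a minimal illustrative sketch in PyTorch, not the authors' architecture; the function names, shapes, and the use of plain (non-equivariant) tensor ops are assumptions.

```python
# Hypothetical sketch of Transporter-style place scoring (not the paper's code).
# A feature crop around the pick location is rotated through n discrete angles
# and cross-correlated with the scene feature map, yielding a score volume over
# candidate place positions and rotations.
import math
import torch
import torch.nn.functional as F


def rotate_crop(crop: torch.Tensor, angle_deg: float) -> torch.Tensor:
    """Rotate a (C, h, w) feature crop by angle_deg with an affine grid."""
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    rot = torch.tensor([[c, -s, 0.0], [s, c, 0.0]], dtype=crop.dtype)
    grid = F.affine_grid(rot.unsqueeze(0), crop.unsqueeze(0).shape,
                         align_corners=False)
    return F.grid_sample(crop.unsqueeze(0), grid, align_corners=False).squeeze(0)


def place_heatmaps(scene_feats: torch.Tensor, crop_feats: torch.Tensor,
                   n_rotations: int = 36) -> torch.Tensor:
    """Score candidate place poses by rotated cross-correlation.

    scene_feats: (C, H, W) feature map of the workspace.
    crop_feats:  (C, h, w) feature map cropped around the pick pixel.
    Returns a (n_rotations, H, W) score volume over place position and angle
    (output spatial size matches the input for odd crop sizes).
    """
    kernels = torch.stack([rotate_crop(crop_feats, i * 360.0 / n_rotations)
                           for i in range(n_rotations)])        # (n, C, h, w)
    pad_h, pad_w = crop_feats.shape[-2] // 2, crop_feats.shape[-1] // 2
    scores = F.conv2d(scene_feats.unsqueeze(0), kernels,
                      padding=(pad_h, pad_w))                   # (1, n, H, W)
    return scores.squeeze(0)
```

In this sketch the rotation handling is applied only to the picked crop, which is exactly the limitation the abstract points to; the paper's contribution is to build equivariance into both the encoder and the decoder so that the model generalizes over picking and placing positions alike.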
Date of Conference: 01-05 October 2023
Date Added to IEEE Xplore: 13 December 2023