Abstract:
In this paper, we present a new approach to spatially self-organizing a modular artificial skin in 3D space. We were motivated by the demand to efficiently and automatically acquire the position and orientation of a steadily growing number of artificial skin sensor elements. Here, we combine our 3D surface reconstruction algorithm for individual patches of artificial skin with a common active visual marker approach. Light-emitting diodes, built into every element of our modular artificial skin, enable us to turn each reconstructed patch of skin into an active 6-DoF visual marker. With the help of a calibrated monocular camera, we can then estimate the homogeneous transformations between multiple, at least partially visible skin patches, e.g., when distributed on the body of a robot. Our approach allows us to quickly combine distributed tactile and visual coordinate systems into one homogeneous rigid body representation. We demonstrate the robustness of our approach by calibrating several patches mounted on a robot arm using only a standard webcam.
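The core geometric step described above, relating multiple skin patches to each other through a shared camera frame, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes each patch's 6-DoF pose in the camera frame has already been estimated (e.g., from its LED marker points via a standard PnP solver), and all function and variable names are hypothetical.

```python
import numpy as np

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R
    and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_transform(T_cam_a, T_cam_b):
    """Transform from patch A's frame to patch B's frame, given each
    patch's pose expressed in the common camera frame."""
    return np.linalg.inv(T_cam_a) @ T_cam_b

# Hypothetical example poses: patch A coincides with the camera frame;
# patch B is rotated 90 degrees about z and shifted 0.1 m along x.
Rz = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])
T_a = homogeneous(np.eye(3), np.zeros(3))
T_b = homogeneous(Rz, np.array([0.1, 0.0, 0.0]))

# Rigid-body transform between the two patches, independent of the camera.
T_ab = relative_transform(T_a, T_b)
```

Chaining such pairwise transforms over all at-least-partially-visible patches is what lets the distributed tactile coordinate systems collapse into one rigid-body representation.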
Date of Conference: 14-18 September 2014
Date Added to IEEE Xplore: 06 November 2014