Abstract:
We focus on generating consistent reconstructions of indoor spaces from a freely moving handheld RGB-D sensor, with the aim of creating virtual models that can be used for measuring and remodeling. We propose a novel 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames by keypoint extraction and feature matching on both the RGB and depth image planes. Furthermore, we feed the estimated pose to the highly accurate KinectFusion algorithm, which uses a fast ICP (Iterative Closest Point) to fine-tune the frame-to-frame relative pose and fuse the depth data into a global implicit surface. We evaluate our method on the publicly available RGB-D SLAM benchmark dataset by Sturm et al. The experimental results show that our proposed reconstruction method, based solely on visual odometry and KinectFusion, outperforms the state-of-the-art RGB-D SLAM system in accuracy. Our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any post-processing steps.
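To make the odometry step concrete, below is a minimal sketch of frame-to-frame RGB-D pose estimation by keypoint matching, in the spirit of the abstract. It is not the authors' implementation: the paper also matches features on the depth image plane and refines the pose with KinectFusion's ICP, whereas this sketch only matches ORB features on the RGB images and solves a RANSAC PnP problem using depth from the first frame. The intrinsics fx, fy, cx, cy and the depth scale are assumed values following the TUM RGB-D benchmark convention.

import cv2
import numpy as np

# Assumed Kinect-style pinhole intrinsics (hypothetical values).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
DEPTH_SCALE = 1.0 / 5000.0  # TUM benchmark: 5000 depth units per metre (assumption)

def relative_pose(rgb_a, depth_a, rgb_b):
    """Estimate the 6-DoF pose of frame B relative to frame A from ORB matches."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(rgb_a, None)
    kp_b, des_b = orb.detectAndCompute(rgb_b, None)

    # Brute-force Hamming matching with cross-check for binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp_a[m.queryIdx].pt
        z = depth_a[int(round(v)), int(round(u))] * DEPTH_SCALE
        if z <= 0:  # skip keypoints with missing depth
            continue
        # Back-project the matched keypoint in frame A to a 3-D point.
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp_b[m.trainIdx].pt)

    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], np.float32)
    # RANSAC PnP rejects outlier matches and yields rotation + translation.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.array(pts3d, np.float32), np.array(pts2d, np.float32), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

In a pipeline like the one described, the returned pose would seed an ICP refinement (as KinectFusion does) before the depth map is fused into the global implicit surface.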
Date of Conference: 14-18 September 2014
Date Added to IEEE Xplore: 06 November 2014