Abstract:
Most methods of estimating camera motion for visual odometry are based on point features. Point-based algorithms often fail in low-texture scenes because it is difficult to extract a large number of features there. Line segments, by contrast, are usually abundant even in such scenes. However, the instability of line-segment endpoints and the loss of line-segment connectivity make line segments difficult to match. This paper proposes a new odometry algorithm based on the line intersection structure feature (LISF). LISFs are formed by adjacent line pairs in 2D images that are coplanar in 3D; each LISF comprises two line segments and their joint point. These LISFs are then described, for matching, with a proposed combined descriptor consisting of a structure feature and a gradient feature. We also implement an RGB-D odometry system that utilizes LISFs, adopting RANSAC-based motion estimation followed by g2o-based motion refinement. In experiments on data sets of weak-texture scenes, results show that the proposed method achieves high continuity and accuracy.
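The core geometric step described in the abstract, pairing two adjacent line segments with the intersection of their supporting lines to form a LISF candidate, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and structure names are hypothetical.

```python
# Sketch (hypothetical names): build a line intersection structure feature
# (LISF) candidate from two 2D line segments by intersecting their
# supporting (infinite) lines to obtain the joint point.

def intersect_lines(a1, a2, b1, b2):
    """Intersection of the infinite lines through segments (a1, a2) and
    (b1, b2). Returns (x, y), or None if the lines are near-parallel."""
    x1, y1 = a1; x2, y2 = a2
    x3, y3 = b1; x4, y4 = b2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None  # parallel lines yield no usable joint point
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def make_lisf(seg_a, seg_b):
    """Bundle two segments and their joint point into a LISF candidate.
    A full system would additionally check 3D coplanarity and attach the
    combined structure/gradient descriptor, which is omitted here."""
    joint = intersect_lines(*seg_a, *seg_b)
    if joint is None:
        return None
    return {"segments": (seg_a, seg_b), "joint": joint}
```

For example, a horizontal segment from (0, 0) to (2, 0) and a vertical segment from (1, -1) to (1, 1) yield a LISF with joint point (1.0, 0.0).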
Published in: 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS)
Date of Conference: 23-25 November 2018
Date Added to IEEE Xplore: 14 April 2019