Abstract:
Recently, visual-inertial odometry (VIO) has been widely adopted for tracking robot movement in simultaneous localization and mapping (SLAM). In this paper, a low-cost, highly accurate depth-added VIO framework is proposed for robots in indoor environments, taking advantage of an RGB-D camera and a micro-electromechanical system (MEMS) inertial measurement unit (IMU). In this tightly coupled framework, movement estimation is achieved by IMU pre-integration and visual tracking. Meanwhile, an empirical IMU model is developed using Allan variance analysis to guarantee the accuracy of the estimated errors. Images with depth information are deployed during initialization to achieve a fast response. Extensive experiments in indoor scenarios validate the effectiveness and performance of the framework in comparison with other advanced schemes. The results show that the scale drift error is reduced to 2.6% and the response time of the initialization process is improved by about 124% compared to its counterpart.
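The empirical IMU model mentioned above rests on Allan variance analysis, which characterizes sensor noise (e.g. angle random walk and bias instability) from the slope of the variance versus averaging time. The paper does not publish its implementation; the following is a minimal sketch of the standard overlapping Allan variance computation, with the function name, sample rate, and window sizes chosen purely for illustration.

```python
import numpy as np

def allan_variance(samples, fs, m_list):
    """Overlapping Allan variance of a static IMU recording.

    samples : 1-D array of raw sensor readings (e.g. gyro rate in rad/s)
    fs      : sample rate in Hz
    m_list  : averaging-window lengths in samples; the averaging time is tau = m / fs
    Returns (taus, avars) as NumPy arrays.
    """
    dt = 1.0 / fs
    # Integrate the signal once (for a gyro this is the accumulated angle).
    theta = np.cumsum(samples) * dt
    n = theta.size
    taus, avars = [], []
    for m in m_list:
        if 2 * m >= n:
            break  # not enough data for this averaging time
        tau = m * dt
        # Second difference of the integrated signal over all overlapping windows.
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        avars.append(np.sum(d ** 2) / (2.0 * tau ** 2 * (n - 2 * m)))
        taus.append(tau)
    return np.array(taus), np.array(avars)
```

On a log-log plot of Allan deviation against tau, the slope −1/2 region gives the angle/velocity random walk coefficient and the flat region approximates the bias instability; these are the noise parameters typically fed into a VIO estimator's IMU model.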
Published in: 2022 Human-Centered Cognitive Systems (HCCS)
Date of Conference: 17-18 December 2022
Date Added to IEEE Xplore: 05 April 2023