ABSTRACT
When traditional visual odometry based on image feature points is used for indoor positioning and mapping, highly reflective or translucent regions on object surfaces produce holes in the depth map captured by the depth camera, which lowers positioning accuracy. Moreover, in scenes containing textureless regions, too few effective feature points can be extracted, causing loss of tracking. To address these problems, an improved indoor visual odometry algorithm based on point and line features is proposed. First, the method repairs the depth map from the depth camera using a Curvature Driven Diffusion (CDD) model with edge-first filling. The repair results are compared against Joint Bilateral Filtering, the Fast Marching Method, and standard Curvature Driven Diffusion; experiments show that the proposed method fills holes more completely, preserves clear edges, and retains more complete object structure information. Second, in the feature extraction stage, multi-feature fusion is adopted: point and line features are extracted simultaneously to complete indoor positioning and mapping. The proposed algorithm replaces the visual odometry module of ORB-SLAM2, and results on six TUM dataset sequences with differing structure and texture characteristics are compared with the original ORB-SLAM2. With the proposed algorithm, the absolute trajectory error of ORB-SLAM2 is reduced by 8.99% on average. The experimental results show that these two improvements effectively increase the positioning accuracy and robustness of the system.
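The paper's exact edge-first CDD formulation is not reproduced here. As a minimal, simplified stand-in, the sketch below fills hole pixels by iterative neighbor diffusion (harmonic inpainting); the full CDD model additionally weights the diffusion by the curvature of the depth level sets so that edges are not smoothed over. The function name `diffuse_fill` and the toy 5x5 depth map are illustrative assumptions, not from the paper.

```python
import numpy as np

def diffuse_fill(depth, hole_mask, iters=200):
    """Fill hole pixels by iterative 4-neighbor diffusion.

    Simplified harmonic inpainting; the CDD model used in the paper
    further modulates this diffusion by level-set curvature so that
    object edges stay sharp.
    """
    d = depth.astype(np.float64).copy()
    d[hole_mask] = 0.0
    for _ in range(iters):
        # 4-neighbor averages computed via edge-padded shifts
        p = np.pad(d, 1, mode="edge")
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        # only hole pixels are updated; valid depth is left untouched
        d[hole_mask] = avg[hole_mask]
    return d

# toy example: a flat 5x5 depth map with one invalid (zero-depth) pixel
depth = np.full((5, 5), 2.0)
depth[2, 2] = 0.0                       # hole, as a Kinect-style dropout
mask = depth == 0.0
filled = diffuse_fill(depth, mask)
print(round(float(filled[2, 2]), 3))    # → 2.0
```

On real depth maps the hole mask would come from zero or saturated depth readings, and many more iterations (or a multigrid scheme) are needed for large holes.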
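The evaluation metric above, absolute trajectory error (ATE), compares estimated and ground-truth camera positions after aligning the two trajectories. As a hedged sketch of the idea only: the snippet below performs a translation-only centroid alignment and reports the RMSE, whereas the standard TUM evaluation also solves for the optimal rotation (Horn's method). The function name `ate_rmse` and the toy trajectories are illustrative, not from the paper.

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) after translation-only alignment.

    gt, est: (N, 3) arrays of ground-truth and estimated positions at
    matched timestamps. The full TUM benchmark evaluation additionally
    estimates a rotation (Horn's closed-form solution) before computing
    the per-pose residuals.
    """
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)  # align centroids
    err = np.linalg.norm(gt - est_aligned, axis=1)          # per-pose error
    return float(np.sqrt(np.mean(err ** 2)))

gt  = np.array([[0.0, 0, 0], [1.0, 0.0, 0], [2.0, 0, 0]])
est = np.array([[0.1, 0, 0], [1.1, 0.1, 0], [2.1, 0, 0]])
print(round(ate_rmse(gt, est), 4))      # → 0.0471
```

An "8.99% average reduction" as reported in the abstract means this RMSE value, averaged over the six TUM sequences, is 8.99% lower for the improved system than for the original ORB-SLAM2.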