DOI: 10.1145/3548608.3559309
research-article

Improved algorithm of indoor visual odometer based on point and line feature

Published: 14 October 2022

ABSTRACT

When traditional visual odometry based on image feature points is used for indoor positioning and mapping, highly reflective or light-transmitting regions on object surfaces leave holes in the depth map captured by the depth camera, lowering positioning accuracy. Moreover, in scenes with textureless regions, too few effective feature points can be extracted and tracking is lost. To address these problems, an improved indoor visual odometry algorithm based on point and line features is proposed. First, the method repairs the depth map from the depth camera with a Curvature Driven Diffusion (CDD) model that fills holes edge-first. Compared with Joint Bilateral Filtering, the Fast Marching Method, and standard Curvature Driven Diffusion, experiments show that this repair gives a better filling effect, clearer edges, and more complete object structure information. Second, in the feature-extraction stage, a multi-feature fusion strategy extracts point and line features simultaneously to complete indoor positioning and mapping. The proposed algorithm is adopted in the visual odometry module of ORB-SLAM2 and evaluated on six TUM dataset sequences with different structure and texture characteristics against the original ORB-SLAM2. The absolute trajectory error of ORB-SLAM2 with the proposed algorithm is reduced by 8.99% on average. The experimental results show that these two improvements effectively increase the positioning accuracy and robustness of the system.
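The depth-repair step can be illustrated with a deliberately simplified sketch. The snippet below fills zero-valued holes in a depth map by iterative neighbour diffusion; it is a minimal stand-in for the paper's edge-first Curvature Driven Diffusion model (the function name and parameters are illustrative, not from the paper), showing only the basic idea of propagating valid depth values inward from the hole boundary.

```python
import numpy as np

def fill_depth_holes(depth, iterations=200):
    """Fill zero-valued holes in a depth map by iterative diffusion.

    Simplified stand-in for edge-first Curvature Driven Diffusion:
    every hole pixel is repeatedly replaced by the mean of its valid
    (non-zero) 4-neighbours, so depth propagates inward from the rim.
    Note: np.roll wraps at image borders, which a real implementation
    would handle with padding instead.
    """
    d = depth.astype(np.float64).copy()
    hole = d == 0  # pixels the sensor failed to measure
    for _ in range(iterations):
        # shifted copies of the current estimate (4-neighbourhood)
        neighbours = np.stack([
            np.roll(d, -1, axis=0), np.roll(d, 1, axis=0),
            np.roll(d, -1, axis=1), np.roll(d, 1, axis=1),
        ])
        valid = neighbours > 0
        n = valid.sum(axis=0)
        total = (neighbours * valid).sum(axis=0)
        avg = np.divide(total, n, out=np.zeros_like(total), where=n > 0)
        # update only hole pixels that already have a valid neighbour
        fillable = hole & (n > 0)
        d[fillable] = avg[fillable]
    return d
```

On a synthetic depth map with a constant background and a small rectangular hole, the filled values converge to the surrounding depth; real sensor data would of course require the full edge-aware model described in the paper.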


  • Published in

    ICCIR '22: Proceedings of the 2022 2nd International Conference on Control and Intelligent Robotics
    June 2022
    905 pages
    ISBN: 9781450397179
    DOI: 10.1145/3548608

    Copyright © 2022 ACM

    Publisher: Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • research-article (refereed limited)

    Acceptance Rates

    Overall acceptance rate: 131 of 239 submissions, 55%
