DOI: 10.1145/3234664.3234666

An Improved Fast Visual Odometry Based on Semi-probabilistic Trimmed ICP

Published: 22 June 2018

Abstract

This paper presents an improved method for fast visual odometry estimation from a Kinect-style RGB-D camera. To improve the accuracy and robustness of fast visual odometry, we propose a modified ICP algorithm, called Semi-probabilistic Trimmed ICP, together with a switching strategy between the frame-to-model and frame-to-frame approaches. An overlap parameter is computed to reject outliers before registration. When an occasional large camera motion occurs, we skip the current frame, compute a coarse initial guess with the RANSAC algorithm between the next frame and the previous frame, and finally refine the camera pose with the original ICP. Evaluation on the TUM RGB-D benchmark shows that our visual odometry outperforms the state of the art in certain scenarios, such as small-scale camera motion, and that it copes with occasional large camera motions.
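To make the trimming idea concrete, the sketch below shows a generic trimmed-ICP loop in which only the best `overlap` fraction of nearest-neighbour correspondences is kept at each iteration, with the rigid transform recovered in closed form via SVD. This is an illustrative reimplementation of the general trimmed-ICP technique, not the authors' Semi-probabilistic variant; the function names (`best_rigid_transform`, `trimmed_icp`) and all parameter values are hypothetical.

```python
import numpy as np

def best_rigid_transform(src, dst):
    # Closed-form least-squares rigid alignment (Kabsch/SVD form):
    # find R, t minimizing ||R @ src_i + t - dst_i||^2.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

def trimmed_icp(src, dst, overlap=0.8, iters=30):
    # Trimmed ICP: at each iteration keep only the best `overlap`
    # fraction of nearest-neighbour pairs, rejecting likely outliers
    # before estimating the transform.
    R, t = np.eye(3), np.zeros(3)
    keep = max(1, int(overlap * len(src)))
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbours (fine for a small sketch).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        dist = d2[np.arange(len(src)), nn]
        sel = np.argsort(dist)[:keep]  # trim the worst pairs
        R, t = best_rigid_transform(src[sel], dst[nn[sel]])
    return R, t
```

With `overlap = 1.0` this reduces to plain ICP; lowering it trades completeness for robustness, which is the role the abstract assigns to the computed overlap parameter.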



    Published In

    HPCCT '18: Proceedings of the 2018 2nd High Performance Computing and Cluster Technologies Conference
    June 2018
    126 pages
    ISBN:9781450364850
    DOI:10.1145/3234664

    In-Cooperation

• Shanghai Jiao Tong University
• Xi'an Jiaotong-Liverpool University
    • Chinese Academy of Sciences

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Visual odometry
    2. frame-to-frame
    3. frame-to-model
    4. semi-probabilistic trimmed-ICP

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

