
R-SDSO: Robust stereo direct sparse odometry

Original article · The Visual Computer

Abstract

This paper presents a stereo direct visual odometry method with improved robustness against drastic brightness variation and aggressive rotation. This is achieved by incorporating three components into the visual odometry framework: a direct sparse odometry built on image preprocessing, a depth initialization module, and an abend (abnormal-end) recovery module. The image preprocessing enhances the raw camera images, which enables more accurate pixel detection, and a new error function defined on the preprocessed images makes the direct visual odometry robust to brightness variation in the environment. In the depth initialization module, Delaunay triangulation and feature point matching are combined for efficient and robust depth estimation. In the abend recovery module, we design a lost/abnormal detection method and a robust state restoration strategy to handle lost or abnormal tracking in harsh conditions. Evaluations on the KITTI and EuRoC datasets and in a light-switch experiment demonstrate that, with the aid of these three modules, the proposed method achieves state-of-the-art performance, even compared with visual-inertial fusion methods.
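
As a rough illustration of the depth initialization described above (a minimal sketch under assumptions, not the authors' implementation): stereo feature matches give metric depths at a sparse set of pixels, and a Delaunay triangulation over those pixels lets nearby candidate pixels borrow an interpolated initial depth. The intrinsics fx and baseline below are assumed KITTI-like values, and init_depths is a hypothetical helper name.

    import numpy as np
    from scipy.spatial import Delaunay

    def init_depths(keypoints, disparities, queries, fx=718.856, baseline=0.537):
        """Interpolate initial depths for candidate pixels from sparse matches.

        keypoints   : (N, 2) pixel coordinates of stereo-matched features
        disparities : (N,)   positive stereo disparities of those features [px]
        queries     : (M, 2) pixel coordinates that need an initial depth
        """
        depths = fx * baseline / disparities    # stereo depth from disparity
        tri = Delaunay(keypoints)               # triangulate the sparse matches
        simplex = tri.find_simplex(queries)     # containing triangle per query
        ok = simplex >= 0                       # outside the hull: no estimate

        # Barycentric interpolation of depth inside each containing triangle.
        T = tri.transform[simplex[ok]]          # (K, 3, 2): inverse map + offset
        b = np.einsum('kij,kj->ki', T[:, :2], queries[ok] - T[:, 2])
        bary = np.column_stack([b, 1.0 - b.sum(axis=1)])
        out = np.full(len(queries), np.nan)
        out[ok] = (bary * depths[tri.simplices[simplex[ok]]]).sum(axis=1)
        return out

    # Example: three matched features, one query pixel inside their triangle.
    kps  = np.array([[100., 100.], [200., 100.], [150., 200.]])
    disp = np.array([10., 10., 5.])
    print(init_depths(kps, disp, np.array([[150., 130.]])))

In a pipeline like the one described, such interpolated values would serve only as initial depths to be refined photometrically; treating them as final estimates would inherit the triangulation's piecewise-planar bias.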



Notes

  1. When considering the sparsity of the Jacobian matrix \({\varvec{J}}\), \(C_{\mathrm{opt}} \simeq O(N_I^2 N_p)\).
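
  As a rough sanity check of this bound (standard sparse Gauss-Newton reasoning, not taken from the paper; we read \(N_I\) as the number of keyframes in the window and \(N_p\) as the number of active points): each point is observed in at most \(N_I\) keyframes, so its residuals couple at most \(N_I\) pose blocks, and accumulating the pose-pose part of \({\varvec{H}} = {\varvec{J}}^\top {\varvec{J}}\) costs \(O(N_I^2)\) per point,

  \[ C_{\mathrm{opt}} \simeq \sum_{p=1}^{N_p} O(N_I^2) = O(N_I^2 N_p). \]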


Funding

This research was supported by the Innovative Research Group Project of the National Natural Science Foundation of China (Grant No. 61871265) and the National Natural Science Foundation of China (Grant No. 61903246).

Author information

Corresponding author

Correspondence to Peilin Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary material 1 (MP4 16267 KB)


About this article


Cite this article

Miao, R., Liu, P., Wen, F. et al. R-SDSO: Robust stereo direct sparse odometry. Vis Comput 38, 2207–2221 (2022). https://doi.org/10.1007/s00371-021-02278-0

