Visual Odometry for Indoor Mobile Robot by Recognizing Local Manhattan Structures

  • Conference paper
  • Published in: Computer Vision – ACCV 2018 (ACCV 2018)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11365)


Abstract

In this paper, we propose a novel 3-DOF visual odometry method to estimate the location and pose (yaw) of a mobile robot navigating indoors. In particular, we target corridor-like scenarios in which the RGB-D camera mounted on the robot captures apparent planar structures such as floors and walls. The novelty of our method is twofold. First, to fully exploit planar structures for odometry estimation, we propose a fast plane segmentation scheme based on efficiently extracted inverse-depth induced histograms. This training-free scheme extracts dominant planar structures using only the depth image of the RGB-D camera. Second, we regard the global indoor scene as a composition of local Manhattan-like structures. At any specific location, we recognize at least one local Manhattan coordinate frame from the detected planar structures. Pose estimation is realized by aligning the camera coordinate frame to one dominant local Manhattan coordinate frame. With the pose known, location estimation is carried out by a combination of a one-point RANSAC method and the ICP algorithm, depending on the number of point matches available. We evaluate our method extensively on real-world data; the experimental results show promising performance in terms of accuracy and robustness.
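The three stages of the method lend themselves to short illustrations. The sketches below are minimal Python/NumPy renderings of the ideas described above, written under stated assumptions; all function names, axis conventions, bin counts, and thresholds are illustrative choices of ours, not code from the paper.

First, plane segmentation from inverse depth. A per-row histogram of inverse depth, in the spirit of the classic v-disparity representation from stereo vision, turns dominant planes into 1-D ridges: a floor traces a slanted ridge across image rows, while a fronto-parallel wall piles into a single column. Assuming a dense depth image in meters:

```python
import numpy as np

def inverse_depth_histogram(depth, n_bins=64, inv_max=0.5):
    """Per-row histogram of inverse depth (a v-disparity-style image)."""
    h, _ = depth.shape
    valid = depth > 0
    inv = np.zeros_like(depth, dtype=float)
    inv[valid] = 1.0 / depth[valid]
    # Quantize inverse depth into n_bins bins up to inv_max (1/m).
    bins = np.clip((inv / inv_max * n_bins).astype(int), 0, n_bins - 1)
    hist = np.zeros((h, n_bins), dtype=np.int32)
    for v in range(h):
        np.add.at(hist[v], bins[v][valid[v]], 1)
    return hist, bins, valid

def dominant_plane_mask(depth, min_votes=50):
    """Mark pixels whose (row, inverse-depth bin) cell gathered many votes.

    A full implementation would fit lines to the histogram ridges
    (e.g. Hough or RANSAC); a plain vote threshold is enough to show
    how the histogram induces a training-free plane segmentation.
    """
    hist, bins, valid = inverse_depth_histogram(depth)
    rows = np.arange(depth.shape[0])[:, None]   # broadcast over columns
    return valid & (hist[rows, bins] >= min_votes)
```

Second, pose from a local Manhattan frame. Given the unit normals of two detected, roughly orthogonal planes, say the floor and one wall, the third axis follows from a cross product, and yaw can be read off the camera-to-frame rotation. The axis ordering and the SVD orthonormalization below are our assumptions:

```python
import numpy as np

def local_manhattan_rotation(n_floor, n_wall):
    """Rotation taking camera coordinates into a local Manhattan frame.

    Measured normals are only approximately orthogonal, so the
    assembled matrix is projected to the nearest proper rotation
    (orthogonal Procrustes via SVD).
    """
    x = n_wall / np.linalg.norm(n_wall)    # wall normal -> x axis
    z = n_floor / np.linalg.norm(n_floor)  # floor normal -> z (gravity) axis
    y = np.cross(z, x)                     # complete a right-handed frame
    R = np.stack([x, y, z], axis=0)        # rows = Manhattan axes in camera coords
    u, _, vt = np.linalg.svd(R)
    if np.linalg.det(u @ vt) < 0:          # enforce det(R) = +1
        u[:, -1] *= -1
    return u @ vt

def yaw(R):
    """Yaw of a 3-DOF ground robot: rotation about the vertical axis."""
    return np.arctan2(R[1, 0], R[0, 0])
```

Third, location with the rotation fixed. Once the Manhattan alignment pins the rotation, a single 3-D point match fully determines the translation, t = p_curr - R p_prev, which is what makes a one-point RANSAC loop possible; when too few matches are available, the method relies on ICP instead. A sketch of the one-point loop (iteration count and inlier tolerance are arbitrary):

```python
import numpy as np

def one_point_ransac_translation(R, pts_prev, pts_curr, iters=100, tol=0.02):
    """Translation between two frames, rotation R known.

    Each hypothesis needs a single match because t = p_curr - R @ p_prev;
    the consensus set selects the best hypothesis, and averaging the
    inlier residual vectors gives the least-squares refit.
    """
    rng = np.random.default_rng(0)
    rotated = pts_prev @ R.T                      # R applied to every point
    best_inliers = np.zeros(len(pts_prev), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_prev))
        t = pts_curr[i] - rotated[i]              # one-point hypothesis
        residuals = np.linalg.norm(pts_curr - rotated - t, axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if not best_inliers.any():                    # degenerate: no consensus
        return None, best_inliers
    t = (pts_curr[best_inliers] - rotated[best_inliers]).mean(axis=0)
    return t, best_inliers
```

For the ICP case, any standard point-to-point ICP (e.g. the classic Besl-McKay formulation) could be substituted; the sketches above only illustrate the structure of the pipeline, not its tuned implementation.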



Acknowledgement

This work was supported in part by the Jiangsu Province Natural Science Foundation under Grant BK20151491 and in part by the Natural Science Foundation of China under Grant 61672287.

Author information

Corresponding author

Correspondence to Hui Kong.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Hou, Z., Ding, Y., Wang, Y., Yang, H., Kong, H. (2019). Visual Odometry for Indoor Mobile Robot by Recognizing Local Manhattan Structures. In: Jawahar, C., Li, H., Mori, G., Schindler, K. (eds) Computer Vision – ACCV 2018. ACCV 2018. Lecture Notes in Computer Science, vol 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_11


  • DOI: https://doi.org/10.1007/978-3-030-20873-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20872-1

  • Online ISBN: 978-3-030-20873-8

  • eBook Packages: Computer Science, Computer Science (R0)
