Abstract
Simultaneous Localization and Mapping (SLAM) incorporating the Manhattan World (MW) assumption has received significant attention in recent years. Previous methods rely on the MW assumption to estimate camera rotation accurately, but they require a suitably structured planar environment, which restricts the applicability of such systems. To overcome these limitations, we propose a novel approach that relaxes the strict requirements of MW-based systems and significantly enhances tracking robustness in low-texture scenes. Our system leverages planar information in the environment to detect the presence of an MW scene. By decoupling rotation from translation, we achieve drift-free rotation estimation whenever an MW scene is detected, while a semi-direct approach combining point and line features estimates the translation; in non-MW scenes, we fall back to full camera-pose estimation. Furthermore, we introduce a more precise loop closure detection strategy that exploits the relative relationship between the Manhattan axes (MA) and the line features in the scene, improving the accuracy of loop closure identification, which is crucial for SLAM systems. Experiments on public benchmarks demonstrate improved pose estimation and loop closure performance compared with state-of-the-art methods.
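The decoupling described above can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's implementation: given unit plane normals observed in the camera frame, it looks for three mutually orthogonal dominant directions (an MW scene) and, if found, returns the rotation aligning them to the world axes by solving Wahba's problem via SVD; otherwise it returns `None`, signalling a fallback to full pose tracking. The function name, tolerance parameter, and greedy selection are illustrative assumptions.

```python
import numpy as np

def mw_rotation(normals, ortho_tol_deg=10.0):
    """Sketch: detect an MW scene from camera-frame plane normals and
    return a drift-free camera-to-world rotation, or None if no
    Manhattan structure is found."""
    N = np.asarray(normals, dtype=float)
    N = N / np.linalg.norm(N, axis=1, keepdims=True)
    # Greedily pick up to three mutually (near-)orthogonal directions.
    tol = np.sin(np.radians(ortho_tol_deg))
    picked = []
    for n in N:
        if all(abs(np.dot(n, p)) < tol for p in picked):
            picked.append(n)
        if len(picked) == 3:
            break
    if len(picked) < 3:
        return None  # not an MW scene: fall back to full pose estimation
    # Wahba's problem: find R minimizing sum_i ||e_i - R c_i||^2, where
    # e_i are the world axes and c_i the picked camera-frame directions.
    # B = sum_i e_i c_i^T stacks the picked directions as rows.
    B = np.stack(picked)
    U, _, Vt = np.linalg.svd(B)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R
```

Keeping this rotation estimate independent of translation is what makes it drift-free in MW scenes: each estimate is anchored to the absolute Manhattan frame rather than chained frame-to-frame.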
This work was supported in part by research grants from the Science and Technology Cooperation and Exchange Special Project of Shanxi Province (No. 202204041101016), the 1331 Engineering Project of Shanxi Province, and the Key Research and Development Project of Shanxi Province (No. 202102020101008).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zheng, Z., Zhang, Q., Wang, H., Li, R. (2024). Semi-Direct SLAM with Manhattan for Indoor Low-Texture Environment. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14427. Springer, Singapore. https://doi.org/10.1007/978-981-99-8435-0_28
Print ISBN: 978-981-99-8434-3
Online ISBN: 978-981-99-8435-0
eBook Packages: Computer Science, Computer Science (R0)