
SFE-SLAM: an effective LiDAR SLAM based on step-by-step feature extraction

Published in: Applied Intelligence

Abstract

LiDAR Simultaneous Localization and Mapping (SLAM) plays a crucial role in intelligent robotics, with extensive applications in autonomous driving and exploration. Traditional feature-based LiDAR SLAM methods hold a prominent position owing to their robustness and accuracy, yet they still exhibit limitations in point cloud preprocessing and feature extraction. In this paper, we introduce an effective LiDAR SLAM method to address these issues. Specifically, we propose a novel Concentric Cluster Model (CCM) for clustering point clouds, which preserves stable points and eliminates unstable ones. Additionally, we propose a Step-by-step Feature Extraction (SFE) scheme that substantially improves upon traditional feature extraction methods. We evaluate the proposed SLAM method on several sequences of the KITTI odometry, M2DGR, and M2DGR-plus datasets. Experimental results show that our method achieves superior accuracy compared with several state-of-the-art LiDAR SLAM methods while maintaining real-time performance.
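For readers unfamiliar with feature-based LiDAR pipelines, the sketch below illustrates two generic ingredients the abstract alludes to: partitioning a scan into concentric range bins around the sensor, and splitting a scan line into edge and planar feature candidates by local curvature, in the spirit of LOAM-style extractors. It is only a minimal illustration under assumed parameters; the function names, bin counts, and thresholds are hypothetical and do not reproduce the paper's Concentric Cluster Model (CCM) or Step-by-step Feature Extraction (SFE).

import numpy as np

def concentric_range_bins(points, num_rings=8, max_range=80.0):
    # points: (N, 3) array of x, y, z coordinates in the sensor frame.
    # Group points into concentric range bins around the LiDAR origin;
    # this is an illustrative stand-in, not the paper's CCM.
    ranges = np.linalg.norm(points[:, :2], axis=1)
    edges = np.linspace(0.0, max_range, num_rings + 1)
    return [points[(ranges >= lo) & (ranges < hi)]
            for lo, hi in zip(edges[:-1], edges[1:])]

def split_by_curvature(scan_line, k=5, edge_thresh=0.1):
    # scan_line: (M, 3) array of consecutive points from one laser ring.
    # LOAM-style smoothness measure: points with high local curvature become
    # edge candidates, the rest planar candidates. Window size and threshold
    # are assumptions, not values from the paper.
    edge_pts, plane_pts = [], []
    for i in range(k, len(scan_line) - k):
        window = scan_line[i - k:i + k + 1]
        diff = window.sum(axis=0) - (2 * k + 1) * scan_line[i]
        curvature = np.linalg.norm(diff) / (np.linalg.norm(scan_line[i]) + 1e-9)
        (edge_pts if curvature > edge_thresh else plane_pts).append(scan_line[i])
    return np.array(edge_pts), np.array(plane_pts)

# Example usage on synthetic data (stand-in for a real LiDAR scan):
# scan = np.random.rand(1000, 3) * 50.0
# bins = concentric_range_bins(scan)
# edge_pts, plane_pts = split_by_curvature(scan)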





Data Availability and Access

All data analyzed during this study are included in this published article. The code is not publicly available.


Funding

This work was supported by the National Natural Science Foundation of China (Grant Nos. 62273034, 61973029, and 62076026) and the Scientific and Technological Innovation Foundation of Foshan (BK21BF004).

Author information


Contributions

All authors contributed to the research. Yang Ren contributed to the conception of the study, performed the experiments, carried out the data analyses, and wrote the manuscript. Hui Zeng contributed significantly to the analysis and to editing the manuscript. Yiyou Liang helped perform the analysis with constructive discussions.

Corresponding author

Correspondence to Hui Zeng.

Ethics declarations

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Consent to participate

All authors contributed to the manuscript and consented to participate.

Consent for publication

All authors read and approved the final manuscript and agreed to its publication.

Consent for data used

All authors have obtained informed consent for the data used.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ren, Y., Zeng, H. & Liang, Y. SFE-SLAM: an effective LiDAR SLAM based on step-by-step feature extraction. Appl Intell 55, 87 (2025). https://doi.org/10.1007/s10489-024-05963-4
