Abstract
This paper presents a brief survey of current achievements in LiDAR SLAM and discusses recent trends and new ideas in this area. The focus is on LiDAR SLAM applied to autonomous vehicles, which still struggle with real-world complexity. We identify the challenges of efficient environment representation, robust estimation over large state spaces, and real-time handling of scene dynamics and diverse semantics. Some of these issues are illustrated by preliminary results from our recent research in LiDAR SLAM.
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Skrzypczyński, P. (2021). LiDAR Localization and Mapping for Autonomous Vehicles: Recent Solutions and Trends. In: Szewczyk, R., Zieliński, C., Kaliczyńska, M. (eds) Automation 2021: Recent Achievements in Automation, Robotics and Measurement Techniques. AUTOMATION 2021. Advances in Intelligent Systems and Computing, vol 1390. Springer, Cham. https://doi.org/10.1007/978-3-030-74893-7_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-74892-0
Online ISBN: 978-3-030-74893-7
eBook Packages: Intelligent Technologies and Robotics (R0)