LiDAR Localization and Mapping for Autonomous Vehicles: Recent Solutions and Trends

  • Conference paper
  • First Online:
Automation 2021: Recent Achievements in Automation, Robotics and Measurement Techniques (AUTOMATION 2021)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1390)

Abstract

This paper presents a brief survey of current achievements in LiDAR SLAM and discusses recent trends and new ideas in this area. The focus is on LiDAR SLAM applied to autonomous vehicles, an area that still struggles with real-world complexity. We identify the challenges in efficient environment representation, robust estimation over large state spaces, and real-time handling of scene dynamics and diverse semantics. Some of these issues are illustrated by preliminary results of our recent research in LiDAR SLAM.
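To make the registration problem underlying LiDAR odometry and mapping more concrete, below is a minimal sketch (not taken from the paper) of scan-to-scan point-to-plane ICP, a common building block of LiDAR SLAM front-ends. The use of the Open3D library and all parameter values here are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of scan-to-scan point-to-plane ICP for LiDAR odometry.
# Open3D and the parameter values below are illustrative assumptions.
import numpy as np
import open3d as o3d

def register_scans(source_pts, target_pts, voxel=0.5, max_corr_dist=1.0):
    """Estimate the rigid transform aligning one LiDAR scan to the next.

    source_pts, target_pts: (N, 3) numpy arrays of scan points.
    Returns a 4x4 SE(3) matrix (pose increment between scans).
    """
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_pts))

    # Downsample for speed; estimate target normals for the point-to-plane metric.
    src = src.voxel_down_sample(voxel)
    tgt = tgt.voxel_down_sample(voxel)
    tgt.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))

    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```

In a full odometry pipeline, such per-scan increments would be composed over time and refined against a local map or within a pose-graph back-end, which is where the robustness and scalability challenges named in the abstract arise.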



Author information

Corresponding author: Piotr Skrzypczyński



Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Skrzypczyński, P. (2021). LiDAR Localization and Mapping for Autonomous Vehicles: Recent Solutions and Trends. In: Szewczyk, R., Zieliński, C., Kaliczyńska, M. (eds) Automation 2021: Recent Achievements in Automation, Robotics and Measurement Techniques. AUTOMATION 2021. Advances in Intelligent Systems and Computing, vol 1390. Springer, Cham. https://doi.org/10.1007/978-3-030-74893-7_24
