Abstract
Simultaneous Localization and Mapping (SLAM) has been widely studied in recent years for autonomous vehicles. SLAM constructs a map of an unknown environment while simultaneously tracking the vehicle's location within it. A major challenge, paramount in the design of SLAM systems, lies in the efficient use of onboard sensors to perceive the environment. The most widely applied algorithms are camera-based SLAM and LiDAR-based SLAM. Recent research focuses on the fusion of camera-based and LiDAR-based frameworks, which shows promising results. In this paper, we present a study of commonly used sensors and the fundamental theories behind SLAM algorithms. The study then presents the hardware architectures used to process these algorithms and, where available, the performance obtained. Second, we highlight state-of-the-art methodologies in each modality and in the multi-modal framework. A brief comparison is followed by an outline of future challenges. Additionally, we provide insights into possible fusion approaches that can increase the robustness and accuracy of modern SLAM algorithms, thereby enabling the hardware-software co-design of embedded systems that accounts for algorithmic complexity, embedded architectures, and real-time constraints.
Code Availability
Not applicable
Funding
This work was supported by the French Ministry of Higher Education, Research and Innovation.
Author information
Contributions
All authors contributed to the study conception and methodology. Abdelhafid El Ouardi and Sergio Rodriguez had the idea for the article and supervised the process. Sergio Rodriguez was in charge of providing useful insights into the algorithmic aspects of the discussed SLAM methods. Abdelhafid El Ouardi was in charge of providing useful insights into the hardware aspects of the discussed SLAM systems. Mohammed Chghaf was in charge of synthesizing recent mono-modal and multi-modal SLAM strategies and prepared the first draft of the manuscript. He was in charge of the investigation, literature search, and data analysis. Sergio Rodriguez and Abdelhafid El Ouardi commented on previous versions of the manuscript and critically revised the work. All authors read and approved the final manuscript.
Ethics declarations
Ethics approval and consent to participate
Not applicable
Consent for Publication
Not applicable
Conflicts of interest/Competing interests
The authors declare that they have no known conflicts or competing interests that could have appeared to influence the work reported in this paper.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Chghaf, M., Rodriguez, S. & Ouardi, A.E. Camera, LiDAR and Multi-modal SLAM Systems for Autonomous Ground Vehicles: a Survey. J Intell Robot Syst 105, 2 (2022). https://doi.org/10.1007/s10846-022-01582-8