Abstract
Autonomous driving demands precise multi-sensor fusion positioning on resource-limited embedded systems. LiDAR-centered sensor fusion is a mainstream navigation approach because of its insensitivity to illumination and viewpoint changes. However, such systems struggle to process large-scale sequential LiDAR data with the limited resources available on board, which makes LiDAR-centralized sensor fusion impractical. As a result, most mainstream positioning methods rely on hand-crafted features such as planes and edges to mitigate this limitation, establishing a new cornerstone in LiDAR-inertial sensor fusion. However, although such ultra-lightweight feature extraction satisfies the real-time constraint of LiDAR-centered sensor fusion, it is severely vulnerable to high-speed rotational or translational perturbation. In this paper, we propose a sparse-tensor-based LiDAR-inertial fusion method for autonomous driving embedded systems. By leveraging the power of sparse tensors, global geometric features are extracted so that the defect of point cloud sparsity is alleviated. An inertial sensor is deployed to eliminate the time-consuming coarse-level point-wise inlier matching step. We conduct experiments on both representative benchmark datasets and realistic scenes. The evaluation results demonstrate the robustness and accuracy of the proposed solution compared with classical methods.
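As background for the sparse-tensor representation named in the abstract, the following is a minimal illustrative sketch (not the paper's implementation) of quantizing a raw LiDAR point cloud into unique sparse voxel coordinates — the input format consumed by sparse convolution libraries such as MinkowskiEngine. The function name and voxel size are illustrative assumptions.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float):
    """Quantize an (N, 3) point cloud into unique sparse voxel coordinates.

    Returns the (M, 3) integer coordinates of occupied voxels and, for each
    voxel, the index of one representative input point. Illustrative only.
    """
    coords = np.floor(points / voxel_size).astype(np.int32)
    # Keep one representative point per occupied voxel.
    _, first_idx = np.unique(coords, axis=0, return_index=True)
    first_idx.sort()
    return coords[first_idx], first_idx

# Example: three points, two of which fall in the same 0.5 m voxel.
pts = np.array([[0.1, 0.1, 0.1],
                [0.2, 0.2, 0.2],
                [1.0, 1.0, 1.0]])
vox, idx = voxelize(pts, voxel_size=0.5)
print(len(vox))  # 2 occupied voxels
```

Only the occupied voxels are stored, so memory and compute scale with scene occupancy rather than with the full dense grid — the property that makes sparse convolutions tractable on embedded hardware.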
Index Terms
- Robust Embedded Autonomous Driving Positioning System Fusing LiDAR and Inertial Sensors