Abstract:
LiDAR sensors are almost indispensable for autonomous robots to perceive the surrounding environment. However, the transmission of large-scale LiDAR point clouds is highly bandwidth-intensive and can easily cause transmission problems, especially over unstable communication networks. Meanwhile, existing LiDAR data compression is mainly based on rate-distortion optimization, which ignores both the semantic information of ordered point clouds and the task requirements of autonomous robots. To address these challenges, this article presents a task-driven Scene-Aware LiDAR Point Clouds Coding (SA-LPCC) framework for autonomous vehicles. Specifically, a semantic segmentation model is developed based on multidimensional information, in which both 2-D texture and 3-D topology information are fully utilized to segment movable objects. Furthermore, a prediction-based deep network is explored to remove spatial–temporal redundancy. Experimental results on the benchmark SemanticKITTI dataset validate that SA-LPCC achieves state-of-the-art performance in terms of reconstruction quality and storage cost for downstream tasks. Because SA-LPCC jointly considers the scene-aware characteristics of movable objects and removes spatial–temporal redundancy through an end-to-end learning mechanism, we believe it will help advance related applications from algorithm optimization to industrial products.
Published in: IEEE Transactions on Industrial Informatics (Volume 19, Issue 8, August 2023)
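
The abstract gives no implementation details, so purely as orientation, below is a minimal Python sketch of the kind of pipeline it describes: project each LiDAR sweep to a range image (the 2-D texture view of an ordered point cloud), mask out pixels labeled as movable objects, and code the temporal residual between consecutive static frames. Every name and parameter here (range_project, MOVABLE_CLASSES, the 64×2048 resolution, and the plain frame delta standing in for the learned prediction network) is an assumption made for illustration, not the authors' actual method.

```python
import numpy as np

# Hypothetical label IDs for movable classes in a SemanticKITTI-style
# annotation scheme (illustrative only; real ID mappings differ).
MOVABLE_CLASSES = {1, 2, 3}  # e.g., car, person, cyclist

def range_project(points, h=64, w=2048, fov_up=3.0, fov_down=-25.0):
    """Spherical projection of an (N, 3) LiDAR scan to an (h, w) range image.

    This is the standard projection used by range-image-based segmenters;
    it exposes the 2-D texture structure of an ordered LiDAR sweep.
    """
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    depth = np.linalg.norm(points, axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(np.clip(points[:, 2] / np.maximum(depth, 1e-8), -1.0, 1.0))

    u = 0.5 * (yaw / np.pi + 1.0) * w            # horizontal pixel index
    v = (1.0 - (pitch - fov_down_r) / fov) * h   # vertical pixel index
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = depth
    return image

def mask_movable(range_image, labels_image):
    """Zero out pixels predicted as movable objects before residual coding."""
    static = range_image.copy()
    static[np.isin(labels_image, list(MOVABLE_CLASSES))] = 0.0
    return static

def temporal_residual(curr_static, prev_static):
    """Residual between consecutive static range images; in SA-LPCC this
    prediction step would be a learned network, not a plain frame delta."""
    return curr_static - prev_static
```

In this sketch, the residual of the static background is what would be entropy-coded and transmitted, while movable objects would be handled by the segmentation branch; how SA-LPCC actually couples the two stages end-to-end is described in the paper itself.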