Abstract:
Estimating the bounding box from an object point cloud is an essential task in autonomous driving with LiDAR/laser sensors. We present an efficient bounding box estimation method that can be applied to 2D bird's-eye view (BEV) LiDAR points to generate the bounding box geometry, including length, width and orientation. Given a set of 2D points, the method uses their convex-hull points to compute a small set of candidate directions for the box yaw orientation, and therefore reduces the search space for finding the optimal angle, which in previous solutions is usually a fine partition of an angle range (e.g. [0, \pi/2)). To further improve efficiency, we investigate techniques for controlling the number of convex-hull points, both by applying an approximate collinearity condition and by downsampling the raw point cloud to a smaller size. We provide a comprehensive analysis of both the accuracy and the efficiency of the proposed method on the KITTI 3D object dataset. The results show that, without noticeably sacrificing accuracy, the method, especially when using approximate convex-hull points, can reduce the time needed to estimate the bounding box orientation by almost one order of magnitude.
Published in: 2020 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 19 October 2020 - 13 November 2020
Date Added to IEEE Xplore: 08 January 2021