ABSTRACT
Recent years have witnessed rapid progress in perception algorithms built on LiDAR, a sensor widely adopted in autonomous driving systems. These LiDAR-based solutions are typically data-hungry, requiring large amounts of labeled data for training and evaluation. However, annotating such data is challenging due to the sparsity and irregularity of point clouds and the complex interaction the procedure demands. To tackle this problem, we propose FLAVA, a systematic approach to minimizing human interaction in the annotation process. Specifically, we divide the annotation pipeline into four stages: find, localize, adjust, and verify. In addition, we carefully design the UI for each stage of the annotation procedure, so that annotators can focus on the aspects most important to that stage. Furthermore, our system greatly reduces the amount of interaction by introducing a lightweight yet effective mechanism for propagating annotation results. Experimental results show that our method remarkably accelerates the procedure and improves annotation quality.
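The abstract does not detail how the propagation mechanism works, but the idea of reusing an annotation from earlier frames to seed the next one can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a constant-velocity motion model, and the `Box3D` type and `propagate` function are hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass, replace

@dataclass
class Box3D:
    """A 3D bounding-box annotation: center, size, and heading (yaw)."""
    x: float; y: float; z: float
    length: float; width: float; height: float
    yaw: float

def propagate(prev: Box3D, curr: Box3D, n_frames: int = 1) -> Box3D:
    """Extrapolate a box into a future frame under a constant-velocity
    assumption. `prev` and `curr` are the same object's annotations in two
    consecutive frames; the result is only an initial guess, which the
    annotator then adjusts and verifies instead of drawing a box from scratch.
    """
    dx, dy, dz = curr.x - prev.x, curr.y - prev.y, curr.z - prev.z
    dyaw = curr.yaw - prev.yaw
    return replace(curr,
                   x=curr.x + n_frames * dx,
                   y=curr.y + n_frames * dy,
                   z=curr.z + n_frames * dz,
                   yaw=curr.yaw + n_frames * dyaw)
```

For example, a car annotated at x = 0.0 in frame t-1 and x = 1.0 in frame t would be seeded at x = 2.0 in frame t+1; size fields are carried over unchanged, since rigid objects do not change extent between frames.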