
Image Detector Based Automatic 3D Data Labeling and Training for Vehicle Detection on Point Cloud


Abstract:

Large amounts of labeled data are crucial for training deep neural networks. However, data labeling remains a time- and labor-consuming task, especially for 3D point clouds. Meanwhile, object recognition on 2D images has achieved great success, in some cases surpassing human performance. In this paper, we propose an effective framework that produces labeled data by using an image detector as a supervisor, and we train the network with a simple trick to eliminate noisy labels. For object-sparse scenes, this method obtains good labeled data, while for object-dense scenes, our training method can detect some of the corrupted labels. This is realized by building a cohesive camera-LiDAR system (named "Licam") and performing target frustum region proposal on the point cloud using the camera detection result. Efficient and effective vehicle detection is achieved with this labeling and training framework. We evaluate the method on the KITTI dataset [7] and on our own road data collected with a micro-electro-mechanical-system (MEMS) LiDAR, demonstrating fast and accurate detection. The results show that our automatic data labeling and training framework is effective and efficient, makes it possible to obtain large-scale labeled data, and is easy to apply to online learning.
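
The core of the frustum region proposal step described in the abstract is to project LiDAR points into the image and keep only those that fall inside the detector's 2D bounding box. The Python sketch below is a minimal illustration of that idea under assumed KITTI-style calibration conventions, not the paper's implementation; the function name frustum_crop, the 4x4 LiDAR-to-camera transform, and the near-depth cutoff are assumptions made for the example.

import numpy as np

def frustum_crop(points_lidar, box_2d, P, Tr_velo_to_cam):
    """Keep LiDAR points whose image projection falls inside a 2D detection box.

    points_lidar   : (N, 3) LiDAR points (x, y, z) in the LiDAR frame.
    box_2d         : (xmin, ymin, xmax, ymax) box from the image detector.
    P              : (3, 4) camera projection matrix (KITTI-style P2).
    Tr_velo_to_cam : (4, 4) homogeneous LiDAR-to-camera transform (assumed).
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (Tr_velo_to_cam @ pts_h.T).T            # (N, 4)

    # Keep only points in front of the camera (assumed 0.1 m near cutoff);
    # this also discards points whose projection below would be invalid.
    in_front = pts_cam[:, 2] > 0.1

    # Project onto the image plane and normalize by depth.
    uv_h = (P @ pts_cam.T).T                          # (N, 3)
    uv = uv_h[:, :2] / uv_h[:, 2:3]

    xmin, ymin, xmax, ymax = box_2d
    in_box = (
        (uv[:, 0] >= xmin) & (uv[:, 0] <= xmax) &
        (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax)
    )
    return points_lidar[in_front & in_box]

The resulting frustum crop would then serve as the region in which a 3D vehicle label is fitted or a point-cloud detector is trained, which is what allows the 2D image detector to act as the labeling supervisor.
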
Date of Conference: 09-12 June 2019
Date Added to IEEE Xplore: 29 August 2019
Conference Location: Paris, France
