Abstract
3D object detection is a fundamental technique in autonomous driving. However, current LiDAR-based single-stage 3D object detectors pay insufficient attention to encoding the inhomogeneity of LiDAR point clouds and the shape of each object. This paper introduces a novel 3D object detection network, the spatial and part-aware aggregation network (SPANet), which uses a spatial aggregation network to remedy the inhomogeneity of LiDAR point clouds and incorporates a part-aware aggregation network that learns statistical shape priors of objects. SPANet deeply integrates 3D voxel-based features with point-based spatial features to learn more discriminative point cloud representations. Specifically, the spatial aggregation network combines efficient learning and high-quality proposals with the flexible receptive fields of PointNet-based networks. The part-aware aggregation network includes a part-aware attention mechanism that learns statistical shape priors of objects to enhance the semantic embeddings. Experimental results show that the proposed single-stage method outperforms state-of-the-art single-stage methods on the KITTI 3D object detection benchmark, achieving a bird's eye view (BEV) average precision (AP) of 91.59%, a 3D AP of 80.34%, and a heading AP of 95.03% for car detection.
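The abstract does not spell out the part-aware attention mechanism; as a rough illustration only, a minimal channel-attention sketch in the spirit of squeeze-and-excitation gating is shown below. All names, shapes, and the two-layer gating MLP are hypothetical assumptions, not the paper's actual architecture.

```python
import numpy as np

def part_aware_attention(features, w1, b1, w2, b2):
    """Reweight per-point feature channels with a learned shape gate.

    features: (N, C) array of point/voxel features.
    w1, b1, w2, b2: weights of a two-layer gating MLP
    (hypothetical shapes: w1 is (C, H), w2 is (H, C)).
    """
    pooled = features.mean(axis=0)                      # global shape descriptor, (C,)
    hidden = np.maximum(0.0, pooled @ w1 + b1)          # ReLU bottleneck, (H,)
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # sigmoid gates in (0, 1), (C,)
    return features * gates                             # broadcast channel reweighting

# Toy usage with random weights (for illustration only).
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
out = part_aware_attention(feats,
                           rng.normal(size=(8, 4)), np.zeros(4),
                           rng.normal(size=(4, 8)), np.zeros(8))
```

A learned gate of this kind lets the network suppress channels that are inconsistent with the statistical shape prior of a class while amplifying the informative ones, which matches the abstract's description of enhancing semantic embeddings.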
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Ye, Y. (2021). SPANet: Spatial and Part-Aware Aggregation Network for 3D Object Detection. In: Pham, D.N., Theeramunkong, T., Governatori, G., Liu, F. (eds) PRICAI 2021: Trends in Artificial Intelligence. PRICAI 2021. Lecture Notes in Computer Science, vol 13033. Springer, Cham. https://doi.org/10.1007/978-3-030-89370-5_23
DOI: https://doi.org/10.1007/978-3-030-89370-5_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-89369-9
Online ISBN: 978-3-030-89370-5