
3D Lidar Target Detection Method at the Edge for the Cloud Continuum

Research article, published in the Journal of Grid Computing.

Abstract

In the Internet of Things, machine learning at the edge of the cloud continuum is developing rapidly, providing more convenient services for developers. This paper proposes a lidar target detection method for the cloud continuum based on a scene density-awareness network. The density-awareness network architecture is designed, and a context column feature network is proposed. A BEV density attention feature network is built by cascading the density feature map with a spatial attention mechanism, and is then connected with the BEV column feature network to generate the BEV map. A multi-head detector is designed to regress the object center point, scale, and orientation, and a loss function is used for active supervision. The experiments are conducted on Alibaba Cloud services. On the KITTI validation dataset, 3D and BEV objects are detected and evaluated for three object classes. The results show that most AP values of the proposed density-awareness model are higher than those of other methods, and the detection time is 0.09 s, which meets the accuracy and real-time requirements of vehicle-borne lidar target detection.
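The density-attention step described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: the function names (`pillar_density_map`, `density_attention`), the grid size, and the sigmoid gating are all assumptions standing in for the actual network layers.

```python
import numpy as np

def pillar_density_map(points, grid=(8, 8), extent=4.0):
    """Count lidar points per BEV cell (x-y column) to form a density map.

    Hypothetical stand-in for the paper's density feature map: real
    implementations would use learned features, not raw counts.
    """
    h, w = grid
    density = np.zeros(grid, dtype=np.float32)
    ix = np.clip(((points[:, 0] / extent) * h).astype(int), 0, h - 1)
    iy = np.clip(((points[:, 1] / extent) * w).astype(int), 0, w - 1)
    np.add.at(density, (ix, iy), 1.0)  # accumulate point counts per cell
    return density

def density_attention(bev_features, density):
    """Gate a BEV feature map with a sigmoid attention map from density."""
    norm = density / (density.max() + 1e-6)            # scale counts to [0, 1]
    attn = 1.0 / (1.0 + np.exp(-4.0 * (norm - 0.5)))   # sigmoid spatial gate
    return bev_features * attn[None, :, :]             # broadcast over channels

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 4.0, size=(500, 3))    # synthetic point cloud
features = rng.standard_normal((16, 8, 8))       # fake BEV features (C, H, W)

density = pillar_density_map(points)
out = density_attention(features, density)
print(out.shape)  # (16, 8, 8)
```

The design intuition matches the abstract's description: denser columns (more lidar returns, hence more likely to contain objects) receive attention weights near 1, while sparse background columns are suppressed before the BEV map reaches the detector head.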


Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.


Funding

The authors acknowledge the Natural Science Foundation of Jilin Province of China (Grant: YDZJ202201ZYTS655).

Author information


Contributions

Xuemei Li carried out the design of methods and the writing of the manuscript. Xuelian Liu carried out the design of methods and the guidance of the manuscript writing. Da Xie carried out the software design and model validation. Chong Chen carried out the model training and data management. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xuelian Liu.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, X., Liu, X., Xie, D. et al. 3D Lidar Target Detection Method at the Edge for the Cloud Continuum. J Grid Computing 22, 14 (2024). https://doi.org/10.1007/s10723-023-09736-0
