
A Perception Method Based on Point Cloud Processing in Autonomous Driving

  • Conference paper
Neural Computing for Advanced Applications (NCAA 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1638)


Abstract

In autonomous driving, vehicles must perceive and understand their surroundings without human input. The sensors most commonly used on driverless cars are RGB-D cameras and LiDAR, so how to process the environmental information these sensors collect, extract the features of interest, and use them to guide the vehicle has become an essential research problem in the field. Compared with 2D images, 3D point clouds provide spatial orientation information that images cannot. Accurately processing and perceiving 3D point clouds, and separating objects such as obstacles, cars, and roads, is therefore crucial to the safety of autonomous driving. This paper adopts a method that preprocesses the point cloud data and enhances the point cloud with image information. The experiments use the KITTI dataset, the most classic and representative dataset in autonomous driving, to verify that the method achieves good perception results. In addition, we evaluate the approach against the original network on the KITTI bird's-eye-view benchmark and find that the improvement over the original network is substantial.
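The image-based point cloud enhancement the abstract describes can be sketched as follows. This is a minimal illustration, in the spirit of point-painting style fusion, of projecting LiDAR points into a segmented camera image and appending the per-pixel class scores to each point; the function name, calibration matrices, and score layout are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def paint_points(points, seg_scores, T_cam_lidar, K):
    """Append per-pixel semantic scores from an image to each LiDAR point.

    points:      (N, 3) LiDAR points in the sensor frame.
    seg_scores:  (H, W, C) per-class scores from an image segmentation net.
    T_cam_lidar: (4, 4) rigid transform from the LiDAR to the camera frame.
    K:           (3, 3) camera intrinsic matrix.
    Returns an (M, 3 + C) array of the points that project inside the
    image, each with its image scores concatenated.
    """
    H, W, C = seg_scores.shape
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0  # keep only points in front of the camera
    cam = cam[in_front]
    # Perspective projection to pixel coordinates.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    kept = points[in_front][valid]
    # "Paint" each surviving point with the scores at its pixel.
    return np.hstack([kept, seg_scores[v[valid], u[valid]]])
```

On KITTI the transform and intrinsics would come from the per-frame calibration files; points behind the camera or outside the image are discarded rather than painted with garbage scores.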



Author information


Corresponding author

Correspondence to Jiangshuai Huang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Huang, Q., Huang, J., Sheng, X., Yue, X. (2022). A Perception Method Based on Point Cloud Processing in Autonomous Driving. In: Zhang, H., et al. Neural Computing for Advanced Applications. NCAA 2022. Communications in Computer and Information Science, vol 1638. Springer, Singapore. https://doi.org/10.1007/978-981-19-6135-9_11


  • DOI: https://doi.org/10.1007/978-981-19-6135-9_11

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-6134-2

  • Online ISBN: 978-981-19-6135-9

  • eBook Packages: Computer Science (R0)
