
PillarVTP: vehicle trajectory prediction method based on local point cloud aggregation and receptive field expansion

  • Regular Paper
  • Published in Multimedia Systems

Abstract

Vehicle trajectory prediction plays a crucial role in the control and safety warning of autonomous vehicles. Existing methods often depend on costly high-definition (HD) maps to generate trajectories that fit their scenarios, or aggregate local point clouds into voxels inefficiently. We therefore propose PillarVTP, an end-to-end vehicle trajectory prediction method based on local point cloud aggregation and receptive field expansion. First, we construct a novel pillar-based object detection network: SPPCSPC, which applies max pooling layers with multiple kernel sizes to a single feature level, serves as the neck for extracting multi-scale features, and ResNet-18 is improved by adding a depth stage to expand the receptive field at multiple levels. Next, features are upsampled before vehicle positions are predicted, which improves performance, and a shallow convolutional network implements the future feature learning network, which learns future features from previous features to predict vehicle positions in future frames. The predicted positions are then matched greedily from future frames back to the current frame, and the matched future trajectories are associated with the vehicles detected in the current frame. Finally, PillarVTP is evaluated on the nuScenes and Argoverse 1 datasets. Experimental results show that PillarVTP outperforms FutureDet, a recent end-to-end prediction method based on point cloud data, by 3.4%, and surpasses Trajectron++, a traditional multi-stage method, by 13.7%. Furthermore, PillarVTP shows good robustness under various weather conditions.
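The SPPCSPC neck described above pools a single feature level with max pooling at several kernel sizes and concatenates the results to capture multi-scale context. Below is a minimal 1-D sketch in plain Python of that multi-kernel pooling idea; the kernel sizes (5, 9, 13) follow a common SPP convention and are an illustrative assumption, not values taken from the paper.

```python
def max_pool_1d(x, k):
    """Same-padding max pooling: each output is the max over a
    window of size k centred on that position (k must be odd)."""
    r = k // 2
    return [max(x[max(0, i - r):i + r + 1]) for i in range(len(x))]

def sppcspc_like(x, kernel_sizes=(5, 9, 13)):
    """Concatenate the input with max-pooled copies at several
    kernel sizes, mimicking multi-scale aggregation on a single
    feature level (a sketch of the SPP idea, not the full block)."""
    branches = [x] + [max_pool_1d(x, k) for k in kernel_sizes]
    return [v for branch in branches for v in branch]

feat = [0.1, 0.9, 0.2, 0.4, 0.8, 0.3, 0.7, 0.5]
out = sppcspc_like(feat)
print(len(out))  # 4 branches x 8 positions = 32 values
```

In the actual network these would be 2-D pooling layers over feature maps with a channel-wise concatenation, but the principle of reusing one feature level at several effective receptive fields is the same.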
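The greedy association step — pairing predicted future-frame positions with the nearest vehicles detected in the current frame — can be sketched as follows. This is a simplified bird's-eye-view version; the distance threshold `max_dist` is an illustrative assumption, not a parameter from the paper.

```python
import math

def greedy_match(future_pos, current_pos, max_dist=3.0):
    """Greedily pair each future-frame position with the nearest
    unmatched current-frame detection. Returns a dict mapping
    future index -> current index; futures farther than max_dist
    from every remaining detection are left unmatched."""
    pairs = [(math.dist(f, c), i, j)
             for i, f in enumerate(future_pos)
             for j, c in enumerate(current_pos)]
    pairs.sort()  # closest pairs are considered first
    matched, used_f, used_c = {}, set(), set()
    for d, i, j in pairs:
        if d > max_dist:
            break  # remaining pairs are all farther away
        if i not in used_f and j not in used_c:
            matched[i] = j
            used_f.add(i)
            used_c.add(j)
    return matched

future = [(1.0, 1.0), (5.0, 5.0), (40.0, 0.0)]
current = [(0.5, 1.0), (5.5, 5.2)]
print(greedy_match(future, current))  # {0: 0, 1: 1}; (40.0, 0.0) unmatched
```

A Hungarian (optimal) assignment could replace the greedy loop, but greedy matching keeps the association step cheap, which matters for an end-to-end pipeline.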


Data availability

This study utilized the following public autonomous driving datasets: nuScenes and Argoverse 1, which are accessible via the links below: nuScenes: https://www.nuscenes.org/nuscenes#download, Argoverse 1: https://www.argoverse.org/av1.html#download-link.

References

  1. Zhou, Y., Tuzel, O.: Voxelnet: end-to-end learning for point cloud based 3d object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4490–4499 (2018)

  2. Yan, Y., Mao, Y., Li, B.: Second: sparsely embedded convolutional detection. Sensors 18(10), 3337 (2018)

  3. Lang, A.H., Vora, S., Caesar, H., et al.: Pointpillars: fast encoders for object detection from point clouds. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12697–12705 (2019)

  4. Yin, T., Zhou, X., Krahenbuhl, P.: Center-based 3d object detection and tracking. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11784–11793 (2021)

  5. Deo, N., Trivedi, M.M.: Convolutional social pooling for vehicle trajectory prediction. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 1468–1476 (2018)

  6. Lin, L., Li, W., Bi, H., et al.: Vehicle trajectory prediction using LSTMs with spatial–temporal attention mechanisms. IEEE Intell. Transp. Syst. Mag. 14(2), 197–208 (2021)

  7. Gupta, A., Johnson, J., Fei-Fei, L., et al.: Social gan: socially acceptable trajectories with generative adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2255–2264 (2018)

  8. Cai, Y., Wang, Z., Wang, H., et al.: Environment-attention network for vehicle trajectory prediction. IEEE Trans. Veh. Technol. 70(11), 11216–11227 (2021)

  9. Zhang, Z., Wang, Y., Liu, X.: Tapnet: enhancing trajectory prediction with auxiliary past learning task. In: 2021 IEEE intelligent vehicles symposium (IV), pp. 421–426 (2021)

  10. Peri, N., Luiten, J., Li, M., et al.: Forecasting from lidar via future object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17202–17211 (2022)

  11. Meng, Q., Guo, H., Li, J., et al.: Vehicle trajectory prediction method driven by raw sensing data for intelligent vehicles. IEEE Trans. Intell. Veh. 8(7), 3799–3812 (2023)

  12. Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M.: YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 7464–7475 (2023)

  13. Qi, C.R., Su, H., Mo, K., et al.: Pointnet: deep learning on point sets for 3d classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660 (2017)

  14. Shi, S., Wang, X., Li, H.: Pointrcnn: 3d object proposal generation and detection from point cloud. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 770–779 (2019)

  15. Qi, C.R., Yi, L., Su, H., et al.: Pointnet++: deep hierarchical feature learning on point sets in a metric space. In: Advances in neural information processing systems, pp. 5105–5114 (2017)

  16. Shi, G., Li, R., Ma, C.: Pillarnet: real-time and high-performance pillar-based 3d object detection. In: European conference on computer vision, pp. 35–52. Springer Nature Switzerland, Cham (2022)

  17. Li, J., Luo, C., Yang, X.: PillarNeXt: rethinking network designs for 3D object detection in LiDAR point clouds. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17567–17576 (2023)

  18. Zhou, S., Tian, Z., Chu, X., et al.: FastPillars: a deployment-friendly pillar-based 3D detector (2023). arXiv:2302.02367. Accessed 9 Apr 2024

  19. Mozaffari, S., Sormoli, M.A., Koufos, K., et al.: Multimodal manoeuvre and trajectory prediction for automated driving on highways using transformer networks. IEEE Robot. Autom. Lett. (2023). https://doi.org/10.1109/LRA.2023.3301720

  20. Schmidt, J., Jordan, J., Gritschneder, F., et al.: Crat-pred: vehicle trajectory prediction with crystal graph convolutional neural networks and multi-head self-attention. In: 2022 international conference on robotics and automation (ICRA). IEEE, pp. 7799–7805 (2022)

  21. Liang, M., Yang, B., Zeng, W., et al.: Pnpnet: end-to-end perception and prediction with tracking in the loop. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11553–11562 (2020)

  22. Wu, P., Chen, S., Metaxas, D.N.: Motionnet: joint perception and motion prediction for autonomous driving based on bird's eye view maps. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11385–11395 (2020)

  23. Fang, Y., Luo, B., Zhao, T., et al.: ST-SIGMA: spatio-temporal semantics and interaction graph aggregation for multi-agent perception and trajectory forecasting. CAAI Trans. Intell. Technol. 7(4), 744–757 (2022)

  24. Agro, B., Sykora, Q., Casas, S., et al.: Implicit occupancy flow fields for perception and prediction in self-driving. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1379–1388 (2023)

  25. Zhao, W., Jia, L., Zhai, H., et al.: PointSGLN: a novel point cloud classification network based on sampling grouping and local point normalization. Multimed. Syst. 30(2), 106 (2024)

  26. Liao, Z.H., Zhang, H., Zhao, Y.J., Liu, Y.Z., Yang, J.Y.: A fast point cloud registration method based on spatial relations and features. Meas. Sci. Technol. 35(10), 106303 (2024)

  27. Mao, W., Wang, T., Zhang, D., et al.: Pillarnest: embracing backbone scaling and pretraining for pillar-based 3d object detection. IEEE Trans. Intell. Veh. (2024). https://doi.org/10.1109/TIV.2024.3386576

  28. He, K., Zhang, X., Ren, S., et al.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015)

  29. Wang, C.Y., Liao, H.Y.M, Wu, Y.H., et al.: CSPNet: a new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 390–391 (2020)

  30. Law, H., Deng, J.: Cornernet: detecting objects as paired keypoints. In: Proceedings of the European conference on computer vision (ECCV), pp. 734–750 (2018)

  31. Caesar, H., Bankiti, V., Lang, A.H., et al.: Nuscenes: a multimodal dataset for autonomous driving. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11621–11631 (2020)

  32. Chang, M.F., Lambert, J., Sangkloy, P., et al.: Argoverse: 3d tracking and forecasting with rich maps. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8748–8757 (2019)

  33. Zhu, B., Jiang, Z., Zhou, X., et al.: Class-balanced grouping and sampling for point cloud 3d object detection (2019) arXiv:1908.09492. Accessed 9 Apr 2024

  34. Loshchilov, I., Hutter, F.: Decoupled weight decay regularization (2017) arXiv:1711.05101. Accessed 9 Apr 2024

  35. Robicquet, A., Sadeghian, A., Alahi, A., et al.: Learning social etiquette: human trajectory understanding in crowded scenes. In: European conference on computer vision, pp. 549–565 (2016)

  36. Salzmann, T., Ivanovic, B., Chakravarty, P., et al.: Trajectron++: dynamically-feasible trajectory forecasting with heterogeneous data. In: European conference on computer vision, pp. 683–700 (2020)

Acknowledgements

This work was supported by the Natural Science Foundation of Hunan Province, China (Grant No. 2024JJ5163), the Humanities and Social Sciences Project of the Ministry of Education of China (Grant No. 24YJAZH237), and the Science and Technology Innovation Program of Hunan Province (Grant No. 2023SK2081).

Author information

Contributions

Conception: Z.L. Design: J.Y. and Z.L. Data analysis: J.Y. Drafting: J.Y. and Z.L. Critical revision: Y.Z. and Y.L. All authors approved the final version for publication.

Corresponding author

Correspondence to Zhuhua Liao.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Communicated by Junyu Gao.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Liao, Z., Yang, J., Zhao, Y. et al. PillarVTP: vehicle trajectory prediction method based on local point cloud aggregation and receptive field expansion. Multimedia Systems 30, 316 (2024). https://doi.org/10.1007/s00530-024-01521-7
