Abstract
Vehicle trajectory prediction plays a crucial role in the control and safety warning of autonomous vehicles. Existing methods often depend on costly high-definition (HD) maps to generate trajectories that fit their scenarios, or aggregate local point clouds into voxels inefficiently. We therefore propose PillarVTP, an end-to-end vehicle trajectory prediction method based on local point cloud aggregation and receptive field expansion. First, we construct a novel pillar-based object detection network, introducing SPPCSPC, which applies max-pooling layers with multiple kernel sizes on a single feature level, as the neck for extracting multi-scale features, and improving ResNet-18 by adding a depth stage to expand the receptive field at multiple levels. Second, features are upsampled before vehicle positions are predicted, which improves performance, and a shallow convolutional network implements the future feature learning network, which learns future features from previous features to predict vehicle positions in future frames. The predicted vehicle positions are then matched greedily from future frames back to the current frame, and the matched future trajectories are associated with the vehicles detected in the current frame. Finally, the proposed PillarVTP is evaluated on the nuScenes and Argoverse 1 datasets. Experimental results demonstrate that PillarVTP outperforms FutureDet, a recent end-to-end prediction method based on point cloud data, by 3.4%, and surpasses Trajectron++, a traditional multi-stage method, by 13.7%. Furthermore, PillarVTP shows good robustness under various weather conditions.
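The greedy future-to-current matching step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `greedy_match`, the 2D bird's-eye-view centre representation, and the distance gate `max_dist` are all illustrative assumptions; the idea is simply to sort candidate pairs by distance and assign each future detection to the nearest unmatched current detection.

```python
import math

def greedy_match(current, future, max_dist=3.0):
    """Greedily associate predicted future positions with current detections.

    current, future: lists of (x, y) BEV centres (a hypothetical
    simplification of the matching step in the abstract).
    Returns a dict mapping current-frame index -> future-frame index.
    """
    # Enumerate all candidate pairs within the distance gate.
    pairs = []
    for i, c in enumerate(current):
        for j, f in enumerate(future):
            d = math.hypot(c[0] - f[0], c[1] - f[1])
            if d <= max_dist:
                pairs.append((d, i, j))
    pairs.sort()  # smallest distance first => greedy order

    # Greedily consume pairs, skipping any index already matched.
    matched_cur, matched_fut, assignment = set(), set(), {}
    for _, i, j in pairs:
        if i not in matched_cur and j not in matched_fut:
            assignment[i] = j
            matched_cur.add(i)
            matched_fut.add(j)
    return assignment
```

For example, `greedy_match([(0, 0), (10, 0)], [(0.5, 0), (10.2, 0.1)])` pairs each current vehicle with its nearest future detection; any future detection beyond `max_dist` of every current vehicle is left unassociated.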







Data availability
This study utilized the following public autonomous driving datasets: nuScenes and Argoverse 1, which are accessible via the links below: nuScenes: https://www.nuscenes.org/nuscenes#download, Argoverse 1: https://www.argoverse.org/av1.html#download-link.
References
Zhou, Y., Tuzel, O.: Voxelnet: end-to-end learning for point cloud based 3d object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4490–4499 (2018)
Yan, Y., Mao, Y., Li, B.: Second: sparsely embedded convolutional detection. Sensors 18(10), 3337 (2018)
Lang, A.H., Vora, S., Caesar, H., et al.: Pointpillars: fast encoders for object detection from point clouds. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12697–12705 (2019)
Yin, T., Zhou, X., Krahenbuhl, P.: Center-based 3d object detection and tracking. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11784–11793 (2021)
Deo, N., Trivedi, M.M.: Convolutional social pooling for vehicle trajectory prediction. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp. 1468–1476 (2018)
Lin, L., Li, W., Bi, H., et al.: Vehicle trajectory prediction using LSTMs with spatial–temporal attention mechanisms. IEEE Intell. Transp. Syst. Mag. 14(2), 197–208 (2021)
Gupta, A., Johnson, J., Fei-Fei, L., et al.: Social gan: socially acceptable trajectories with generative adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2255–2264 (2018)
Cai, Y., Wang, Z., Wang, H., et al.: Environment-attention network for vehicle trajectory prediction. IEEE Trans. Veh. Technol. 70(11), 11216–11227 (2021)
Zhang, Z., Wang, Y., Liu, X.: Tapnet: Enhancing trajectory prediction with auxiliary past learning task. In: 2021 IEEE intelligent vehicles symposium (IV), pp. 421–426 (2021)
Peri, N., Luiten, J., Li, M., et al.: Forecasting from lidar via future object detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17202–17211 (2022)
Meng, Q., Guo, H., Li, J., et al.: Vehicle trajectory prediction method driven by raw sensing data for intelligent vehicles. IEEE Trans. Intell. Veh. 8(7), 3799–3812 (2023)
Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M.: YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 7464–7475 (2023)
Qi, C.R., Su, H., Mo, K., et al.: Pointnet: deep learning on point sets for 3d classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660 (2017)
Shi, S., Wang, X., Li, H.: Pointrcnn: 3d object proposal generation and detection from point cloud. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 770–779 (2019)
Qi, C.R., Yi, L., Su, H., et al.: Pointnet++: deep hierarchical feature learning on point sets in a metric space. In: Advances in neural information processing systems, pp. 5105–5114 (2017)
Shi, G., Li, R., Ma, C.: Pillarnet: Real-time and high-performance pillar-based 3d object detection. In: European Conference on Computer Vision, pp. 35–52. Springer Nature Switzerland, Cham (2022)
Li, J., Luo, C., Yang, X.: PillarNeXt: rethinking network designs for 3D object detection in LiDAR point clouds. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17567–17576 (2023)
Zhou, S., Tian, Z., Chu, X., et al.: FastPillars: a deployment-friendly pillar-based 3D detector (2023). arXiv:2302.02367. Accessed 9 Apr 2024
Mozaffari, S., Sormoli, M.A., Koufos, K., et al.: Multimodal manoeuvre and trajectory prediction for automated driving on highways using transformer networks. IEEE Robot. Autom. Lett. (2023). https://doi.org/10.1109/LRA.2023.3301720
Schmidt, J., Jordan, J., Gritschneder, F., et al.: Crat-pred: vehicle trajectory prediction with crystal graph convolutional neural networks and multi-head self-attention. In: 2022 international conference on robotics and automation (ICRA). IEEE, pp. 7799–7805 (2022)
Liang, M., Yang, B., Zeng, W., et al.: Pnpnet: end-to-end perception and prediction with tracking in the loop. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11553–11562 (2020)
Wu, P., Chen, S., Metaxas, D.N.: Motionnet: joint perception and motion prediction for autonomous driving based on bird's eye view maps. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11385–11395 (2020)
Fang, Y., Luo, B., Zhao, T., et al.: ST-SIGMA: Spatio-temporal semantics and interaction graph aggregation for multi-agent perception and trajectory forecasting. CAAI Trans. Intell. Technol. 7(4), 744–757 (2022)
Agro, B., Sykora, Q., Casas, S., et al.: Implicit occupancy flow fields for perception and prediction in self-driving. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 1379–1388 (2023)
Zhao, W., Jia, L., Zhai, H., et al.: PointSGLN: a novel point cloud classification network based on sampling grouping and local point normalization. Multimed. Syst. 30(2), 106 (2024)
Liao, Z.H., Zhang, H., Zhao, Y.J., Liu, Y.Z., Yang, J.Y.: A fast point cloud registration method based on spatial relations and features. Meas. Sci. Technol. 35(10), 106303 (2024)
Mao, W., Wang, T., Zhang, D., et al.: PillarNeSt: embracing backbone scaling and pretraining for pillar-based 3d object detection. IEEE Trans. Intell. Veh. (2024). https://doi.org/10.1109/TIV.2024.3386576
He, K., Zhang, X., Ren, S., et al.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015)
Wang, C.Y., Liao, H.Y.M., Wu, Y.H., et al.: CSPNet: a new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 390–391 (2020)
Law, H., Deng, J.: Cornernet: detecting objects as paired keypoints. In: Proceedings of the European conference on computer vision (ECCV), pp. 734–750 (2018)
Caesar, H., Bankiti, V., Lang, A.H., et al.: Nuscenes: a multimodal dataset for autonomous driving. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11621–11631 (2020)
Chang, M.F., Lambert, J., Sangkloy, P., et al.: Argoverse: 3d tracking and forecasting with rich maps. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8748–8757 (2019)
Zhu, B., Jiang, Z., Zhou, X., et al.: Class-balanced grouping and sampling for point cloud 3d object detection (2019). arXiv:1908.09492. Accessed 9 Apr 2024
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization (2017). arXiv:1711.05101. Accessed 9 Apr 2024
Robicquet, A., Sadeghian, A., Alahi, A., et al.: Learning social etiquette: human trajectory understanding in crowded scenes. In: European conference on computer vision, pp. 549–565 (2016)
Salzmann, T., Ivanovic, B., Chakravarty, P., et al.: Trajectron++: dynamically-feasible trajectory forecasting with heterogeneous data. In: European conference on computer vision, pp. 683–700 (2020)
Acknowledgements
This work was supported by the Natural Science Foundation of Hunan Province, China (Grant No. 2024JJ5163), the Humanities and Social Sciences Project of the Ministry of Education of China (Grant No. 24YJAZH237), and the Science and Technology Innovation Program of Hunan Province (Grant No. 2023SK2081).
Author information
Authors and Affiliations
Contributions
Conception: Z.L. Design: J.Y. and Z.L. Data analysis: J.Y. Drafting: J.Y. and Z.L. Critical revision: Y.Z. and Y.L. All the authors approved the final version to publish.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Communicated by Junyu Gao.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Liao, Z., Yang, J., Zhao, Y. et al. PillarVTP: vehicle trajectory prediction method based on local point cloud aggregation and receptive field expansion. Multimedia Systems 30, 316 (2024). https://doi.org/10.1007/s00530-024-01521-7
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s00530-024-01521-7