LiDAR-Based All-Weather 3D Object Detection via Prompting and Distilling 4D Radar

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

LiDAR-based 3D object detection models show remarkable performance; however, their effectiveness diminishes in adverse weather. 4D radar, on the other hand, is robust to adverse weather but has limited effectiveness when used on its own. While fusing LiDAR and 4D radar seems the most intuitive approach, it comes with its own limitations, including the increased computational load of radar pre-processing, the situational constraint of requiring information from both domains, and the potential loss of each sensor's advantages through joint optimization. In this paper, we propose a novel LiDAR-only 3D object detection framework that works robustly under all weather (normal and adverse) conditions. Specifically, we first propose 4D radar-based 3D prompt learning, which injects auxiliary radar information into a pre-trained LiDAR-based 3D detection model while preserving the precise geometric capabilities of LiDAR. Subsequently, using this model as a teacher, we distill weather-insensitive features and responses into a LiDAR-only student model through our four levels of inter-/intra-modal knowledge distillation. Extensive experiments demonstrate that our prompt learning effectively integrates the strengths of LiDAR and 4D radar, and that our LiDAR-only student model even surpasses the detection performance of the teacher and state-of-the-art models under various weather conditions.
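
The abstract outlines a two-stage design: radar-derived prompts are injected into a frozen, pre-trained LiDAR detector to form a teacher, and a LiDAR-only student is then trained with multi-level knowledge distillation. The PyTorch-style sketch below is only a conceptual illustration of that wiring, not the authors' implementation: the backbone/head interface of the detector, the additive prompt injection, and the reduction of the paper's four inter-/intra-modal distillation levels to a single feature-level and a single response-level term are all assumptions made for brevity.

```python
# Conceptual sketch only (not the authors' released code). Module names, the
# backbone/head interface, additive prompt injection, and the simplified loss
# terms below are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F


class RadarPromptedTeacher(nn.Module):
    """Stage 1: a frozen, pre-trained LiDAR detector augmented with a learnable
    4D-radar prompt branch, so the LiDAR weights are preserved (hypothetical wiring)."""

    def __init__(self, lidar_detector: nn.Module, radar_encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.lidar_detector = lidar_detector            # pre-trained; kept frozen
        for p in self.lidar_detector.parameters():
            p.requires_grad = False
        self.radar_encoder = radar_encoder              # maps 4D radar input to BEV features
        self.prompt_proj = nn.Conv2d(feat_dim, feat_dim, kernel_size=1)  # light prompt adapter

    def forward(self, lidar_points, radar_points):
        # Assumes the detector exposes `backbone` and `head` sub-modules that produce
        # and consume BEV feature maps of shape (B, C, H, W).
        lidar_feat = self.lidar_detector.backbone(lidar_points)
        prompt = self.prompt_proj(self.radar_encoder(radar_points))
        fused_feat = lidar_feat + prompt                # additive prompt injection
        return self.lidar_detector.head(fused_feat), fused_feat


def student_kd_loss(det_loss, s_feat, t_feat, s_logits, t_logits, w_feat=1.0, w_resp=1.0):
    """Stage 2 (simplified): train a LiDAR-only student against the radar-prompted
    teacher. A feature-level L2 term and a response-level KL term stand in for the
    paper's four inter-/intra-modal distillation levels."""
    feat_kd = F.mse_loss(s_feat, t_feat.detach())
    resp_kd = F.kl_div(F.log_softmax(s_logits, dim=1),
                       F.softmax(t_logits.detach(), dim=1),
                       reduction="batchmean")
    return det_loss + w_feat * feat_kd + w_resp * resp_kd
```

In this reading, the teacher is trained first with the LiDAR weights frozen, and the student sees only LiDAR input at inference time, which is what makes the framework LiDAR-only yet weather-robust.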


Acknowledgements

This work was supported by the Technology Innovation Program (1415187329, 20024355, Development of autonomous driving connectivity technology based on sensor-infrastructure cooperation) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2022R1A2B5B03002636).

Author information

Corresponding author

Correspondence to Yujeong Chae.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 3618 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chae, Y., Kim, H., Oh, C., Kim, M., Yoon, KJ. (2025). LiDAR-Based All-Weather 3D Object Detection via Prompting and Distilling 4D Radar. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15114. Springer, Cham. https://doi.org/10.1007/978-3-031-72992-8_21

  • DOI: https://doi.org/10.1007/978-3-031-72992-8_21

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72991-1

  • Online ISBN: 978-3-031-72992-8

  • eBook Packages: Computer Science, Computer Science (R0)
