
SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15144)


Abstract

Radar-based perception has gained increasing attention in autonomous driving, yet the inherent sparsity of radar poses challenges. Radar raw data often contains excessive noise, whereas radar point clouds retain only limited information. In this work, we holistically treat the sparse nature of radar data by introducing an adaptive subsampling method together with a tailored network architecture that exploits the sparsity patterns to discover global and local dependencies in the radar signal. Our subsampling module selects the subset of pixels from range-Doppler (RD) spectra that contributes most to the downstream perception tasks. To improve feature extraction on sparse subsampled data, we propose a new way of applying graph neural networks on radar data and design a novel two-branch backbone to capture both global and local neighbor information. An attentive fusion module is applied to combine features from both branches. Experiments on the RADIal dataset show that our SparseRadNet exceeds state-of-the-art (SOTA) performance in object detection and achieves close to SOTA accuracy in freespace segmentation, while using only sparse subsampled input data.
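The pixel selection the abstract describes follows the Gumbel-softmax family of learned, task-adaptive subsampling. A minimal sketch of the underlying Gumbel-top-k idea is shown below; this is an illustration, not the paper's implementation: the function name is hypothetical, the scores are simple log-magnitudes (a learned module would produce them with a small score network), and the hard top-k stands in for the relaxed, differentiable selection used during training.

```python
import numpy as np

def gumbel_topk_subsample(rd_spectrum, k, tau=1.0, seed=None):
    """Pick k pixel coordinates from a range-Doppler (RD) spectrum.

    Sketch only: scores here are log-magnitudes of the spectrum; in a
    learned subsampling module they would come from a score network.
    """
    rng = np.random.default_rng(seed)
    scores = np.log(np.abs(rd_spectrum).ravel() + 1e-8)
    # Gumbel noise turns deterministic top-k into a stochastic selection,
    # which the softmax relaxation makes differentiable during training.
    perturbed = (scores + rng.gumbel(size=scores.shape)) / tau
    flat_idx = np.argpartition(-perturbed, k)[:k]      # top-k perturbed scores
    rows, cols = np.unravel_index(flat_idx, rd_spectrum.shape)
    return np.stack([rows, cols], axis=1)              # (k, 2) pixel coords

# Toy RD spectrum: 64 range bins x 32 Doppler bins.
rd = np.abs(np.random.default_rng(0).normal(size=(64, 32)))
picks = gumbel_topk_subsample(rd, k=100, seed=0)
print(picks.shape)  # (100, 2)
```

The temperature `tau` trades off exploration against greedily keeping the strongest returns; only the selected pixels are passed on to the two-branch backbone.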




Acknowledgements

J.W. and M.R. acknowledge support by the German Federal Ministry of Education and Research within the junior research group project “UnrEAL” (grant no. 01IS22069).

Author information

Correspondence to Jialong Wu.


Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 585 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, J., Meuter, M., Schoeler, M., Rottmann, M. (2025). SparseRadNet: Sparse Perception Neural Network on Subsampled Radar Data. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15144. Springer, Cham. https://doi.org/10.1007/978-3-031-73016-0_4

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-73016-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73015-3

  • Online ISBN: 978-3-031-73016-0

  • eBook Packages: Computer Science, Computer Science (R0)
