SAMFusion: Sensor-Adaptive Multimodal Fusion for 3D Object Detection in Adverse Weather

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Multimodal sensor fusion is an essential capability for autonomous robots, enabling object detection and decision-making in the presence of failing or uncertain inputs. While recent fusion methods perform well in normal environmental conditions, they fail in adverse weather, e.g., heavy fog, snow, or obstruction due to sensor soiling. We introduce a novel multi-sensor fusion approach tailored to adverse weather conditions. In addition to fusing RGB and LiDAR sensors, the modalities employed in recent autonomous driving literature, our sensor fusion stack also learns from NIR gated camera and radar modalities to tackle low light and inclement weather.

We fuse multimodal sensor data through attentive, depth-based blending schemes, with learned refinement on the Bird’s Eye View (BEV) plane to combine image and range features effectively. Detections are predicted by a transformer decoder that weights modalities by distance and visibility. We demonstrate that our method improves the reliability of multimodal sensor fusion in autonomous vehicles under challenging weather conditions, bridging the gap between ideal conditions and real-world edge cases. Our approach improves average precision by \(17.2\,\text{AP}\) over the next-best method for vulnerable pedestrians at long distances in challenging foggy scenes. Our project page is available at https://light.princeton.edu/samfusion/.
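To make the blending idea concrete, below is a minimal, hypothetical PyTorch sketch of distance-conditioned attentive fusion of per-modality BEV feature maps. The module name, tensor shapes, and the simple 1x1-convolution weighting head are illustrative assumptions, not the paper's architecture; SAMFusion's actual blending, BEV refinement, and transformer decoder are more involved.

    # Hypothetical sketch only -- illustrates distance-conditioned, per-cell
    # attentive blending of BEV features; not the authors' code.
    import torch
    import torch.nn as nn

    class AttentiveBEVFusion(nn.Module):
        """Blends per-modality BEV feature maps with learned per-cell weights."""

        def __init__(self, num_modalities: int, channels: int):
            super().__init__()
            # One logit per modality and BEV cell, conditioned on all modality
            # features plus a distance-from-ego map (hence the +1 channel).
            self.weight_head = nn.Conv2d(
                num_modalities * channels + 1, num_modalities, kernel_size=1
            )

        def forward(self, feats, distance):
            # feats: list of M tensors, each (B, C, H, W); distance: (B, 1, H, W).
            stacked = torch.cat(feats, dim=1)                       # (B, M*C, H, W)
            logits = self.weight_head(torch.cat([stacked, distance], dim=1))
            weights = logits.softmax(dim=1)                         # (B, M, H, W)
            # Convex combination per BEV cell: weights can shift toward the
            # modality that stays reliable at that range (e.g., radar in fog).
            return sum(
                w.unsqueeze(1) * f
                for w, f in zip(weights.unbind(dim=1), feats)
            )                                                       # (B, C, H, W)

    # Toy usage with four modalities (e.g., RGB, LiDAR, gated NIR, radar).
    fusion = AttentiveBEVFusion(num_modalities=4, channels=64)
    feats = [torch.randn(2, 64, 128, 128) for _ in range(4)]
    distance = torch.rand(2, 1, 128, 128)  # normalized range from the ego vehicle
    print(fusion(feats, distance).shape)   # torch.Size([2, 64, 128, 128])

Softmax blending keeps the fused map a convex combination of the inputs, so a degraded modality can be smoothly down-weighted per cell rather than gated off entirely.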


Acknowledgments

This work was supported by the AI-SEE project with funding from the FFG, BMBF, and NRC-IRA. Felix Heide was supported by an NSF CAREER Award (2047359), a Packard Foundation Fellowship, a Sloan Research Fellowship, a Sony Young Faculty Award, a Project X Innovation Award, and an Amazon Science Research Award. Furthermore, the authors would like to thank Stefanie Walz, Samuel Brucker, Anush Kumar, Fathemeh Azimi, and David Borts.

Author information

Corresponding author

Correspondence to Edoardo Palladin.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 27,693 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Palladin, E., Dietze, R., Narayanan, P., Bijelic, M., Heide, F. (2025). SAMFusion: Sensor-Adaptive Multimodal Fusion for 3D Object Detection in Adverse Weather. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15119. Springer, Cham. https://doi.org/10.1007/978-3-031-73030-6_27

  • DOI: https://doi.org/10.1007/978-3-031-73030-6_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73029-0

  • Online ISBN: 978-3-031-73030-6

  • eBook Packages: Computer Science, Computer Science (R0)
