UniCal: Unified Neural Sensor Calibration

  • Conference paper
  • Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15094)


Abstract

Self-driving vehicles (SDVs) require accurate calibration of their LiDARs and cameras to fuse sensor data for autonomy. Traditional calibration methods typically leverage fiducials captured in a controlled, structured scene and compute correspondences to optimize over. These approaches are costly, require substantial infrastructure and operations, and are therefore difficult to scale across vehicle fleets. In this work, we propose UniCal, a unified framework for effortlessly calibrating SDVs equipped with multiple LiDARs and cameras. Our approach builds on a differentiable scene representation capable of rendering multi-view, geometrically and photometrically consistent sensor observations. We jointly learn the sensor calibration and the underlying scene representation through differentiable volume rendering, using outdoor sensor data without specific calibration fiducials. This “drive-and-calibrate” approach significantly reduces costs and operational overhead compared to existing calibration systems, enabling efficient calibration of large SDV fleets at scale. To ensure geometric consistency across observations from different sensors, we introduce a novel surface alignment loss that combines feature-based registration with neural rendering. Comprehensive evaluations on multiple datasets show that UniCal matches or outperforms the accuracy of existing calibration approaches while being more efficient, demonstrating its value for scalable calibration. For more information, visit waabi.ai/unical.
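To make the joint optimization concrete, the following is a minimal, heavily simplified sketch of the core idea from the abstract: the sensor-to-vehicle extrinsics are treated as learnable parameters and updated together with a neural scene representation through a differentiable volume-rendering (photometric) loss. This is an illustrative toy in PyTorch, not UniCal's implementation; all names (so3_exp, TinyField, render, step), the single-camera, static-scene setup, and the learning rates are our own simplifying assumptions, and the paper's surface alignment loss and multi-sensor handling are omitted.

```python
import torch
import torch.nn as nn

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector w in R^3 -> 3x3 rotation matrix."""
    theta = w.norm().clamp(min=1e-8)
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K \
        + (1 - torch.cos(theta)) * (K @ K)

class TinyField(nn.Module):
    """Toy radiance field: 3D point -> (density, RGB)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))

    def forward(self, x):
        out = self.net(x)
        return torch.relu(out[..., :1]), torch.sigmoid(out[..., 1:])

def render(field, origins, dirs, n_samples=32, near=0.5, far=20.0):
    """NeRF-style quadrature volume rendering along a batch of rays."""
    t = torch.linspace(near, far, n_samples)                         # [S]
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # [R,S,3]
    sigma, rgb = field(pts)                                          # [R,S,1], [R,S,3]
    alpha = 1 - torch.exp(-sigma.squeeze(-1) * (far - near) / n_samples)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha[:, :-1]], dim=1), dim=1)
    weights = alpha * trans                                          # [R,S]
    return (weights[..., None] * rgb).sum(dim=1)                     # [R,3]

# Learnable extrinsic correction (axis-angle + translation), initialized near
# a coarse guess; a tiny perturbation avoids the rotation singularity at zero.
w = nn.Parameter(1e-3 * torch.randn(3))
tvec = nn.Parameter(torch.zeros(3))
field = TinyField()
opt = torch.optim.Adam([{"params": field.parameters(), "lr": 1e-3},
                        {"params": [w, tvec], "lr": 1e-4}])

def step(origins_sensor, dirs_sensor, target_rgb):
    """One joint update: the photometric loss backpropagates into both the
    scene representation and the calibration parameters."""
    R = so3_exp(w)
    origins = origins_sensor @ R.T + tvec   # sensor frame -> vehicle frame
    dirs = dirs_sensor @ R.T
    pred = render(field, origins, dirs)
    loss = ((pred - target_rgb) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

In the paper's setting, LiDAR rays would additionally contribute a depth term and the surface alignment loss would add feature-based geometric constraints across sensors; the point of the sketch is only that, because rendering is differentiable in the ray poses, calibration error surfaces as rendering error and can be reduced by gradient descent.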

Z. Yang and G. Chen—Equal contribution.

G. Chen—Work done while an intern at Waabi.



Acknowledgements

We thank Yun Chen, Jingkang Wang, Richard Slocum for their helpful discussions. We also appreciate the invaluable assistance and support from the Waabi team. Additionally, we thank the anonymous reviewers for their constructive comments and suggestions to improve this paper.

Author information

Correspondence to Ze Yang.

Electronic supplementary material

Below are links to the electronic supplementary material.

Supplementary material 1 (mp4 39193 KB)

Supplementary material 2 (pdf 27857 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, Z. et al. (2025). UniCal: Unified Neural Sensor Calibration. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15094. Springer, Cham. https://doi.org/10.1007/978-3-031-72764-1_19

  • DOI: https://doi.org/10.1007/978-3-031-72764-1_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72763-4

  • Online ISBN: 978-3-031-72764-1

  • eBook Packages: Computer Science, Computer Science (R0)
