
LiDAR-Event Stereo Fusion with Hallucinations

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Event stereo matching is an emerging technique to estimate depth from neuromorphic cameras; however, events are unlikely to trigger in the absence of motion or in the presence of large, untextured regions, making the correspondence problem extremely challenging. To address this, we propose integrating a stereo event camera with a fixed-frequency active sensor – e.g., a LiDAR – collecting sparse depth measurements, thereby overcoming the aforementioned limitations. Such depth hints are used to hallucinate – i.e., insert fictitious events into – the stacks or the raw input streams, compensating for the lack of information in the absence of brightness changes. Our techniques are general, can be adapted to any structured representation used to stack events, and outperform state-of-the-art fusion methods applied to event-based stereo.
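
The abstract describes injecting fictitious events driven by sparse LiDAR depth. As a rough illustration of that idea only, the Python sketch below converts each depth hint into a disparity and writes an identical fictitious pattern into the left and right event stacks at corresponding pixels, so a stereo matcher can correlate them at the hinted disparity. The function name `hallucinate_stacks`, the (C, H, W) stack layout, and the random per-channel polarity pattern are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

def hallucinate_stacks(left_stack, right_stack, sparse_depth, focal, baseline,
                       magnitude=1.0):
    """Illustrative sketch: inject fictitious events at pixels with LiDAR hints.

    left_stack, right_stack: (C, H, W) stacked event representations (e.g. voxel grids).
    sparse_depth: (H, W) depth map aligned with the left view; 0 where no hint.
    focal, baseline: rectified stereo parameters, used to convert depth to
    disparity via d = f * B / z.
    """
    C, H, W = left_stack.shape
    ys, xs = np.nonzero(sparse_depth > 0)            # pixels with a depth hint
    disp = focal * baseline / sparse_depth[ys, xs]   # depth -> disparity
    xr = np.round(xs - disp).astype(int)             # matching column in right view

    # Keep only hints whose right-view correspondence falls inside the image.
    valid = (xr >= 0) & (xr < W)
    ys, xs, xr = ys[valid], xs[valid], xr[valid]

    # A simple per-pixel "fingerprint": a random polarity per channel, written
    # identically in both views so the two stacks agree at the hinted disparity.
    rng = np.random.default_rng(0)
    pattern = rng.choice([-magnitude, magnitude], size=(C, ys.size))

    left_stack[:, ys, xs] = pattern
    right_stack[:, ys, xr] = pattern
    return left_stack, right_stack
```

The same principle can, in a hedged reading of the abstract, be applied to the raw event streams instead: rather than overwriting stack entries, one would emit timestamped fictitious events at the corresponding left/right pixels before any stacking takes place.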



Acknowledgement

This study was carried out within the MOST - Sustainable Mobility National Research Center and received funding from the European Union Next-GenerationEU - PIANO NAZIONALE DI RIPRESA E RESILIENZA (PNRR) - MISSIONE 4 COMPONENTE 2, INVESTIMENTO 1.4 - D.D. 1033 17/06/2022, CN00000023. This manuscript reflects only the authors' views and opinions; neither the European Union nor the European Commission can be considered responsible for them.

We acknowledge the CINECA award under the ISCRA initiative, for the availability of high-performance computing resources and support.

Author information

Corresponding author

Correspondence to Luca Bartolomei.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 41602 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bartolomei, L., Poggi, M., Conti, A., Mattoccia, S. (2025). LiDAR-Event Stereo Fusion with Hallucinations. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15064. Springer, Cham. https://doi.org/10.1007/978-3-031-72658-3_8


  • DOI: https://doi.org/10.1007/978-3-031-72658-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72657-6

  • Online ISBN: 978-3-031-72658-3

  • eBook Packages: Computer Science, Computer Science (R0)
