
BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream

  • Conference paper
  • Part of the proceedings: Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Implicit scene representation has attracted considerable attention in recent computer vision and graphics research. Most prior methods focus on reconstructing a 3D scene representation from a set of images. In this work, we demonstrate the possibility of recovering a neural radiance field (NeRF) from a single blurry image and its corresponding event stream. To eliminate motion blur, we introduce the event stream to regularize the learning process of NeRF by accumulating it into an image. We model the camera motion with a cubic B-Spline in SE(3) space. Both the blurry image and the brightness change within a time interval can then be synthesized from the NeRF, given the 6-DoF poses interpolated from the cubic B-Spline. Our method jointly learns both the implicit scene representation and the camera motion by minimizing the differences between the synthesized data and the real measurements, without any prior knowledge of camera poses. We evaluate the proposed method on both synthetic and real datasets. The experimental results demonstrate that we are able to render view-consistent latent sharp images from the learned NeRF and bring a blurry image alive in high quality.
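To make the two synthesis-and-compare steps concrete, the sketch below mimics them in NumPy with a toy stand-in for the NeRF renderer: a blurry image is synthesized by averaging virtual sharp frames rendered at poses sampled along the trajectory, and the brightness change over the interval is synthesized as a log-intensity difference and compared against events accumulated into an image. Every name here (`render`, `poses_from_spline`, the contrast threshold `C`, the fake measurements) is a placeholder assumption for illustration, not the authors' implementation.

```python
# Minimal sketch of the synthesize-and-compare idea in the abstract, using
# NumPy and stand-ins in place of a real NeRF renderer and SE(3) B-Spline.
# All names and constants are illustrative assumptions.
import numpy as np

H, W = 32, 48          # toy image resolution
C = 0.2                # event contrast threshold (assumed constant)
rng = np.random.default_rng(0)

def render(pose_t):
    """Stand-in for volume-rendering the NeRF at the pose for time t in [0, 1].
    Here: a drifting gradient image, just so the pipeline runs end to end."""
    x = np.linspace(0.0, 1.0, W)[None, :] + 0.3 * pose_t
    return np.clip(np.tile(x, (H, 1)), 1e-3, 1.0)

def poses_from_spline(ts):
    """Stand-in for cubic B-Spline interpolation in SE(3); this toy example
    reduces each 6-DoF pose to its scalar time parameter."""
    return ts

# 1) Blurry-image synthesis: average n virtual sharp frames over the exposure.
n = 16
ts = np.linspace(0.0, 1.0, n)
virtual_frames = np.stack([render(t) for t in poses_from_spline(ts)])
synthesized_blur = virtual_frames.mean(axis=0)

# 2) Brightness-change synthesis: log-intensity difference between the
# interval endpoints, compared against events accumulated into an image
# (contrast threshold times signed event counts per pixel).
delta_log = np.log(virtual_frames[-1]) - np.log(virtual_frames[0])
event_counts = rng.integers(-3, 4, size=(H, W))   # fake accumulated polarities
accumulated_events = C * event_counts

# Compare both syntheses against (here, fake) real measurements.
measured_blur = synthesized_blur + 0.01 * rng.standard_normal((H, W))
photometric_loss = np.mean((synthesized_blur - measured_blur) ** 2)
event_loss = np.mean((delta_log - accumulated_events) ** 2)
total_loss = photometric_loss + event_loss
print(f"photometric={photometric_loss:.4f}  event={event_loss:.4f}")
```

In the actual method, `poses_from_spline` would evaluate a cubic B-Spline in SE(3) and `render` would volume-render the NeRF at each interpolated 6-DoF pose; minimizing the combined loss is what jointly optimizes the scene representation and the spline control points.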

Wenpu Li, Pian Wan, Peng Wang: Equal contribution.



Acknowledgements

This work was supported in part by NSFC under Grant 62202389, in part by a grant from the Westlake University-Muyuan Joint Research Institute, and in part by the Westlake Education Foundation.

Author information

Correspondence to Peidong Liu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 21760 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, W., Wan, P., Wang, P., Li, J., Zhou, Y., Liu, P. (2025). BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15089. Springer, Cham. https://doi.org/10.1007/978-3-031-72751-1_24


  • DOI: https://doi.org/10.1007/978-3-031-72751-1_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72750-4

  • Online ISBN: 978-3-031-72751-1

  • eBook Packages: Computer Science, Computer Science (R0)
