
URS-NeRF: Unordered Rolling Shutter Bundle Adjustment for Neural Radiance Fields

  • Conference paper in Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15093)


Abstract

In this paper, we propose a novel rolling shutter bundle adjustment method for neural radiance fields (NeRF), which uses unordered rolling shutter (RS) images to obtain an implicit 3D representation. Existing NeRF methods suffer from low-quality images and inaccurate initial camera poses caused by the RS effect in the images. Furthermore, the previous method that incorporates RS images into NeRF requires strictly sequential input, which limits its applicability. In contrast, our method recovers the physical formation of RS images by estimating camera poses and velocities, thereby removing the constraint of sequential input. Moreover, we adopt a coarse-to-fine training strategy in which the RS epipolar constraints of pairwise frames in the scene graph are used to detect camera poses that fall into local minima. Poses detected as outliers are corrected by interpolation from neighboring poses. The experimental results validate the effectiveness of our method over state-of-the-art works and demonstrate that the reconstruction of 3D representations is not constrained by the requirement of sequential video input.
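As a minimal, illustrative sketch (not the paper's code), the standard rolling shutter camera model the abstract alludes to can be written as follows: each image row is exposed at its own time, so its pose is obtained by propagating the frame's start-of-exposure pose with the estimated linear and angular velocity for the row's time offset. All names and the constant-velocity assumption over the readout are illustrative, not taken from the paper.

import numpy as np

def so3_exp(phi: np.ndarray) -> np.ndarray:
    """Rodrigues' formula: rotation vector (3,) -> rotation matrix (3, 3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def row_pose(R0: np.ndarray, t0: np.ndarray,
             omega: np.ndarray, v: np.ndarray,
             row: int, line_delay: float):
    """Pose of image row `row`, given the first-row pose (R0, t0) and estimated
    angular velocity `omega` (rad/s) and linear velocity `v` (m/s).
    Assumes constant velocity over the readout, as is common in RS models."""
    dt = row * line_delay          # time offset of this row within the frame
    R = so3_exp(omega * dt) @ R0   # rotate by the integrated angular velocity
    t = t0 + v * dt                # translate by the integrated linear velocity
    return R, t

# Usage: rays for row r of an RS image would be cast from (R_r, t_r)
# instead of a single per-frame pose before volume rendering.
R0, t0 = np.eye(3), np.zeros(3)
omega, v = np.array([0.0, 0.0, 0.5]), np.array([0.1, 0.0, 0.0])
R_r, t_r = row_pose(R0, t0, omega, v, row=480, line_delay=3e-5)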

The work was done while Bo Xu was a visiting student at the National University of Singapore.



Acknowledgement

This research/project is supported by the National Research Foundation, Singapore, under its NRF-Investigatorship Programme (Award ID. NRF-NRFI09-0008), the Tier 2 grant MOE-T2EP20120-0011 from the Singapore Ministry of Education, and the National Key Research and Development Program of China (2021YFB2501100).

Author information

Corresponding author: Bo Xu.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1837 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, B., Liu, Z., Guo, M., Li, J., Lee, G.H. (2025). URS-NeRF: Unordered Rolling Shutter Bundle Adjustment for Neural Radiance Fields. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15093. Springer, Cham. https://doi.org/10.1007/978-3-031-72761-0_26


  • DOI: https://doi.org/10.1007/978-3-031-72761-0_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72760-3

  • Online ISBN: 978-3-031-72761-0

  • eBook Packages: Computer Science; Computer Science (R0)
