
Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion

  • Conference paper
  • Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

High-quality scene reconstruction and novel view synthesis based on Gaussian Splatting (3DGS) typically require steady, high-quality photographs, which are often impractical to capture with handheld cameras. We present a method that adapts to camera motion and allows high-quality scene reconstruction from handheld video data suffering from motion blur and rolling shutter distortion. Our approach is based on detailed modelling of the physical image formation process and utilizes velocities estimated using visual-inertial odometry (VIO). Camera poses are considered non-static during the exposure time of a single image frame and are further optimized in the reconstruction process. We formulate a differentiable rendering pipeline that leverages screen space approximation to efficiently incorporate rolling-shutter and motion blur effects into the 3DGS framework. Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods, thereby advancing 3DGS in naturalistic settings.
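The abstract's core idea — treating the camera pose as non-static during a single frame's exposure and readout — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the constant-velocity assumption over the exposure interval, and the sample counts are illustrative. Given a frame's reference pose and the angular and linear velocities estimated by VIO, a pose can be evaluated at any time offset within the frame: motion blur is then approximated by averaging renders at several poses sampled across the exposure, and rolling shutter by assigning each image row its own capture time within the readout interval.

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation matrix for an axis-angle vector w."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def pose_at(R0, p0, omega, v, t):
    """Camera pose at time offset t (seconds) from the frame's reference time.

    Assumes constant angular velocity omega (rad/s) and linear velocity v
    (m/s) over the short exposure/readout interval, as VIO would provide.
    """
    return R0 @ so3_exp(omega * t), p0 + v * t

def row_capture_times(n_rows, readout_time):
    """Rolling shutter: row i is read out at i / (n_rows - 1) * readout_time."""
    return np.linspace(0.0, readout_time, n_rows)

def blur_sample_times(exposure_time, n_samples=5):
    """Time offsets at which to render and average for motion blur."""
    return np.linspace(0.0, exposure_time, n_samples)
```

In a full pipeline each sampled pose would drive one differentiable 3DGS render (the paper's screen-space approximation avoids re-rendering from scratch per sample), and the per-row times would shift each Gaussian's projected position according to the row it lands on.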



Acknowledgements

MT was supported by the Research Council of Finland Flagship programme: Finnish Center for Artificial Intelligence (FCAI). AS acknowledges funding from the Research Council of Finland (grant id 339730). We acknowledge CSC – IT Center for Science, Finland, for computational resources.

Author information

Correspondence to Otto Seiskari.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 69348 KB)

Supplementary material 2 (pdf 1678 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Seiskari, O. et al. (2025). Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15129. Springer, Cham. https://doi.org/10.1007/978-3-031-73209-6_10


  • DOI: https://doi.org/10.1007/978-3-031-73209-6_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73208-9

  • Online ISBN: 978-3-031-73209-6

  • eBook Packages: Computer Science; Computer Science (R0)
