Distractor-Free Novel View Synthesis via Exploiting Memorization Effect in Optimization

  • Conference paper
  • Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have greatly advanced novel view synthesis, enabling photo-realistic rendering. However, these methods rely on the foundational assumption of a static scene (e.g., consistent lighting conditions and persistent object positions), which is often violated in real-world scenarios. In this study, we introduce MemE, an unsupervised plug-and-play module that achieves high-quality novel view synthesis from noisy inputs. MemE leverages an inherent property of parameter optimization, known as the memorization effect, to filter distractors, and can be easily combined with NeRF or 3DGS. Furthermore, MemE is applicable in environments both with and without distractors, significantly enhancing the adaptability of NeRF and 3DGS across diverse input scenarios. Extensive experiments show that our methods (i.e., MemE-NeRF and MemE-3DGS) achieve state-of-the-art performance on both real and synthetic noisy scenes. We will release our code for further research at https://github.com/Yukun66/MemE.
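For context on the abstract's key idea: the memorization effect is the empirical observation, originally studied in label-noise learning, that deep models fit clean, consistent training signal before they fit noise. Applied to view synthesis, static scene content is reconstructed early in optimization while transient distractors remain poorly fitted, so per-pixel photometric error can serve as an unsupervised distractor score. The following is a minimal PyTorch sketch of that intuition; the function name, the keep_ratio parameter, and the loss-trimming rule are illustrative assumptions, not the paper's actual MemE module.

import torch

# Hypothetical sketch (not the released MemE code): because a radiance field
# fits static, view-consistent content first, pixels covering transient
# distractors keep a high photometric error early in training. One simple way
# to exploit this is to trim the highest-loss pixels from each training batch.

def trimmed_photometric_loss(pred_rgb, gt_rgb, keep_ratio=0.8):
    """Mean squared error over the keep_ratio fraction of pixels with the
    smallest per-pixel error; the rest (likely distractors) are masked out."""
    per_pixel = ((pred_rgb - gt_rgb) ** 2).mean(dim=-1)        # (num_rays,) error per pixel
    k = max(1, int(keep_ratio * per_pixel.numel()))
    threshold = torch.kthvalue(per_pixel.flatten(), k).values  # k-th smallest error
    mask = (per_pixel <= threshold).float().detach()           # hard 0/1 mask, no gradient
    return (per_pixel * mask).sum() / mask.sum()

In a NeRF or 3DGS training loop, such a function would simply replace the usual photometric loss on each ray batch (e.g., loss = trimmed_photometric_loss(rendered, target); loss.backward()). The precise filtering rule used by MemE should be taken from the authors' repository linked above.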

Acknowledgements

We thank Yangdi Lu, Guangchi Fang, and Runqing Jiang for their insightful comments and valuable discussions. This work was partially supported by the National Natural Science Foundation of China (62301601), the Guangdong Basic and Applied Basic Research Foundation (2022B1515020103, 2023B1515120087), and the Shenzhen Science and Technology Program (No. RCYX20200714114641140).

Author information

Correspondence to Yulan Guo.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 85182 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, Y. et al. (2025). Distractor-Free Novel View Synthesis via Exploiting Memorization Effect in Optimization. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15112. Springer, Cham. https://doi.org/10.1007/978-3-031-72949-2_27

  • DOI: https://doi.org/10.1007/978-3-031-72949-2_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72948-5

  • Online ISBN: 978-3-031-72949-2

  • eBook Packages: Computer Science, Computer Science (R0)
