
UrgRF: Radiance Field Reconstruction Guided by Low-Resolution Grids

  • Conference paper
  • First Online:
Advances in Computer Graphics (CGI 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15339)


Abstract

The speed advantage of neural representations has had a profound impact on scene reconstruction. However, rendering typically requires collecting a large number of points along each ray, even though most of these points could be filtered out; storing and processing them consumes significant GPU resources and time. We propose a new framework for scene representation together with a dedicated training strategy. Specifically, we use a set of low-resolution grids to guide the sampling of a grid-based model. We first sample points uniformly along each ray and query their volume density from the low-resolution grid. Then, using our improved hierarchical sampling strategy, we concentrate subsequent samples near points with higher volume density and query their volume density from the high-resolution grid. We optimize the low- and high-resolution grids jointly in the first training stage and optimize only the high-resolution grid in the second. Experiments show that we need to collect only about one-tenth as many points as traditional methods based on explicit grids, saving several times the GPU resources. We also improve training time and rendering speed by around 30%, with the benefits more pronounced at higher grid resolutions.
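
The guided sampling described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the PyTorch framing, the grid resolutions (32^3 low, 256^3 high), the sample counts, and the helpers query_density and sample_pdf are all assumptions, and raw densities stand in for the compositing weights a full hierarchical sampler would use.

    import torch
    import torch.nn.functional as F

    def query_density(grid, pts, aabb_min=-1.0, aabb_max=1.0):
        # Trilinearly interpolate a density grid of shape (1, 1, D, H, W)
        # at world-space points of shape (N, 3).
        coords = 2.0 * (pts - aabb_min) / (aabb_max - aabb_min) - 1.0
        coords = coords.view(1, -1, 1, 1, 3)                      # (1, N, 1, 1, 3)
        sigma = F.grid_sample(grid, coords, align_corners=True)   # (1, 1, N, 1, 1)
        return sigma.view(-1)

    def sample_pdf(bins, weights, n_fine):
        # Inverse-transform sampling: draw n_fine depths concentrated in
        # the coarse intervals whose density weights are largest.
        pdf = weights / (weights.sum() + 1e-8)
        cdf = torch.cumsum(pdf, dim=0)
        u = torch.rand(n_fine)
        idx = torch.searchsorted(cdf, u).clamp(max=bins.numel() - 2)
        # Place each fine sample at a random offset inside its coarse interval.
        return bins[idx] + torch.rand(n_fine) * (bins[idx + 1] - bins[idx])

    # Toy setup: a 32^3 low-resolution grid guides sampling for a 256^3 grid.
    low_res   = torch.rand(1, 1, 32, 32, 32)
    high_res  = torch.rand(1, 1, 256, 256, 256)
    origin    = torch.tensor([0.0, 0.0, -1.0])
    direction = torch.tensor([0.0, 0.0, 1.0])

    # Stage 1: a few uniform coarse samples, queried against the cheap grid.
    t_coarse = torch.linspace(0.0, 2.0, steps=16)
    pts_coarse = origin + t_coarse[:, None] * direction
    sigma_coarse = query_density(low_res, pts_coarse)

    # Stage 2: concentrate fine samples where coarse density is high, and
    # query only those points against the expensive high-resolution grid.
    t_fine = sample_pdf(t_coarse, sigma_coarse[:-1], n_fine=32)
    pts_fine = origin + t_fine[:, None] * direction
    sigma_fine = query_density(high_res, pts_fine)

The point of the design shows up in the query counts: the expensive high-resolution grid is touched only at the fine samples placed by the coarse pass, rather than at every uniformly spaced point along the ray. The sketch covers only the forward sampling path; per the abstract, both grids are optimized jointly in the first training stage before the low-resolution grid is fixed.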



Author information


Corresponding author

Correspondence to Weibing Wan.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, D., Wan, W., Zhao, Y., Zheng, X. (2025). UrgRF: Radiance Field Reconstruction Guided by Low-Resolution Grids. In: Magnenat-Thalmann, N., Kim, J., Sheng, B., Deng, Z., Thalmann, D., Li, P. (eds) Advances in Computer Graphics. CGI 2024. Lecture Notes in Computer Science, vol 15339. Springer, Cham. https://doi.org/10.1007/978-3-031-82021-2_10

  • DOI: https://doi.org/10.1007/978-3-031-82021-2_10

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-82020-5

  • Online ISBN: 978-3-031-82021-2

  • eBook Packages: Computer Science, Computer Science (R0)
