
Sur\(^{2}\)f: A Hybrid Representation for High-Quality and Efficient Surface Reconstruction from Multi-view Images

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Multi-view surface reconstruction is an ill-posed inverse problem in 3D vision research. It involves modeling the geometry and appearance with appropriate surface representations. Most existing methods rely either on explicit meshes, using surface rendering of the meshes for reconstruction, or on implicit field functions, using volume rendering of the fields for reconstruction. The two types of representations in fact have their respective merits. In this work, we propose a new hybrid representation, termed Sur\(^2\)f, aiming to better benefit from both representations in a complementary manner. Technically, we learn two parallel streams, an implicit signed distance field and an explicit surrogate surface (Sur\(^2\)f) mesh, and unify volume rendering of the implicit signed distance function (SDF) and surface rendering of the surrogate mesh with a shared neural shader; the unified shading promotes their convergence to the same underlying surface. We synchronize learning of the surrogate mesh by driving its deformation with functions induced from the implicit SDF. In addition, the synchronized surrogate mesh enables surface-guided volume sampling, which greatly improves the sampling efficiency per ray in volume rendering. Thorough experiments show that Sur\(^2\)f outperforms existing reconstruction methods and surface representations, including hybrid ones, in terms of both recovery quality and recovery efficiency.

Z. Huang and Z. Liang contributed equally to this work.
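The surface-guided volume sampling mentioned in the abstract admits a simple reading: rays are first intersected with the surrogate mesh, and volume-rendering samples are then concentrated in a narrow band around each hit instead of being spread over the whole ray. The sketch below illustrates this idea only; it is not the authors' implementation, and the NaN encoding of ray misses, the band half-width `tau`, and all names are hypothetical.

```python
import numpy as np

def surface_guided_samples(t_hit, near, far, n_samples=32, tau=0.05):
    """Depth samples per ray for volume rendering.

    t_hit: (R,) depths where rays hit the surrogate mesh (np.nan = miss).
    Rays that hit are sampled densely in the band [t_hit - tau, t_hit + tau];
    misses fall back to uniform sampling over [near, far].
    """
    num_rays = t_hit.shape[0]
    u = np.linspace(0.0, 1.0, n_samples)                      # (N,)
    hit = ~np.isnan(t_hit)                                    # (R,)

    # Default: uniform samples over the full ray extent.
    depths = near + (far - near) * np.tile(u, (num_rays, 1))  # (R, N)

    # Concentrate samples in a narrow band around the surrogate surface.
    lo = np.clip(t_hit[hit] - tau, near, far)
    hi = np.clip(t_hit[hit] + tau, near, far)
    depths[hit] = lo[:, None] + (hi - lo)[:, None] * u
    return depths
```

With a reasonably accurate surrogate mesh, most samples land where the SDF's rendering weights are non-negligible, which is the intuition behind the reported gain in per-ray sampling efficiency.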


Notes

  1. For any point \(\boldsymbol{x} \in \mathbb{R}^3\), \(|f(\boldsymbol{x})|\) gives its distance to the surface \(\mathcal{S}=\{\boldsymbol{x} \in \mathbb{R}^3 \mid f(\boldsymbol{x})=0\}\); by convention, \(f(\boldsymbol{x}) < 0\) for points inside the surface and \(f(\boldsymbol{x}) > 0\) for those outside. A concrete instance is sketched below.
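To make this sign convention concrete, here is a minimal example evaluating the exact SDF of a sphere, for which the convention is easy to verify; the function `sphere_sdf` and its defaults are illustrative assumptions, not part of the paper.

```python
import numpy as np

def sphere_sdf(x, center=np.zeros(3), radius=1.0):
    # Exact signed distance to a sphere: |f(x)| is the Euclidean
    # distance to the surface S, with f < 0 inside and f > 0 outside.
    return np.linalg.norm(x - center) - radius

print(sphere_sdf(np.array([0.0, 0.0, 0.0])))  # -1.0 : inside the surface
print(sphere_sdf(np.array([1.0, 0.0, 0.0])))  #  0.0 : on the surface S
print(sphere_sdf(np.array([2.0, 0.0, 0.0])))  #  1.0 : outside the surface
```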


Acknowledgments

This work is supported by the Guangdong Provincial Key Laboratory of Human Digital Twin (2022B1212010004).

Author information


Corresponding author

Correspondence to Kui Jia.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 17369 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Huang, Z., Liang, Z., Jia, K. (2025). Sur\(^{2}\)f: A Hybrid Representation for High-Quality and Efficient Surface Reconstruction from Multi-view Images. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15118. Springer, Cham. https://doi.org/10.1007/978-3-031-73027-6_1


  • DOI: https://doi.org/10.1007/978-3-031-73027-6_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73026-9

  • Online ISBN: 978-3-031-73027-6

  • eBook Packages: Computer Science, Computer Science (R0)
