
Parameterization-Driven Neural Surface Reconstruction for Object-Oriented Editing in Neural Rendering

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

The advancements in neural rendering have increased the need for techniques that enable intuitive editing of 3D objects represented as neural implicit surfaces. This paper introduces a novel neural algorithm for parameterizing neural implicit surfaces to simple parametric domains like spheres and polycubes. Our method allows users to specify the number of cubes in the parametric domain, learning a configuration that closely resembles the target 3D object’s geometry. It computes bi-directional deformation between the object and the domain using a forward mapping from the object’s zero level set and an inverse deformation for backward mapping. We ensure nearly bijective mapping with a cycle loss and optimize deformation smoothness. The parameterization quality, assessed by angle and area distortions, is guaranteed using a Laplacian regularizer and an optimized learned parametric domain. Our framework integrates with existing neural rendering pipelines, using multi-view images of a single object or multiple objects of similar geometries to reconstruct 3D geometry and compute texture maps automatically, eliminating the need for any prior information. We demonstrate the method’s effectiveness on images of human heads and man-made objects. The source code is available at https://xubaixinxbx.github.io/neuparam.
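The abstract's cycle loss for a nearly bijective mapping can be illustrated with a toy sketch. This is not the paper's implementation: in the method the forward map (object to parametric domain) and inverse map (domain to object) are learned neural deformations, whereas here two hypothetical invertible affine functions stand in for them so the loss itself is concrete.

```python
import numpy as np

# Hypothetical stand-ins for the learned forward deformation f
# (object surface -> parametric domain) and inverse deformation g
# (parametric domain -> object surface). The paper learns these as
# neural networks; simple exact inverses are used here for illustration.
def forward_map(p):
    return 0.5 * p + 0.1

def inverse_map(q):
    return (q - 0.1) / 0.5

def cycle_loss(points):
    """Penalize deviation from the identity under g(f(x)) and f(g(x)),
    which encourages the bi-directional deformation to be nearly bijective."""
    forward_cycle = inverse_map(forward_map(points)) - points
    backward_cycle = forward_map(inverse_map(points)) - points
    return float(np.mean(forward_cycle**2) + np.mean(backward_cycle**2))

# Sample points standing in for the object's zero level set.
pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(1024, 3))
loss = cycle_loss(pts)  # near zero here, since the toy maps are exact inverses
```

In training, the same loss would be evaluated on points sampled from the reconstructed surface and minimized jointly with the rendering and smoothness objectives.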




Acknowledgment

This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). This project is also partially supported by the Ministry of Education, Singapore, under its Academic Research Fund Grants (MOE-T2EP20220-0005 & RT19/22).

Author information

Corresponding author: Ying He.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 26722 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, B. et al. (2025). Parameterization-Driven Neural Surface Reconstruction for Object-Oriented Editing in Neural Rendering. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15099. Springer, Cham. https://doi.org/10.1007/978-3-031-72940-9_26

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-72940-9_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72939-3

  • Online ISBN: 978-3-031-72940-9

  • eBook Packages: Computer Science, Computer Science (R0)
