
RIP-NeRF: Learning Rotation-Invariant Point-based Neural Radiance Field for Fine-grained Editing and Compositing

Published: 12 June 2023

Abstract

Neural Radiance Fields (NeRF) show dramatic results in synthesizing novel views. However, existing controllable and editable NeRF methods remain incapable of both fine-grained editing and cross-scene compositing, which greatly limits their creative use and potential applications. A severe drawback when the radiance field is edited at a fine-grained level or composited across scenes is that varying the orientation of the corresponding explicit scaffold, such as a point set, mesh, or volume, may degrade rendering quality. In this work, by combining the respective strengths of the implicit NeRF-based representation and the explicit point-based representation, we present a novel Rotation-Invariant Point-based NeRF (RIP-NeRF) for both fine-grained editing and cross-scene compositing of the radiance field. Specifically, we introduce a novel point-based radiance field representation that replaces Cartesian coordinates as the network input. This rotation invariance is achieved by carefully designing a Neural Inverse Distance Weighting Interpolation (NIDWI) module to aggregate neural points, significantly improving rendering quality after fine-grained editing. To achieve cross-scene compositing, we disentangle the rendering module from the neural point-based representation in NeRF. After simply manipulating the corresponding neural points, a cross-scene neural rendering module is applied to achieve controllable cross-scene compositing without retraining. Extensive editing and compositing experiments on room-scale real scenes and on synthetic objects with complex geometry demonstrate the advantages of RIP-NeRF in editing quality and capability.
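The paper's NIDWI module is learned, but the intuition behind distance-based aggregation being rotation-invariant can be seen with classical inverse-distance weighting: weights depend only on query-to-point distances, which are preserved under a rigid rotation applied to both the query and the point set. The sketch below is illustrative only; the function name `nidw_aggregate`, the weighting power, and all other details are assumptions, not the paper's implementation.

```python
import numpy as np

def nidw_aggregate(query, points, features, power=2.0, eps=1e-8):
    """Aggregate per-point feature vectors at a query position with
    inverse-distance weighting. Because the weights depend only on
    Euclidean distances, the result is unchanged when the point set
    and the query are rotated by the same rotation matrix."""
    d = np.linalg.norm(points - query, axis=1)   # (N,) distances to neural points
    w = 1.0 / (d ** power + eps)                 # inverse-distance weights
    w = w / w.sum()                              # normalise to a convex combination
    return w @ features                          # (C,) aggregated feature

# Rotation-invariance check: rotate points and query together.
rng = np.random.default_rng(0)
points = rng.normal(size=(8, 3))
features = rng.normal(size=(8, 4))
query = np.array([0.1, 0.2, 0.3])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

f0 = nidw_aggregate(query, points, features)
f1 = nidw_aggregate(R @ query, points @ R.T, features)
assert np.allclose(f0, f1)  # same feature before and after rotation
```

A Cartesian-coordinate input, by contrast, changes under rotation, which is the degradation the abstract attributes to rotating an explicit scaffold.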

Supplemental Material

Appendix (PDF file)




    Published In

    ICMR '23: Proceedings of the 2023 ACM International Conference on Multimedia Retrieval
    June 2023
    694 pages
    ISBN:9798400701788
    DOI:10.1145/3591106
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. 3D deep learning
    2. neural rendering
    3. point-based representation
    4. scene editing
    5. view synthesis

    Qualifiers

    • Research-article
    • Research
    • Refereed limited


    Conference

    ICMR '23

    Acceptance Rates

    Overall Acceptance Rate 254 of 830 submissions, 31%


    Article Metrics

    • Downloads (last 12 months): 103
    • Downloads (last 6 weeks): 9
    Reflects downloads up to 05 Mar 2025


    Cited By

    • (2025) RISE-Editing: Rotation-invariant neural point fields with interactive segmentation for fine-grained and efficient editing. Neural Networks 187, 107304. DOI: 10.1016/j.neunet.2025.107304. Online publication date: Jul 2025.
    • (2025) SSCCPC-Net: Simultaneously Learning 2D and 3D Features with CLIP for Semantic Scene Completion on Point Cloud. Advances in Computer Graphics, 16-29. DOI: 10.1007/978-3-031-82024-3_2. Online publication date: 25 Feb 2025.
    • (2024) GOI: Find 3D Gaussians of Interest with an Optimizable Open-vocabulary Semantic-space Hyperplane. Proceedings of the 32nd ACM International Conference on Multimedia, 5328-5337. DOI: 10.1145/3664647.3680852. Online publication date: 28 Oct 2024.
    • (2024) Refracting Once is Enough: Neural Radiance Fields for Novel-View Synthesis of Real Refractive Objects. Proceedings of the 2024 International Conference on Multimedia Retrieval, 694-703. DOI: 10.1145/3652583.3658000. Online publication date: 30 May 2024.
    • (2024) SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields. Computer Graphics Forum 43, 7. DOI: 10.1111/cgf.15255. Online publication date: 7 Nov 2024.
    • (2024) Efficient Sampling and Volume Rendering Strategy for Neural Field SLAM. 2024 IEEE International Conference on Multimedia and Expo (ICME), 1-6. DOI: 10.1109/ICME57554.2024.10688267. Online publication date: 15 Jul 2024.
    • (2024) Point'n Move: Interactive scene object manipulation on Gaussian splatting radiance fields. IET Image Processing 18, 12, 3507-3517. DOI: 10.1049/ipr2.13190. Online publication date: 26 Jul 2024.
    • (2024) TranSR-NeRF: Super-resolution neural radiance field for reconstruction and rendering of weak and repetitive texture of aviation damaged functional surface. Chinese Journal of Aeronautics 37, 11, 447-461. DOI: 10.1016/j.cja.2024.03.016. Online publication date: Nov 2024.
    • (2024) Rotation invariance and equivariance in 3D deep learning: a survey. Artificial Intelligence Review 57, 7. DOI: 10.1007/s10462-024-10741-2. Online publication date: 7 Jun 2024.
