DOI: 10.1145/3588155.3588162

3D Scene Modeling Method based on Sparse Representation

Published: 12 June 2023

ABSTRACT

Archaeological exploration presents distinctive challenges: deep cultural layers, water-saturated relics and heritage objects, highly reflective surfaces, demanding accuracy requirements for digital acquisition and dynamic mapping, and a broad need for multi-scale scanning. Archaeological sites also exhibit complex phenomena such as translucency, scattering, occlusion, and the movement of archaeologists, which make comprehensive collection, recording, and modeling of dynamic scenes difficult. Under these conditions, mesh-based reconstruction and tracking usually fail, while alternatives such as light-field video typically depend on restricted viewing conditions that limit interactivity. This paper adopts a sparse-view, video-stream-based method for dynamic scene modeling. A neural scene flow field representation is introduced to model the dynamic scene as a time-varying continuous function of appearance, geometry, and 3D scene motion, and a new neural implicit representation design encodes the self-similarity of spatiotemporally sparse keyframes. Experiments show that this method constructs accurate three-dimensional models of archaeological artifacts.
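To make the representation concrete, the following is a minimal PyTorch sketch of a neural scene flow field in the spirit described above: an MLP that maps a positionally encoded space-time query (x, y, z, t) to radiance (appearance), density (geometry), and forward/backward 3D scene flow (motion). The layer widths, encoding, and output heads are illustrative assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SceneFlowField(nn.Module):
    """Minimal sketch of a neural scene-flow-field MLP.

    Maps a sinusoidally encoded 4D query (x, y, z, t) to color,
    density, and forward/backward 3D scene flow. Sizes and the
    encoding are assumptions, not the authors' exact design.
    """

    def __init__(self, num_freqs: int = 10, hidden: int = 256):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 4 * (2 * num_freqs)  # sin/cos encoding of (x, y, z, t)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rgb_head = nn.Linear(hidden, 3)    # appearance (radiance)
        self.sigma_head = nn.Linear(hidden, 1)  # geometry (volume density)
        self.flow_head = nn.Linear(hidden, 6)   # motion: forward + backward flow

    def encode(self, p: torch.Tensor) -> torch.Tensor:
        # Standard sinusoidal positional encoding over each input dimension.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=p.device)
        angles = p[..., None] * freqs                  # (..., 4, num_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
        return enc.flatten(start_dim=-2)               # (..., 4 * 2 * num_freqs)

    def forward(self, xyzt: torch.Tensor):
        h = self.mlp(self.encode(xyzt))
        rgb = torch.sigmoid(self.rgb_head(h))          # color in [0, 1]
        sigma = torch.relu(self.sigma_head(h))         # non-negative density
        flow_fwd, flow_bwd = self.flow_head(h).chunk(2, dim=-1)
        return rgb, sigma, flow_fwd, flow_bwd

# Usage: query the field at a batch of space-time sample points.
model = SceneFlowField()
xyzt = torch.rand(1024, 4)                             # (x, y, z, t) samples
rgb, sigma, flow_fwd, flow_bwd = model(xyzt)
```

In such designs, the predicted forward/backward flow links a point at time t to its positions at adjacent keyframes, which is what allows temporally sparse observations to supervise one another; the rendering and loss machinery needed to train the field is omitted here.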


Published in: APIT '23: Proceedings of the 2023 5th Asia Pacific Information Technology Conference, February 2023, 192 pages. ISBN: 9781450399500. DOI: 10.1145/3588155. Copyright © 2023 ACM. Publisher: Association for Computing Machinery, New York, NY, United States.
