ABSTRACT
Archaeological excavation presents distinctive challenges for digital documentation: deep cultural layers, water-saturated cultural relics with highly reflective surfaces, strict accuracy requirements for digital acquisition and dynamic mapping, and a broad need for multi-scale scanning. Archaeological sites also exhibit complex visual phenomena, such as translucency, scattering, occlusion, and the movement of archaeologists, which make all-round collection, recording, and modeling of dynamic site scenes challenging. Under these conditions, mesh-based reconstruction and tracking typically fail, while alternative methods such as light-field video rely on restrictive viewing conditions that limit interactivity. This paper adopts a sparse-view, video-stream-based method for dynamic scene modeling. It introduces a neural scene flow field representation that models the dynamic scene as a time-varying continuous function of appearance, geometry, and 3D scene motion, and uses a new neural implicit representation to encode the self-similarity of spatiotemporal sparse keyframes. Experiments show that this method constructs accurate three-dimensional models of archaeological artifacts.
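To make the representation concrete: a neural scene flow field, as described above, is a continuous function F(x, y, z, t) returning appearance (color), geometry (density), and 3D scene motion (forward/backward flow) at each spatio-temporal query point. The following is a minimal, untrained sketch of such a field using a tiny numpy MLP; the network sizes, the positional encoding, and all weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(p, num_freqs=4):
    """Map each coordinate to identity plus sin/cos frequency bands."""
    feats = [p]
    for k in range(num_freqs):
        feats.append(np.sin(2.0**k * p))
        feats.append(np.cos(2.0**k * p))
    return np.concatenate(feats, axis=-1)

class SceneFlowFieldMLP:
    """Tiny two-layer MLP stand-in for F(x, y, z, t) (random weights,
    no training) -- a hypothetical sketch, not the paper's network."""
    def __init__(self, in_dim, hidden=32):
        self.w1 = rng.normal(0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        # outputs: 3 rgb + 1 density + 3 forward flow + 3 backward flow
        self.w2 = rng.normal(0, 0.1, (hidden, 10))
        self.b2 = np.zeros(10)

    def __call__(self, xyzt):
        h = np.maximum(positional_encoding(xyzt) @ self.w1 + self.b1, 0.0)
        out = h @ self.w2 + self.b2
        rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))  # sigmoid: colors in [0, 1]
        density = np.log1p(np.exp(out[..., 3]))    # softplus: non-negative
        flow_fwd, flow_bwd = out[..., 4:7], out[..., 7:10]
        return rgb, density, flow_fwd, flow_bwd

# Query the field at a batch of (x, y, z, t) points.
in_dim = 4 * (1 + 2 * 4)  # 4 coords, identity + 4 sin/cos frequency bands
field = SceneFlowFieldMLP(in_dim)
points = rng.uniform(-1, 1, (5, 4))
rgb, density, flow_fwd, flow_bwd = field(points)
print(rgb.shape, density.shape, flow_fwd.shape)  # (5, 3) (5,) (5, 3)
```

In a real system, the MLP would be trained so that volume-rendered colors match the sparse input views and the predicted flow vectors connect corresponding points across adjacent frames; the sketch only shows the input/output structure of the representation.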
Index Terms
- 3D Scene Modeling Method based on Sparse Representation