Abstract
As NeRF modeling becomes more widely available, there is increasing demand for the ability to flexibly and conveniently exclude unwanted obstructions during the modeling process. Existing methods generally adopt an “ignore” strategy for occlusions, which cannot conveniently and flexibly remove arbitrary occlusions in complex scenes. We propose a new method that requires only a small number of external occlusion annotations to model independent 3D masks for the different occlusions in a scene. This “model first, remove later” occlusion removal strategy allows us to model the scene in a single pass and obtain unobstructed images from any desired viewpoint, with any specific obstruction, or several obstructions, removed. Experimental results on existing datasets and on our synthesized dataset validate the effectiveness of our method and strategy.
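The abstract describes the approach only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea of per-occluder 3D mask fields whose activations suppress density at render time; it is not the authors' implementation, and all names (MaskedNeRF, mask_heads, render) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): each occluder k has its own learned
# 3D mask field m_k(x) in [0, 1]; when rendering, the densities of any
# selected occluders are suppressed before alpha compositing.
import torch
import torch.nn as nn

class MaskedNeRF(nn.Module):
    def __init__(self, num_occluders: int, hidden: int = 64):
        super().__init__()
        # Shared radiance/density field (toy MLP standing in for a full NeRF).
        self.field = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                      # (r, g, b, sigma)
        )
        # One small mask head per occluder, predicting m_k(x) in [0, 1].
        self.mask_heads = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1), nn.Sigmoid())
            for _ in range(num_occluders)
        )

    def forward(self, pts, remove=()):
        """pts: (rays, samples, 3) sample points; remove: occluder ids to drop."""
        out = self.field(pts)
        rgb, sigma = torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
        # Zero out density wherever a selected occluder's mask is active.
        for k in remove:
            m_k = self.mask_heads[k](pts).squeeze(-1)  # (rays, samples)
            sigma = sigma * (1.0 - m_k)
        return rgb, sigma

def render(rgb, sigma, deltas):
    """Standard volume rendering via alpha compositing along each ray."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                            # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1
    )[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                        # (R, 3)

# Usage: model the scene once, then render with occluder 0 removed.
model = MaskedNeRF(num_occluders=2)
pts = torch.rand(4, 8, 3)                  # 4 rays, 8 samples per ray
deltas = torch.full((4, 8), 0.1)
rgb, sigma = model(pts, remove=[0])
pixels = render(rgb, sigma, deltas)
```

The point of the sketch is only the "model first, remove later" pattern: the mask heads are trained alongside the scene, and removal is a render-time switch rather than a re-training step.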
Acknowledgement
This work is supported by the National Natural Science Foundation of China (No. 62372032).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Shi, Z., Zhang, S., Chang, S., Lin, Y. (2025). Multi-3D Occlusion Mask Learning for Flexible Occlusion Removal in Neural Radiance Fields. In: Lin, Z., et al. Pattern Recognition and Computer Vision. PRCV 2024. Lecture Notes in Computer Science, vol 15036. Springer, Singapore. https://doi.org/10.1007/978-981-97-8508-7_35
DOI: https://doi.org/10.1007/978-981-97-8508-7_35
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-8507-0
Online ISBN: 978-981-97-8508-7
eBook Packages: Computer Science (R0)