
Neural Segmentation Field in 3D Scene


Abstract:

Neural Radiance Fields (NeRF) represent a 3D scene implicitly as a neural network that takes a 3D position and viewing direction as input and predicts the corresponding color and volume density. With the learned representation, NeRF can render arbitrary views of the 3D scene by querying the corresponding 3D positions and viewing directions for all pixels in a view. Beyond color, however, users often also care about semantic information in the 3D scene, e.g., object segmentation or semantic segmentation. We therefore propose the neural segmentation field, an implicit representation of the segmentation of a 3D scene as a neural network built on top of a pre-trained 3D scene representation such as NeRF. Specifically, given a pre-trained NeRF and a set of 2D segmentation maps with known camera parameters, we learn a neural segmentation field that can render 2D segmentation maps for arbitrary viewpoints. Experimental results on the Replica dataset show that our model achieves high segmentation quality (accuracy > 0.986 and mean intersection over union (mIoU) > 0.91) with a small model size (< 0.4 MB) for scenes with more than 25 semantic classes.
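The abstract describes rendering 2D segmentation maps from a 3D field, which in NeRF-style pipelines is done by compositing per-sample predictions along each camera ray with the pre-trained NeRF's densities. The sketch below is a minimal, hypothetical illustration of that compositing step (not the authors' implementation): `composite_segmentation` and its arguments are assumed names, and the per-sample class distributions stand in for the segmentation field's outputs.

```python
import math

def composite_segmentation(sigmas, class_probs, deltas):
    """Volume-render per-sample class distributions along one ray.

    sigmas:      volume densities from the pre-trained NeRF, one per sample
    class_probs: per-sample class probability vectors from the segmentation field
    deltas:      distances between consecutive samples along the ray
    Returns the rendered class distribution for the pixel (one value per class).
    """
    n_classes = len(class_probs[0])
    rendered = [0.0] * n_classes
    transmittance = 1.0  # fraction of light reaching the current sample
    for sigma, probs, delta in zip(sigmas, class_probs, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        weight = transmittance * alpha          # its contribution to the pixel
        for c in range(n_classes):
            rendered[c] += weight * probs[c]
        transmittance *= 1.0 - alpha            # attenuate for later samples
    return rendered

# A single effectively opaque sample of class 1 dominates the pixel:
pixel = composite_segmentation(
    sigmas=[1e9],
    class_probs=[[0.0, 1.0, 0.0]],
    deltas=[1.0],
)
```

This mirrors NeRF's color rendering equation with class probabilities in place of RGB values, which is one reason the segmentation head can stay small: the geometry (density) is reused from the frozen NeRF.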
Date of Conference: 29 October 2023 - 01 November 2023
Date Added to IEEE Xplore: 01 April 2024
Conference Location: Pacific Grove, CA, USA
