Mesh-Based DGCNN: Semantic Segmentation of Textured 3-D Urban Scenes



Abstract:

Textured 3-D mesh is one of the final user products in photogrammetry and remote sensing. However, research on the semantic segmentation of complex urban scenes represented by textured 3-D meshes is still in its infancy. We present a mesh-based dynamic graph convolutional neural network (DGCNN) for the semantic segmentation of textured 3-D meshes. To represent each mesh facet, composite input feature vectors are constructed by concatenating the face-inherent features, i.e., the XYZ coordinates of the center of gravity (CoG), texture values, and normal vectors (NVs). A texture fusion module is embedded into the proposed mesh-based DGCNN to generate high-level semantic features from the high-resolution texture information, which is useful for semantic segmentation. We achieve competitive accuracies when the proposed method is applied to the SUM mesh dataset. The overall accuracy (OA), Kappa coefficient (Kap), mean precision (mP), mean recall (mR), mean F1 score (mF1), and mean intersection over union (mIoU) are 93.3%, 88.7%, 79.6%, 83.0%, 80.7%, and 69.6%, respectively. In particular, the OA, mean class accuracy (mAcc), mIoU, and mF1 increase by 0.3%, 12.4%, 3.4%, and 6.9%, respectively, compared with the state-of-the-art method.
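As a concrete illustration of the composite per-facet input described in the abstract, the sketch below concatenates the CoG coordinates, a texture value, and the unit face normal into a 9-D feature per facet. This is a minimal reconstruction under assumed array-based inputs (the function name facet_features and the per-face RGB reduction are illustrative assumptions, not the authors' code); in the paper, the high-resolution texture is additionally processed by the texture fusion module, which is not shown here.

    # Minimal sketch, assuming vertices/faces arrays and one averaged
    # RGB value per face; not the authors' implementation.
    import numpy as np

    def facet_features(vertices: np.ndarray,
                       faces: np.ndarray,
                       face_rgb: np.ndarray) -> np.ndarray:
        """vertices: (V, 3) float coords; faces: (F, 3) vertex indices;
        face_rgb: (F, 3) per-face texture values in [0, 1].
        Returns an (F, 9) array [CoG | texture | normal] per facet."""
        tri = vertices[faces]            # (F, 3, 3) triangle corner coords
        cog = tri.mean(axis=1)           # (F, 3) centers of gravity
        # Face normals from the cross product of two edge vectors.
        n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
        n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12
        return np.concatenate([cog, face_rgb, n], axis=1)  # (F, 9)

The resulting (F, 9) matrix is the kind of composite feature vector that a DGCNN-style network can consume directly, with graph edges built dynamically over facet features rather than over raw points.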
Article Sequence Number: 4402812
Date of Publication: 14 April 2023

