Abstract:
Thanks to advancements in deep neural networks and the availability of large datasets, face recognition (FR) has made remarkable progress in recent years, achieving a level of accuracy comparable to that of humans. Nonetheless, current state-of-the-art methods primarily depend on training with 2D facial images (RGB), which may fail to capture certain discriminative geometric facial features. Unfortunately, only small datasets of 3D point-cloud faces, acquired with specialized sensors such as laser or infrared scanners, are available, which makes training large 3D FR models challenging. In this paper, we introduce a novel approach that enhances state-of-the-art 2D facial recognition by adapting 3D object recognition models trained on 3D faces reconstructed from single RGB images. In experiments training ResNet, PointNet++, and PointNeXt models on 1k, 2k, 5k, and 10k subjects with the ArcFace loss, we observed that the combined model achieves higher accuracy than each individual model. This improvement can be attributed to the diverse and complementary facial features extracted by the different models. Leveraging 3D reconstructed faces is therefore a viable way to address the scarcity of 3D scanned faces, although the extent of the improvement depends on the quality of the reconstruction process.
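The abstract names the ArcFace loss as the training objective for the 2D and 3D branches but gives no implementation details. The PyTorch sketch below is only an illustration of a standard additive angular margin (ArcFace) head, not the authors' code; the scale and margin values (s=64, m=0.5) are assumptions taken from the original ArcFace paper, and the embedding and class dimensions are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin (ArcFace) classification head.

    Illustrative sketch only; hyperparameters follow common ArcFace defaults
    and may differ from those used in the paper.
    """
    def __init__(self, embedding_dim: int, num_classes: int, s: float = 64.0, m: float = 0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and class weights
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin m to the target-class angle only
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        # Scaled cross-entropy over the margin-adjusted logits
        return F.cross_entropy(self.s * logits, labels)
```

In a setup like the one described, such a head could sit on top of the ResNet image embedding and the PointNet++/PointNeXt point-cloud embeddings, with the "combined model" formed, for example, by concatenating or averaging the per-branch embeddings before matching; the abstract does not specify the fusion scheme, so this is only one plausible reading.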
Date of Conference: 06-09 November 2023
Date Added to IEEE Xplore: 18 December 2023