Abstract:
The recent emergence of deep learning neural networks has propelled advancements in the field of face super-resolution. While these deep learning-based methods have shown significant performance improvements, they depend overwhelmingly on fixed, spatially shared kernels within standard convolutional layers. This neglects the diversity of facial structures and regions, and consequently such methods struggle to reconstruct high-fidelity face images. Since a face is a highly structured object, its structural features are crucial for representing and reconstructing face images. To this end, we introduce a structure prior-aware dynamic network (SPADNet) that leverages facial structure priors to generate structure-aware dynamic kernels, enabling distinct super-resolution of different face images. Given that spatially shared kernels are ill-suited to representing specific regions, a local structure-adaptive convolution (LSAC) is devised to characterize the local relations of facial features, yielding more precise texture representation. Meanwhile, a global structure-aware convolution (GSAC) is designed to capture global facial contours and guarantee structural consistency. Together, these strategies form a unified face reconstruction framework that reconciles the distinct representation of diverse face images with the structural fidelity of each individual face. Extensive experiments confirm the superiority of the proposed SPADNet over state-of-the-art methods. The source code of the proposed method will be available at https://github.com/wcy-cs/SPADNet.
Published in: IEEE Transactions on Biometrics, Behavior, and Identity Science (Volume: 6, Issue: 3, July 2024)
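To make the core idea of structure prior-guided dynamic kernels concrete, below is a minimal PyTorch sketch of a spatially varying (dynamic) depthwise convolution whose per-pixel kernels are predicted from a facial structure prior such as a parsing map. This is an illustrative assumption of how such a layer could look, not the authors' SPADNet implementation; the class name StructureAwareDynamicConv and its hyper-parameters are hypothetical.

```python
# Hypothetical sketch: a dynamic convolution conditioned on a facial structure prior.
# Not the official SPADNet code; see https://github.com/wcy-cs/SPADNet for the release.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureAwareDynamicConv(nn.Module):
    """Depthwise convolution whose k x k kernel varies per pixel,
    predicted from a structure prior map (illustrative only)."""

    def __init__(self, channels: int, prior_channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Small head mapping the structure prior to per-pixel, per-channel kernel weights.
        self.kernel_head = nn.Sequential(
            nn.Conv2d(prior_channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels * kernel_size * kernel_size, 1),
        )

    def forward(self, x: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size
        # Predict a distinct k*k kernel for every channel and spatial location.
        kernels = self.kernel_head(prior)                 # (B, C*k*k, H, W)
        kernels = kernels.view(b, c, k * k, h * w)
        kernels = F.softmax(kernels, dim=2)               # normalize each local kernel
        # Gather k x k neighborhoods of the input features.
        patches = F.unfold(x, k, padding=k // 2)          # (B, C*k*k, H*W)
        patches = patches.view(b, c, k * k, h * w)
        # Weighted sum over each neighborhood = spatially adaptive filtering.
        return (kernels * patches).sum(dim=2).view(b, c, h, w)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)     # face feature map
    parsing = torch.randn(1, 19, 32, 32)   # structure prior, e.g. parsing logits
    layer = StructureAwareDynamicConv(channels=64, prior_channels=19)
    print(layer(feats, parsing).shape)     # torch.Size([1, 64, 32, 32])
```

In this sketch the kernel weights change from pixel to pixel according to the prior, which captures the contrast the abstract draws with fixed, spatially shared kernels in standard convolutional layers.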