
Unsupervised Video Face Super-Resolution via Untrained Neural Network Priors



Abstract:

The goal of video face super-resolution is to reliably reconstruct clear face sequences from low-resolution input videos. Recent approaches either directly apply a single-image face super-resolution model or train a specifically designed model from scratch. Compared with single-image face super-resolution, video face super-resolution must not only achieve visually plausible results but also maintain coherence in both space and time. Existing deep-learning-based approaches usually train a model on a large number of external videos, which are often difficult to collect. In this paper, we propose an unsupervised video face super-resolution approach (UVFSR) that leverages the idea of an untrained neural network prior to avoid the need for supervised learning. Specifically, we use a single generative convolutional neural network with random initialization to control the exploration of video generation in an online optimization manner. In addition, we equip this unsupervised paradigm with a new iterative training strategy that makes the optimization process more efficient and better suited to video. Moreover, we introduce a consistency-aware loss based on joint face image and face flow prediction to enforce temporal consistency. In this way, the super-resolution results are plausible not only in the appearance domain but also in the motion domain. Experiments validate the effectiveness of our approach in terms of quantitative metrics and visual quality.
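The untrained-prior idea the abstract describes can be illustrated with a minimal, Deep-Image-Prior-style toy sketch. This is our own construction, not the authors' UVFSR architecture: all sizes, names, and the linear "generator" below are illustrative assumptions. A randomly initialized generator, driven by a fixed random code, is optimized online so that its downsampled output matches the single observed low-resolution frame; no external training data is involved.

```python
import numpy as np

rng = np.random.default_rng(0)
HR, LR = 8, 4  # toy sizes: 8x8 high-res frame, 4x4 low-res observation

# Downsampling operator D: each low-res pixel averages a 2x2 high-res block.
D = np.zeros((LR * LR, HR * HR))
for i in range(LR):
    for j in range(LR):
        for di in range(2):
            for dj in range(2):
                D[i * LR + j, (2 * i + di) * HR + (2 * j + dj)] = 0.25

x_true = rng.random(HR * HR)   # synthetic high-res frame (flattened)
y_lr = D @ x_true              # the only observation the method sees

# "Untrained network prior": a randomly initialized linear generator W
# driven by a fixed random code z, fitted online to y_lr alone.
k = 32
z = rng.standard_normal(k)
W = 0.01 * rng.standard_normal((HR * HR, k))

step = 0.02
for _ in range(500):
    r = D @ (W @ z) - y_lr                   # low-res residual
    W -= step * 2.0 * np.outer(D.T @ r, z)   # gradient of ||D W z - y_lr||^2

# Super-resolved estimate: D @ x_sr now matches y_lr closely. Recovering
# x_true itself relies on the network prior, which this linear toy omits.
x_sr = W @ z
```

In a real untrained-prior method the generator is a deep CNN whose structure supplies the implicit regularization, and the optimization is typically stopped early; UVFSR further couples frames through its iterative training strategy and consistency-aware loss.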
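The general form of a flow-based consistency term can likewise be sketched. The sketch below is an assumption about the generic shape of such a loss, not the paper's exact formulation: it backward-warps the estimate of frame t+1 toward frame t using a known, constant integer flow (for simplicity) and penalizes the squared difference, which is small when the two estimates move coherently. In UVFSR both the frames and the face flow are predicted jointly, so the corresponding term is optimized rather than evaluated against a given flow.

```python
import numpy as np

def warp_constant_flow(frame, dx, dy):
    """Warp a 2-D frame by an integer constant flow (dx, dy),
    with circular boundary handling for simplicity."""
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def consistency_loss(est_t, est_t1, dx, dy):
    """Mean squared difference between the frame-t estimate and the
    frame-(t+1) estimate warped back by the flow. Near zero when the
    two estimates are temporally coherent under that motion."""
    warped = warp_constant_flow(est_t1, -dx, -dy)
    return float(np.mean((est_t - warped) ** 2))
```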
Date of Conference: 30 June 2024 - 05 July 2024
Date Added to IEEE Xplore: 09 September 2024
Conference Location: Yokohama, Japan

