Learning Degradation for Real-World Face Super-Resolution

  • Conference paper
Advances in Computer Graphics (CGI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14496)

Abstract

Acquiring degraded faces paired with corresponding high-resolution (HR) faces is critical for real-world face super-resolution (SR). To generate low-resolution (LR) faces whose degradation resembles that of real-world scenarios, most approaches learn a deterministic mapping from HR faces to LR faces. However, such deterministic models cannot capture the varied degradations of real-world LR faces, which limits the performance of subsequent face SR models. In this work, we learn a degradation model based on conditional generative adversarial networks (cGANs). Specifically, we propose a simple and effective weight-aware content loss that adaptively assigns different content losses to the LR faces generated from the same HR face under different noise vectors. This significantly improves the diversity of the generated LR faces while keeping their degradation similar to that of real-world LR faces. Compared with previous degradation models, the proposed model generates HR-LR pairs that better cover the various degradation cases of real-world LR faces and further improve the performance of face SR models in real-world applications. Experiments on four datasets demonstrate that the proposed degradation model helps the face SR model achieve better quantitative and qualitative results.
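
The abstract describes the weight-aware content loss only at a high level, so its exact formulation is not available here. As an illustration only, the sketch below shows one plausible way such a loss could be written in PyTorch: several LR faces are generated from the same HR face under different noise vectors, and their content losses against a bicubically downsampled reference are re-weighted so that at least one sample stays faithful to the HR content while the others are penalized less, leaving room for diverse degradations. The generator signature `generator(hr, z)`, the rank-based weighting, and all hyperparameters are assumptions made for this sketch, not the authors' formulation.

```python
# Illustrative sketch only, not the paper's released code. A "weight-aware"
# content loss for a cGAN degradation generator G(hr, z) -> lr: LR faces
# produced from the same HR face with different noise vectors receive
# different content-loss weights, which preserves content without collapsing
# all noise vectors onto the same output.
import torch
import torch.nn.functional as F


def weight_aware_content_loss(generator, hr, noise_dim=128, num_samples=4,
                              scale=4, min_weight=0.1):
    """Combine per-sample content losses with sample-dependent weights.

    `generator(hr, z)` is an assumed interface: it maps an HR batch and a
    noise vector to an LR batch of spatial size (H / scale, W / scale).
    """
    # Reference LR content from plain bicubic downsampling of the HR face.
    ref_lr = F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic",
                           align_corners=False)

    per_sample = []
    for _ in range(num_samples):
        z = torch.randn(hr.size(0), noise_dim, device=hr.device)
        fake_lr = generator(hr, z)
        # Per-image L1 distance to the bicubic reference.
        per_sample.append(
            F.l1_loss(fake_lr, ref_lr, reduction="none").mean(dim=(1, 2, 3)))
    losses = torch.stack(per_sample, dim=0)  # (num_samples, batch)

    # Weight-aware combination: the sample closest to the reference keeps the
    # full weight, the remaining samples are down-weighted, so the generator
    # is free to produce diverse degradations for the other noise vectors.
    ranks = losses.argsort(dim=0).argsort(dim=0).float()  # 0 = closest sample
    weights = 1.0 - ranks / max(num_samples - 1, 1)
    weights = weights.clamp(min=min_weight)
    return (weights.detach() * losses).mean()
```

In a full training loop, a term like this would be combined with the usual cGAN adversarial loss on the generated LR faces; the rank-based weighting is only one way to make the content penalty sample-dependent, and the loss actually used in the paper may differ.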

Acknowledgements

This research was supported in part by the National Natural Science Foundation of China (62072347, U1903214, 62071338, 61876135), in part by the Natural Science Foundation of Hubei Province (2018CFA024, 2019CFB472), and in part by the Hubei Province Technological Innovation Major Project (No. 2018AAA062).

Author information

Corresponding author

Correspondence to Jun Chen.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Chen, J., Chen, J., Wang, X., Xu, D., Liang, C., Han, Z. (2024). Learning Degradation for Real-World Face Super-Resolution. In: Sheng, B., Bi, L., Kim, J., Magnenat-Thalmann, N., Thalmann, D. (eds) Advances in Computer Graphics. CGI 2023. Lecture Notes in Computer Science, vol 14496. Springer, Cham. https://doi.org/10.1007/978-3-031-50072-5_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-50072-5_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50071-8

  • Online ISBN: 978-3-031-50072-5

  • eBook Packages: Computer Science, Computer Science (R0)
