Universal super-resolution for face and non-face regions via a facial feature network

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

This study proposes a fusion method for universal super-resolution (SR) that produces good results for both face and non-face regions. We observed that most general-purpose SR networks fail to sufficiently reconstruct facial features, while face-specific networks degrade performance on non-face regions. To reconstruct face regions well, face-specific SR networks are trained with the aid of a facial feature network. To preserve performance on non-face regions, a region-adaptive fusion of the face-specific and general-purpose networks is proposed: a face detection algorithm locates face regions, and the detected masks are smoothed to avoid boundary artefacts. The results indicate that the proposed method significantly improves performance on face regions while delivering comparable performance on non-face regions. In addition, the experimental results show that the additional computation can be considerably reduced by sharing the front layers of the two networks.
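The region-adaptive fusion described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `feather` and `fuse` functions, the box-blur feathering, and the toy constant images are all hypothetical stand-ins for the paper's face detector, mask smoothing, and two SR networks.

```python
import numpy as np

def feather(mask, radius=8, passes=3):
    """Feather a binary face mask with repeated separable box blurs
    (which approximate a Gaussian), so the fusion has no hard seam."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    m = mask.astype(np.float64)
    for _ in range(passes):
        # blur along columns, then along rows (separable filtering)
        m = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, m)
        m = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, m)
    return np.clip(m, 0.0, 1.0)

def fuse(face_sr, general_sr, face_boxes, radius=8):
    """Blend the face-specific and general-purpose SR outputs
    region-adaptively, using the smoothed face mask as alpha."""
    mask = np.zeros(face_sr.shape[:2])
    for (y0, x0, y1, x1) in face_boxes:       # boxes from any face detector
        mask[y0:y1, x0:x1] = 1.0
    alpha = feather(mask, radius)[..., None]  # H x W x 1, broadcasts over RGB
    return alpha * face_sr + (1.0 - alpha) * general_sr

# toy demo: two constant "SR outputs" and one detected face box
h, w = 64, 64
face_out = np.full((h, w, 3), 1.0)   # stand-in for the face-specific network
gen_out = np.full((h, w, 3), 0.0)    # stand-in for the general-purpose network
fused = fuse(face_out, gen_out, [(8, 8, 56, 56)])
```

Inside the box the fused image follows the face-specific output, far outside it follows the general-purpose output, and across the feathered boundary the two are mixed smoothly, which is what suppresses visible seams at the mask edge.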


Acknowledgements

This material is based upon work supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (10080619), and also by the Graduate School of YONSEI University Research Scholarship Grants in 2019.

Author information

Corresponding author

Correspondence to J. Kim.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Mun, J., Kim, J. Universal super-resolution for face and non-face regions via a facial feature network. SIViP 14, 1601–1608 (2020). https://doi.org/10.1007/s11760-020-01706-3

