FG-SRGAN: A Feature-Guided Super-Resolution Generative Adversarial Network for Unpaired Image Super-Resolution

  • Conference paper
Advances in Neural Networks – ISNN 2019 (ISNN 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11554)


Abstract

Recently, the performance of single-image super-resolution has been significantly improved by convolutional neural networks (CNNs). However, most of these networks are trained on paired images and take bicubic-downsampled images as inputs. This is impractical when super-resolving real-world low-resolution images, since no ground-truth high-resolution counterparts of the low-resolution images are available. To tackle this challenge, a Feature-Guided Super-Resolution Generative Adversarial Network (FG-SRGAN) for unpaired image super-resolution is proposed in this paper. FG-SRGAN introduces a guidance module that reduces the space of possible mapping functions and helps learn the correct mapping from the low-resolution domain to the high-resolution domain. Furthermore, we treat the outputs of the guidance module as fake examples, which can be leveraged through an additional adversarial loss. This benefits the main task, as it forces FG-SRGAN to learn valid representations for super-resolution. When applied to super-resolve real-world low-resolution face images, FG-SRGAN achieves satisfactory performance both qualitatively and quantitatively.
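The dual adversarial setup described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the toy layer shapes, the use of a separate patch discriminator for the guidance branch, and the 0.1 loss weight are assumptions made for illustration. It only shows how the super-resolved output and the guidance-module output can both be treated as fake examples under adversarial losses.

# Minimal sketch of the two adversarial terms described in the abstract.
# Assumptions: PyTorch, toy convolutional modules, a separate discriminator
# for the guidance branch, and an illustrative 0.1 weight; none of these are
# claimed to match the paper's exact architecture.
import torch
import torch.nn as nn

class FGGenerator(nn.Module):
    """Toy generator with an intermediate guidance branch and a x4 SR head."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.guidance_head = nn.Conv2d(64, 3, 3, padding=1)          # guidance-module output
        self.sr_head = nn.Sequential(nn.Conv2d(64, 3 * 16, 3, padding=1),
                                     nn.PixelShuffle(4))              # x4 super-resolved output

    def forward(self, lr):
        feat = self.body(lr)
        return self.sr_head(feat), self.guidance_head(feat)

class PatchDiscriminator(nn.Module):
    """Toy patch discriminator producing per-location real/fake logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1),
                                 nn.LeakyReLU(0.2),
                                 nn.Conv2d(64, 1, 4, padding=1))

    def forward(self, x):
        return self.net(x)

bce = nn.BCEWithLogitsLoss()

def generator_adv_loss(d_sr, d_guide, sr, guide):
    """Both the SR image and the guidance output are treated as fake examples,
    so the generator is pushed to make both look realistic to the discriminators."""
    logits_sr = d_sr(sr)
    logits_guide = d_guide(guide)
    loss_sr = bce(logits_sr, torch.ones_like(logits_sr))
    loss_guide = bce(logits_guide, torch.ones_like(logits_guide))
    return loss_sr + 0.1 * loss_guide                                 # 0.1 is an illustrative weight

# Toy usage with a random unpaired low-resolution batch.
gen, d_sr, d_guide = FGGenerator(), PatchDiscriminator(), PatchDiscriminator()
lr_batch = torch.randn(2, 3, 32, 32)
sr, guide = gen(lr_batch)            # sr: 2x3x128x128, guide: 2x3x32x32
loss_g = generator_adv_loss(d_sr, d_guide, sr, guide)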

S. Lian and H. Zhou contributed equally.

Acknowledgement

This project was partially supported by the National Natural Science Foundation of China (Grant No. 61671104) and the National Major Scientific Instruments Project (Grant No. 2014YQ24044501).

Author information

Corresponding author

Correspondence to Yi Sun.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Lian, S., Zhou, H., Sun, Y. (2019). FG-SRGAN: A Feature-Guided Super-Resolution Generative Adversarial Network for Unpaired Image Super-Resolution. In: Lu, H., Tang, H., Wang, Z. (eds) Advances in Neural Networks – ISNN 2019. ISNN 2019. Lecture Notes in Computer Science(), vol 11554. Springer, Cham. https://doi.org/10.1007/978-3-030-22796-8_17

  • DOI: https://doi.org/10.1007/978-3-030-22796-8_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-22795-1

  • Online ISBN: 978-3-030-22796-8

  • eBook Packages: Computer Science, Computer Science (R0)
