
SR-USRN: learning image super-resolution with unified structure and reverse network

  • Original Paper
  • Signal, Image and Video Processing

Abstract

Using a high-resolution image with similar texture as a reference (Ref) to recover a low-resolution (LR) image can restore the lost texture details and achieve more promising super-resolution (SR) results. Existing reference-based image super-resolution approaches use a texture transformer network to add texture features to the SR network. However, they neglect the importance of structural consistency between the texture transformer network and the SR network. We propose a novel image super-resolution algorithm with a unified structure and reverse network (SR-USRN), which uses the same network both to transform texture and to perform image SR. SR-USRN is trained in three steps: training the SR main network (SR-MainNet) without Ref, training a reverse network (ReverseNet) to recover the features in SR-MainNet from the Ref image, and combining SR-MainNet and ReverseNet to train the final SR-USRN. In the first step, we use the Ref image and the LR image together to train SR-MainNet and share its parameters between the SR and texture-transformation processes. This design makes the best use of the Ref images, and the shared network structure lets the texture transformer know what the SR network really needs. The ReverseNet is trained to transform the Ref image into the corresponding features in SR-MainNet. Extensive experiments demonstrate that SR-USRN achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
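To make the staged training described above concrete, the snippet below is a minimal PyTorch sketch of the three steps, not the authors' implementation: the SRMainNet and ReverseNet definitions, the choice of matched feature, the plain L1 losses, the data-loader format (LR, HR, Ref triples) and all hyper-parameters are illustrative assumptions, and the step-1 handling of the Ref image is simplified to plain LR-to-HR training. Only the ordering of the three stages follows the abstract.

```python
# Minimal sketch of the three-step SR-USRN training schedule (assumptions,
# not the paper's architecture): stand-in networks, L1 losses, 4x scale.
import torch
import torch.nn as nn


class SRMainNet(nn.Module):
    """Stand-in SR backbone; the real SR-MainNet layout is not given here."""
    def __init__(self, scale=4, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[nn.Conv2d(channels, channels, 3, padding=1)
                                    for _ in range(4)])
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr):
        feat = self.body(self.head(lr))   # intermediate features that
        return self.tail(feat), feat      # ReverseNet will try to reproduce


class ReverseNet(nn.Module):
    """Stand-in network mapping the Ref image into SR-MainNet's feature space."""
    def __init__(self, channels=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=4, padding=1),  # Ref is HR-sized
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, ref):
        return self.encode(ref)


def train_sr_usrn(loader, device="cpu"):
    """loader is assumed to yield (lr_img, hr_img, ref_img) tensor triples."""
    main_net, rev_net = SRMainNet().to(device), ReverseNet().to(device)
    l1 = nn.L1Loss()

    # Step 1: train SR-MainNet on LR -> HR (the abstract's joint use of the
    # Ref image in this step is simplified away here).
    opt = torch.optim.Adam(main_net.parameters(), lr=1e-4)
    for lr_img, hr_img, _ in loader:
        sr, _ = main_net(lr_img.to(device))
        loss = l1(sr, hr_img.to(device))
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: freeze SR-MainNet and train ReverseNet to recover its features
    # from the Ref image, so texture transfer shares the same structure.
    main_net.requires_grad_(False)
    opt = torch.optim.Adam(rev_net.parameters(), lr=1e-4)
    for lr_img, _, ref_img in loader:
        with torch.no_grad():
            _, target_feat = main_net(lr_img.to(device))
        loss = l1(rev_net(ref_img.to(device)), target_feat)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 3: combine both networks and fine-tune the full SR-USRN end to end.
    main_net.requires_grad_(True)
    opt = torch.optim.Adam(list(main_net.parameters()) +
                           list(rev_net.parameters()), lr=1e-5)
    for lr_img, hr_img, ref_img in loader:
        sr, feat = main_net(lr_img.to(device))
        loss = l1(sr, hr_img.to(device)) + l1(rev_net(ref_img.to(device)), feat)
        opt.zero_grad(); loss.backward(); opt.step()

    return main_net, rev_net
```

In the actual method the Ref features produced by ReverseNet would be fused into SR-MainNet rather than only matched by a loss; the sketch ties them together with a feature-matching term purely to illustrate the staged training order.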


Notes

  1. CUFED5 is available at https://zzutk.github.io/SRNTT-Project-Page/.

  2. Sun80 and Urban100 are available at https://github.com/jbhuang0604/SelfExSR.


Acknowledgements

This project is supported by the National Natural Science Foundation of China (Grant No. 52075483).

Author information


Corresponding author

Correspondence to Xuanyin Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ji, J., Wang, X. SR-USRN: learning image super-resolution with unified structure and reverse network. SIViP 17, 1077–1085 (2023). https://doi.org/10.1007/s11760-022-02314-z
