Abstract
Using a high-resolution image as a reference (Ref) to recover a low-resolution (LR) image with similar textures can restore lost texture details and achieve more promising super-resolution (SR) results. Existing reference-based image super-resolution approaches use a texture transformer network to add texture features to the SR network, but they neglect the importance of structural consistency between the texture transformer network and the SR network. We propose a novel image super-resolution algorithm with a unified structure and reverse network (SR-USRN), which uses the same network both to transfer texture and to perform image SR. SR-USRN consists of three steps: training the SR main network (SR-MainNet) without Ref, training a reverse network (ReverseNet) to recover the features in SR-MainNet from the Ref image, and combining SR-MainNet and ReverseNet to train the final SR-USRN. We use the Ref image and the LR image together to train SR-MainNet in the first step and share parameters between the SR and texture-transfer processes. This design makes the best use of the Ref images, and the shared network structure lets the texture transformer learn exactly what the SR network needs. The ReverseNet is trained to map the Ref image to the corresponding features in SR-MainNet. Extensive experiments demonstrate that SR-USRN achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
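The three-stage schedule described above can be sketched in miniature. This is a toy illustration, not the paper's implementation: `ScalarNet` stands in for a full SR network, the "features" are simply its output, and the fusion rule and learning rates are assumptions made purely to show the order of the three training stages.

```python
import random

random.seed(0)

class ScalarNet:
    """Toy stand-in for a network: y = w * x, trained by gradient descent."""
    def __init__(self):
        self.w = random.uniform(0.0, 1.0)

    def __call__(self, x):
        return self.w * x

    def step(self, x, target, lr=0.05):
        # Squared-error loss L = (w*x - target)^2; dL/dw = 2*(w*x - target)*x
        pred = self.w * x
        self.w -= lr * 2.0 * (pred - target) * x
        return (pred - target) ** 2

# Toy data: "LR input" x, "HR target" 2*x; the "Ref" carries the same signal.
data = [(x, 2.0 * x, 2.0 * x) for x in (0.2, 0.5, 0.8)]

main_net = ScalarNet()      # SR-MainNet stand-in
reverse_net = ScalarNet()   # ReverseNet stand-in

# Stage 1: train SR-MainNet alone to map the LR input to the HR target.
for _ in range(200):
    for lr_x, hr_y, _ref in data:
        main_net.step(lr_x, hr_y)

# Stage 2: freeze SR-MainNet; train ReverseNet so that its output on the
# Ref image matches SR-MainNet's internal features (here: its output).
for _ in range(200):
    for lr_x, _hr_y, ref in data:
        target_feat = main_net(lr_x)
        reverse_net.step(ref, target_feat)

# Stage 3: combine both nets and fine-tune jointly on the final SR target,
# fusing the SR branch and the texture (Ref) branch.
for _ in range(100):
    for lr_x, hr_y, ref in data:
        fused = 0.5 * (main_net(lr_x) + reverse_net(ref))
        main_net.step(lr_x, hr_y)
        reverse_net.step(ref, hr_y)
```

In this sketch, stage 1 converges `main_net` toward the HR mapping, stage 2 teaches `reverse_net` to reproduce the main network's features from the Ref signal, and stage 3 fine-tunes both jointly, mirroring the order of the three steps in SR-USRN.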
Notes
CUFED5 is available at https://zzutk.github.io/SRNTT-Project-Page/.
Sun80 and Urban100 are available at https://github.com/jbhuang0604/SelfExSR.
Acknowledgements
This project is supported by the National Natural Science Foundation of China (Grant No. 52075483).
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
About this article
Cite this article
Ji, J., Wang, X. SR-USRN: learning image super-resolution with unified structure and reverse network. SIViP 17, 1077–1085 (2023). https://doi.org/10.1007/s11760-022-02314-z