
LBCRN: lightweight bidirectional correction residual network for image super-resolution

Published in: Multidimensional Systems and Signal Processing

Abstract

Currently, single image super-resolution methods based on convolutional neural networks have achieved remarkable results. These methods typically increase the depth and complexity of a network to improve its performance, which increases the network’s computational burden. To solve these problems, this paper proposes a new lightweight bidirectional correction residual network (LBCRN) for image super-resolution. LBCRN includes dominant correction and return correction modules. In the dominant correction module, a feature fusion residual block (FFRB) with fewer parameters is constructed to learn and fuse the extracted features of different layers. Meanwhile, to restore the important features while keeping the network lightweight, a hybrid-attention block (HAB) is also proposed based on the channel and spatial attention mechanisms. Finally, to better constrain the network training, a return correction module is designed by simulating the process of image degradation. Experimental results on multiple datasets show that the proposed LBCRN has better subjective performance and quantitative results than most existing lightweight networks.
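The abstract names two mechanisms: a hybrid-attention block (HAB) combining channel and spatial attention, and a return correction module that constrains training by simulating image degradation. The paper's implementation is not reproduced here; the following is a minimal NumPy sketch of the general ideas under stated simplifications — the MLP weights `w1`/`w2`, the residual HAB wiring, and the use of average pooling as the degradation model are all hypothetical stand-ins, not the authors' design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))                       # (C,) global average pool
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) per-channel gates
    return feat * excite[:, None, None]

def spatial_attention(feat):
    """Spatial attention from channel-wise mean and max maps.
    (A real block would pass these through a small conv; we simply
    average the two maps — a simplification for illustration.)"""
    avg_map = feat.mean(axis=0)                            # (H, W)
    max_map = feat.max(axis=0)                             # (H, W)
    mask = sigmoid(0.5 * (avg_map + max_map))              # (H, W) per-pixel gates
    return feat * mask[None, :, :]

def hybrid_attention_block(feat, w1, w2):
    """Hypothetical HAB: channel attention followed by spatial
    attention, with a residual connection around both."""
    return feat + spatial_attention(channel_attention(feat, w1, w2))

def return_correction_loss(sr, lr, scale):
    """Return-correction idea: degrade the SR output back to LR size
    and compare against the LR input (L1). Average pooling stands in
    for the true degradation operator."""
    c, h, w = sr.shape
    down = sr.reshape(c, h // scale, scale, w // scale, scale).mean(axis=(2, 4))
    return np.abs(down - lr).mean()
```

A quick sanity check of the sketch: applying the block to a random `(C, H, W)` feature map preserves its shape, and the return-correction loss is zero when the "SR" image is an exact nearest-neighbour upsampling of the LR input (since average pooling then inverts the upsampling).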


Data availability

Owing to the nature of this research, the participants of this study did not agree for their data to be shared publicly, so supporting data are not available.


Funding

This work is supported by the National Natural Science Foundation of China (Nos. 61862030, 62072218, and 62261025), by the Natural Science Foundation of Jiangxi Province (Nos. 20182BCB22006, 20181BAB202010, 20192ACB20002, and 20192ACBL21008), and by the Talent project of Jiangxi Thousand Talents Program (No. jxsq2019201056).

Author information

Authors and Affiliations

Authors

Contributions

SH: conceptualization, methodology, writing—original draft preparation. JW: methodology, software, writing—original draft preparation. YY: supervision, formal analysis, writing—review and editing. WW: writing—review and editing. GL: data curation, software.

Corresponding author

Correspondence to Yong Yang.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Huang, S., Wang, J., Yang, Y. et al. LBCRN: lightweight bidirectional correction residual network for image super-resolution. Multidim Syst Sign Process 34, 341–364 (2023). https://doi.org/10.1007/s11045-023-00866-y

