
Lightweight Parallel Feedback Network for Image Super-Resolution


Abstract

Since deep learning was introduced into super-resolution (SR), SR has achieved remarkable performance improvements. Because high-level features are more informative for reconstruction, most SR methods have a large number of parameters, which restricts their application on resource-constrained devices. A feedback mechanism makes it possible to obtain informative high-level features with few parameters, because it feeds high-level features back to refine low-level ones, which makes it well suited to lightweight networks. However, most feedback networks work in a single feedback manner, refining low-level features only once in each iteration or unit. In this paper, we propose a lightweight parallel feedback network for image super-resolution (LPFN), which enhances the refinement ability of the feedback network. In our method, all feedback blocks feed their outputs back to previous layers in a parallel feedback manner. Based on parallel feedback and residual learning, a local-mirror architecture is proposed. We then propose a dispersion-aware attention residual block (DARB) as the basic block of each feedback block, which calculates the dispersion of pixels along the channel and spatial dimensions. An ensemble method is used to reconstruct the SR image. Finally, we propose a global feedback that feeds the degradation result of the SR image back to the original LR image, supervising the learning of the LR-to-HR mapping function. Experimental results demonstrate that LPFN achieves outstanding performance while occupying few computing resources.
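Only the abstract is available here, so the authors' exact implementation of the DARB and the global feedback is not given. The sketch below is a minimal, hypothetical reading of the description: it uses feature variance ("dispersion") along the spatial dimensions to weight channels and along the channel dimension to weight spatial positions, wraps that attention in a residual block, and realizes the global feedback as a bicubic degradation of the SR output compared against the input LR image. All class and function names, layer sizes, and the choice of variance as the dispersion statistic are assumptions, not the paper's actual design.

```python
# Illustrative PyTorch sketch (assumptions, not the authors' implementation):
# dispersion-aware attention based on feature variance, a residual block
# around it, and a global-feedback loss that degrades SR back to LR scale.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DispersionAttention(nn.Module):
    """Weights features by the variance (dispersion) of pixels computed along
    the spatial dimensions (channel branch) and along the channel dimension
    (spatial branch). Hypothetical formulation."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_fc = nn.Sequential(            # per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(          # per-pixel weights
            nn.Conv2d(1, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        chan_var = x.flatten(2).var(dim=2)          # (B, C): spatial dispersion per channel
        chan_w = self.channel_fc(chan_var).view(b, c, 1, 1)
        spat_var = x.var(dim=1, keepdim=True)       # (B, 1, H, W): channel dispersion per pixel
        spat_w = self.spatial_conv(spat_var)
        return x * chan_w * spat_w


class DARB(nn.Module):
    """Dispersion-aware attention residual block: two convolutions, dispersion
    attention, and a residual connection (hypothetical layout)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = DispersionAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.attn(self.body(x))


def global_feedback_loss(sr: torch.Tensor, lr: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Degrade the SR output back to LR resolution (bicubic downsampling is an
    assumption) and penalize its distance to the original LR input."""
    lr_hat = F.interpolate(sr, scale_factor=1.0 / scale,
                           mode="bicubic", align_corners=False)
    return F.l1_loss(lr_hat, lr)
```

In a full model these blocks would sit inside the parallel feedback loop, with each feedback block's output returned to earlier layers at every iteration; that wiring is omitted here because the abstract does not specify it.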



Funding

This work was supported by the Science Foundation of the Sichuan Science and Technology Department under grant 2021YFH0119 and by Sichuan University under grant 2020SCUNG205.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study conception and design. Software and experiments were performed by Beibei Wang. Material preparation, data collection, and analysis were performed by Beibei Wang, Changjun Liu, and Xiaomin Yang. The first draft of the manuscript was written by Beibei Wang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Binyu Yan.

Ethics declarations

Conflicts of interest/Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, B., Liu, C., Yan, B. et al. Lightweight Parallel Feedback Network for Image Super-Resolution. Neural Process Lett 55, 3225–3243 (2023). https://doi.org/10.1007/s11063-022-11007-0

