
Dual Residual Global Context Attention Network for Super-Resolution

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12240)

Abstract

Recently, deep convolutional neural networks have enabled single image super-resolution (SISR) to achieve impressive performance and visual quality. However, most current methods fail to model long-distance dependencies effectively: they either consider only channel-wise attention, or they adopt a self-attention mechanism whose parameter count and memory cost increase dramatically, which limits the representation capability of CNNs and hinders deployment on edge devices. To address this problem, we propose a novel network named Dual Residual Global Context Attention Network (DRGCAN), which is lightweight and can effectively model global context information. Specifically, we propose a dual residual-in-residual structure that introduces dual residual learning into the residual blocks of each residual group of the traditional residual-in-residual (RIR) design, so as to fully capture rich feature information. Furthermore, we propose a global context attention mechanism inserted at the end of each residual group to model long-distance dependencies effectively and improve the representation ability of the model. Extensive experiments show that the proposed model is superior to state-of-the-art SR methods in both visual quality and memory footprint.
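To make the two building blocks described above concrete, the following is a minimal PyTorch sketch of a dual residual block, a GCNet-style global context attention module, and a residual group that places the attention at the end of its block sequence, as the abstract describes. The class names, channel counts, reduction ratio, and exact placement of the inner skip connections are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the DRGCAN building blocks (assumed layer choices).
import torch
import torch.nn as nn


class GlobalContextAttention(nn.Module):
    """GCNet-style global context attention: a single-channel softmax pooling
    gathers one global descriptor, which is transformed by a bottleneck of
    1x1 convolutions and added back to every spatial position."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.context_mask = nn.Conv2d(channels, 1, kernel_size=1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.LayerNorm([channels // reduction, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # (b, 1, h*w): attention weights over all spatial positions
        mask = self.context_mask(x).view(b, 1, h * w).softmax(dim=-1)
        # (b, c, 1, 1): global context vector as a weighted sum of features
        context = torch.bmm(x.view(b, c, h * w), mask.transpose(1, 2)).view(b, c, 1, 1)
        return x + self.transform(context)


class DualResidualBlock(nn.Module):
    """Residual block with an extra (dual) skip: one around the first
    convolution and one around the whole block (assumed arrangement)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        inner = x + self.act(self.conv1(x))  # inner residual connection
        return x + self.conv2(inner)         # outer residual connection


class ResidualGroup(nn.Module):
    """Several dual residual blocks followed by global context attention,
    wrapped in a long skip connection (residual-in-residual)."""

    def __init__(self, channels: int, n_blocks: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            *[DualResidualBlock(channels) for _ in range(n_blocks)],
            GlobalContextAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)


if __name__ == "__main__":
    x = torch.randn(1, 64, 48, 48)
    print(ResidualGroup(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```

Note that the global context module adds only a few 1x1 convolutions per residual group and computes a single pooled descriptor, so its cost grows linearly with the number of pixels; this is the usual argument for why such attention stays lightweight compared with full pairwise self-attention, whose affinity matrix grows quadratically with spatial size.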

This work is supported by the Hainan Provincial Natural Science Foundation of China [No. 2019RC018], the National Natural Science Foundation of China [61762033], the Natural Science Foundation of Hainan [617048, 2018CXTD333], and the Dongguan Introduction Program of Leading Innovative and Entrepreneurial Talents.



Author information


Corresponding author

Correspondence to Jingbing Li.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhou, J. et al. (2020). Dual Residual Global Context Attention Network for Super-Resolution. In: Sun, X., Wang, J., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2020. Lecture Notes in Computer Science, vol 12240. Springer, Cham. https://doi.org/10.1007/978-3-030-57881-7_15


  • DOI: https://doi.org/10.1007/978-3-030-57881-7_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-57880-0

  • Online ISBN: 978-3-030-57881-7

  • eBook Packages: Computer Science, Computer Science (R0)
