
Fusion diversion network for fast, accurate and lightweight single image super-resolution

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

In recent years, deep convolutional neural networks have been widely used for image super-resolution and have achieved impressive performance. As networks grow deeper, reconstruction accuracy improves, but at the cost of a large increase in parameter count and computational complexity, which makes practical deployment increasingly difficult. In this paper, we propose an efficient image super-resolution method based on a fusion diversion network (FDN), in which a diversion and fusion block serves as the basic building module. Through the fusion and diversion mechanism, information is fully exchanged and transferred across the network, effectively improving the representational capacity of the model. Extensive experiments show that, even with far fewer layers, the proposed FDN achieves competitive results in both accuracy and speed.
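The abstract does not specify the internal design of the diversion and fusion block, so the following is only a rough, hypothetical sketch of the general mechanism it names: features are diverted into separate branches, transformed independently, then fused by concatenation and channel mixing. The branch split, the 1x1 convolutions, and the residual connection are all assumptions for illustration, not the paper's actual architecture. Convolutions are reduced to per-pixel channel mixing (numpy only) to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is per-pixel channel mixing: (C_out, C_in) applied to (C_in, H, W)
    return np.einsum("oc,chw->ohw", w, x)

def diversion_fusion_block(x, w_a, w_b, w_fuse):
    # Diversion: split the feature channels into two branches
    c = x.shape[0] // 2
    branch_a, branch_b = x[:c], x[c:]
    # Each branch is transformed independently (here: 1x1 conv + ReLU)
    fa = np.maximum(conv1x1(branch_a, w_a), 0)
    fb = np.maximum(conv1x1(branch_b, w_b), 0)
    # Fusion: concatenate the branches and mix their information together
    fused = conv1x1(np.concatenate([fa, fb], axis=0), w_fuse)
    # A residual connection keeps the block lightweight and easy to train
    return x + fused

channels, h, w = 8, 16, 16
x = rng.standard_normal((channels, h, w))
w_a = rng.standard_normal((4, 4)) * 0.1
w_b = rng.standard_normal((4, 4)) * 0.1
w_fuse = rng.standard_normal((8, 8)) * 0.1
y = diversion_fusion_block(x, w_a, w_b, w_fuse)
print(y.shape)  # (8, 16, 16): the block preserves the feature shape, so it can be stacked
```

Because the block preserves its input shape, several such blocks can be chained, letting branch information interact repeatedly without growing the parameter count the way a monolithic deep stack would.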



Acknowledgments

This work was supported by the Young Top Talents Foundation of China Aerospace Science and Technology Group Co., Ltd., and by the National Natural Science Foundation of China under grant 61773383.

Author information

Corresponding author

Correspondence to Zheng Gu.


About this article


Cite this article

Gu, Z., Chen, L., Zheng, Y. et al. Fusion diversion network for fast, accurate and lightweight single image super-resolution. SIViP 15, 1351–1359 (2021). https://doi.org/10.1007/s11760-021-01866-w

