Image super-resolution reconstruction based on improved Dirac residual network

Published in: Multidimensional Systems and Signal Processing

Abstract

In recent years, convolutional neural networks for image super-resolution have grown progressively deeper in order to improve their nonlinear feature-mapping ability. In existing residual networks, the output of each residual block is added directly to its input through a skip connection to deepen the nonlinear mapping layers. However, there is no guarantee that every such addition improves the network's performance. In this paper, an improved Dirac residual block based on Dirac convolution is proposed, which uses trainable parameters to adaptively balance the convolution path and the skip connection, thereby increasing the nonlinear mapping ability of the model. The main body of the network stacks multiple Dirac residual blocks to learn the nonlinear mapping of high-frequency information between LR and HR images. In addition, a global skip connection realized by sub-pixel convolution learns a linear mapping of the low-frequency features of the input LR image. In the training stage, the model uses the Adam optimizer with an L1 loss function. Experiments compare our algorithm with other state-of-the-art models in terms of PSNR, SSIM, IFC, and visual quality on five benchmark datasets. The results show that the proposed model performs well in both subjective and objective evaluation.
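To make the idea concrete, the following is a minimal PyTorch sketch of what a Dirac-parameterized residual block and a sub-pixel global skip branch could look like; it is an illustration based only on the abstract, not the authors' implementation. All class and parameter names (DiracConv2d, DiracResBlock, GlobalSkipUpsample, alpha, beta, scale) are hypothetical, and the actual channel widths, initialization, and number of blocks are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiracConv2d(nn.Module):
    """Convolution whose effective kernel is a weighted sum of the identity
    (Dirac delta) kernel and a learned kernel, so trainable scalars balance
    the skip path and the convolution path inside a single layer."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        # Identity (Dirac) kernel: convolving with it reproduces the input.
        delta = torch.zeros_like(self.weight)
        nn.init.dirac_(delta)
        self.register_buffer("delta", delta)
        # Trainable scalars controlling the skip/convolution balance.
        self.alpha = nn.Parameter(torch.ones(1))
        self.beta = nn.Parameter(torch.full((1,), 0.1))
        self.pad = kernel_size // 2

    def forward(self, x):
        w = self.alpha * self.delta + self.beta * self.weight
        return F.conv2d(x, w, padding=self.pad)


class DiracResBlock(nn.Module):
    """Two Dirac convolutions with a ReLU in between; no explicit addition of
    input and output is needed, since the identity path lives in the kernels."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = DiracConv2d(channels)
        self.conv2 = DiracConv2d(channels)

    def forward(self, x):
        return self.conv2(F.relu(self.conv1(x)))


class GlobalSkipUpsample(nn.Module):
    """Sub-pixel (pixel-shuffle) branch carrying low-frequency content of the
    LR input directly to HR resolution (scale factor of 2 assumed here)."""

    def __init__(self, in_channels=3, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, lr):
        return self.shuffle(self.conv(lr))


if __name__ == "__main__":
    feats = torch.randn(1, 64, 24, 24)   # dummy LR feature map
    print(DiracResBlock(64)(feats).shape)        # torch.Size([1, 64, 24, 24])
    lr_img = torch.randn(1, 3, 24, 24)    # dummy LR image
    print(GlobalSkipUpsample(3, 2)(lr_img).shape)  # torch.Size([1, 3, 48, 48])
```

For training, torch.optim.Adam over the model parameters together with nn.L1Loss() would match the optimizer and loss described in the abstract.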


Acknowledgements

This research was supported by the National Natural Science Foundation of China (61573182) and by the Fundamental Research Funds for the Central Universities (NS2020025).

Author information


Corresponding author

Correspondence to Xin Yang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Yang, X., Xie, T., Liu, L. et al. Image super-resolution reconstruction based on improved Dirac residual network. Multidim Syst Sign Process 32, 1065–1082 (2021). https://doi.org/10.1007/s11045-021-00773-0
