
EGNet: enhanced gradient network for image deblurring

  • Original Paper
  • Journal: Signal, Image and Video Processing

Abstract

In recent years, convolutional neural networks have been increasingly applied to image deblurring. However, existing methods still suffer from several problems. First, most methods pay little attention to the middle layers of the network, so gradient information, which is important but poorly restored by existing methods, is lost, and the network's ability to reconstruct sharp images is reduced. Second, existing deblurring methods do not perform self-ensemble, which has been shown to further improve network performance in many fields such as image classification and single-image super-resolution. Third, the loss functions of existing networks directly use MAE, MSE, or adversarial losses, which are not specially designed for the edge and structural information in images. To address these problems, we enhance the gradient information of the feature maps after the encoder in the network, design a self-ensemble network to further improve performance, and redesign the loss function to adaptively redistribute the weight of each position in the gradient map according to the input image. Extensive experiments show that our proposed method achieves state-of-the-art image deblurring results on existing datasets and also outperforms other methods in generalization ability on the RealBlur dataset as well as on real blurry images.
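The adaptively weighted gradient loss described in the abstract can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions, not the paper's exact formulation: `gradient_magnitude` uses simple forward differences rather than whatever filter EGNet uses, and the weighting scheme (normalizing the input image's gradient magnitude into a per-pixel weight map) is an illustrative guess at "redistributing the weight of each position in the gradient map according to the input image".

```python
import numpy as np

def gradient_magnitude(img):
    # Forward-difference gradients, zero-padded at the image borders.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return np.sqrt(gx ** 2 + gy ** 2)

def weighted_gradient_loss(pred, target, blurry_input, eps=1e-6):
    # Hypothetical loss: the MAE between the gradient maps of the
    # restored and sharp images, weighted per position by the
    # (normalized) gradient magnitude of the blurry input, so that
    # edge regions contribute more to the loss than flat regions.
    w = gradient_magnitude(blurry_input)
    w = w / (w.max() + eps)
    diff = np.abs(gradient_magnitude(pred) - gradient_magnitude(target))
    return float(np.mean(w * diff))
```

In practice such a term would be added to a standard pixel-wise loss (e.g. MAE) rather than used alone, with the trade-off controlled by a scalar weight.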



Funding

This work was supported in part by the Natural Science Foundation of Heilongjiang Province of China (No. LH2021F026) and Fundamental Research Funds for the Central Universities (No. HIT.NSRIF202243).

Author information

Contributions

CZ and XD designed the research. CZ drafted the manuscript. XD helped organize the manuscript. CZ, XD and FG revised and finalized the paper.

Corresponding author

Correspondence to Xiaoguang Di.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhao, C., Di, X. & Gao, F. EGNet: enhanced gradient network for image deblurring. SIViP 17, 2045–2053 (2023). https://doi.org/10.1007/s11760-022-02418-6
