Dynamic selection of proper kernels for image deblurring: a multistrategy design

Original article · The Visual Computer

Abstract

Non-blind deblurring can restore an image degraded by a single, known blur by applying a suitable mathematical model, but it performs poorly when an image contains multiple blurs. Blind deblurring can remove various kinds of blur from an image; however, because the blur in different regions may arise from different causes, it is difficult to locate and remove every blur accurately and to recover fine texture details. Weighing the strengths and weaknesses of both approaches, we propose a neural network that dynamically selects suitable blur kernels for deblurring. In the proposed method, the most appropriate kernels are extracted by joint training on multiple datasets, each containing a specific type of blur, so that both local and global regions of one image can be handled. In addition, to further improve restoration quality, we design an edge-attention mechanism that compensates for the edges and structures of specific objects. Experimental results indicate that dynamic kernel selection combined with the edge-attention algorithm not only improves PSNR and SSIM but also outperforms state-of-the-art methods.
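As a rough illustration of the kernel-selection idea (this is a toy sketch, not the authors' trained network), the snippet below picks, for an image region, the candidate blur kernel whose re-blurring of a provisional sharp estimate best explains the observed blurry patch. The helper names `conv2d_same` and `select_kernel` are hypothetical and introduced here only for illustration; the paper learns this selection jointly across datasets rather than by explicit search.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same' 2-D convolution with zero padding (fine for small kernels)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="constant")
    flipped = kernel[::-1, ::-1]  # flip for true convolution (vs. correlation)
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def select_kernel(blurry_patch, sharp_estimate, candidate_kernels):
    """Return the index of the candidate kernel whose re-blurring of the
    sharp estimate best matches the observed blurry patch (lowest MSE)."""
    errors = [np.mean((conv2d_same(sharp_estimate, k) - blurry_patch) ** 2)
              for k in candidate_kernels]
    return int(np.argmin(errors))
```

Applied per region rather than per image, this kind of criterion lets different parts of one photograph be deblurred with different kernels, which is the gap in single-kernel non-blind methods that the paper targets.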



Data availability

The authors declare that all data presented in this work were generated during the course of the study; any external sources are appropriately referenced in the manuscript.

Code availability

The code is available at https://github.com/zhangzhichao19020123/MEANet.


Funding

This research was supported by the Postgraduate Innovation Project of Double First-Class Universities and by the National Natural Science Foundation of China (Grant No. 62001493).

Author information


Corresponding authors

Correspondence to Jinsheng Deng or Weili Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, Z., Chen, H., Yin, X. et al. Dynamic selection of proper kernels for image deblurring: a multistrategy design. Vis Comput 39, 1375–1390 (2023). https://doi.org/10.1007/s00371-022-02415-3
