
Multi-scale Unet-based feature aggregation network for lightweight image deblurring

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Single image deblurring has made remarkable progress, with convolutional neural networks exhibiting extraordinary performance. However, existing methods achieve high-quality reconstruction through excessive parameter counts and extremely deep network structures, which raises the demand for computational resources and memory and makes deployment on resource-constrained devices challenging. Numerous experiments indicate that current models still carry redundant parameters. To address these issues, we introduce a multi-scale Unet-based feature aggregation network (MUANet). The architecture is built on a single-stage Unet, which significantly reduces network complexity. We design a lightweight Unet-based attention block, built around a progressive feature extraction module, to strengthen feature extraction across multi-scale attention modules. Motivated by the strong performance of self-attention, we further propose a self-attention mechanism based on the Fourier transform, together with a depthwise convolutional feed-forward network, to enhance the network's feature extraction capability. This module contains extractors with different receptive fields that operate at different spatial scales and capture contextual information. By aggregating multi-scale features from the different attention mechanisms, our method learns a rich set of features that retain multi-scale contextual information and high-resolution spatial details. Extensive experiments show that the proposed MUANet achieves competitive results in both qualitative and quantitative evaluations of lightweight deblurring.
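The article itself does not include code. As a rough illustration of the general idea behind Fourier-transform-based feature mixing (not the authors' actual implementation; the function name, shapes, and weight layout below are our own assumptions), one can modulate features per frequency in the spectral domain, which gives a global receptive field at low cost:

```python
import numpy as np

def fourier_feature_mix(x, w_real, w_imag):
    """Mix spatial features in the frequency domain.

    x: (C, H, W) feature map.
    w_real, w_imag: learnable per-frequency weights of shape
    (C, H, W // 2 + 1), matching the output of np.fft.rfft2.
    """
    X = np.fft.rfft2(x, axes=(-2, -1))            # to frequency domain
    X = X * (w_real + 1j * w_imag)                # per-frequency modulation
    # back to the spatial domain, preserving the input resolution
    return np.fft.irfft2(X, s=x.shape[-2:], axes=(-2, -1))

# Toy usage: identity weights (all-ones real part, zero imaginary part)
# leave the feature map unchanged up to floating-point error.
x = np.random.randn(4, 8, 8)
w_real = np.ones((4, 8, 5))
w_imag = np.zeros((4, 8, 5))
y = fourier_feature_mix(x, w_real, w_imag)
assert np.allclose(x, y)
```

In a trained network the weights would be learned parameters; because every output pixel depends on every input pixel through the FFT, this acts as a cheap stand-in for global self-attention.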




Author information

Correspondence to Shaoyan Gai.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, Y., Gai, S. & Da, F. Multi-scale Unet-based feature aggregation network for lightweight image deblurring. SIViP 19, 22 (2025). https://doi.org/10.1007/s11760-024-03657-5
