
Real-World Image Deblurring via Unsupervised Domain Adaptation

  • Conference paper

Advances in Visual Computing (ISVC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14362)

Abstract

Most deep learning models for image deblurring are trained on pairs of clean images and their blurry counterparts, where the blurry inputs are artificially generated. However, such synthesized blurry images cannot cover all types of real-world blur. Even between two synthetic datasets, the blur type, illumination, and other important image parameters can differ. Consequently, the performance of most existing deblurring models drops when they are applied to real-world images or to artificially blurred images from a different synthetic dataset. Very few previous deblurring works consider the gap among blurry images from different domains. Inspired by the recent success of unsupervised domain adaptation (UDA) on image classification tasks, we develop UDA-Deblur, a novel deblurring framework that uses domain alignment to attenuate the effects of this gap. In our work, channel attention modules are adopted to exploit inter-channel relationships among features, and multi-scale feature classifiers are designed to discriminate domain differences. UDA-Deblur is trained adversarially to align the feature distributions of the source domain and the target domain. We provide thorough quantitative and qualitative analyses to show the state-of-the-art performance of UDA-Deblur. First, we evaluate the proposed UDA-Deblur on synthetic datasets related to real-life scenarios, where it achieves satisfactory deblurring results. We further demonstrate that our approach also outperforms prior models on real-world blurry images. For a convincing comparison, we carefully design experiments on the GoPro, HIDE, and RealBlur datasets. More importantly, this is the first work to consider real-world image deblurring from a feature-level domain adaptation perspective.
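
The abstract names two technical ingredients: channel attention on deblurring features and multi-scale feature classifiers trained adversarially to align source (synthetic) and target (real-world) feature distributions. The sketch below shows how such pieces could fit together in PyTorch. It is a minimal illustration under assumptions, not the authors' implementation; every module name, kernel size, and loss formulation here is hypothetical.

# Minimal sketch (PyTorch) of the two components highlighted in the abstract:
# (1) an ECA-style channel attention block and (2) a per-scale domain classifier
# trained adversarially to align source (synthetic) and target (real) features.
# All names, sizes, and the loss formulation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Re-weight feature channels using a lightweight 1-D convolution."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                                 # x: (B, C, H, W)
        w = F.adaptive_avg_pool2d(x, 1)                   # (B, C, 1, 1) global descriptor
        w = self.conv(w.squeeze(-1).transpose(1, 2))      # 1-D conv across channels
        w = torch.sigmoid(w.transpose(1, 2).unsqueeze(-1))
        return x * w                                      # channel-wise re-weighting

class DomainClassifier(nn.Module):
    """Predict whether a feature map comes from the source or the target domain."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(channels, 1, 3, stride=2, padding=1),
        )

    def forward(self, feat):
        return self.net(feat)                             # per-location domain logits

def adversarial_alignment_loss(disc, src_feat, tgt_feat):
    """Least-squares-style adversarial losses for feature alignment."""
    src_logits = disc(src_feat.detach())
    tgt_logits = disc(tgt_feat.detach())
    # Discriminator step: label source features 1, target features 0.
    d_loss = F.mse_loss(src_logits, torch.ones_like(src_logits)) + \
             F.mse_loss(tgt_logits, torch.zeros_like(tgt_logits))
    # Backbone step: make target features indistinguishable from source ones.
    fool_logits = disc(tgt_feat)
    g_loss = F.mse_loss(fool_logits, torch.ones_like(fool_logits))
    return d_loss, g_loss

In a full training loop, one such classifier would typically be attached at each feature scale of the deblurring backbone, and the adversarial alignment term would be combined with a supervised reconstruction loss computed on the paired source data only, since sharp ground-truth images are unavailable in the target domain.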

Author information

Corresponding author

Correspondence to Hanzhou Liu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Liu, H., Li, B., Lu, M., Wu, Y. (2023). Real-World Image Deblurring via Unsupervised Domain Adaptation. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2023. Lecture Notes in Computer Science, vol 14362. Springer, Cham. https://doi.org/10.1007/978-3-031-47966-3_12

  • DOI: https://doi.org/10.1007/978-3-031-47966-3_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47965-6

  • Online ISBN: 978-3-031-47966-3

  • eBook Packages: Computer Science, Computer Science (R0)
