
Enhancing deep image prior with roughly clean pairs and spatially random sampling

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

In recent years, significant advancements in image denoising have been achieved using large-scale datasets and strong supervision. However, a key challenge is obtaining well-aligned pairs of noisy and clean training images for specific scenarios. Existing unsupervised methods can perform denoising without ground-truth images but often rely on impractical conditions like paired noisy images, leading to suboptimal performance. This work aims to enhance the deep image prior (DIP) method’s effectiveness and efficiency by introducing a hybrid strategy that combines supervised and unsupervised approaches, termed EDIP (enhancing deep image prior). Our method incorporates roughly clean image pairs and employs two advanced supervised denoisers on noisy images to generate seed images. A spatially random sampler creates multiple subtly varied sampled images for stable training. Additionally, we use the average of the seed images as a secondary target alongside the noisy input. The network is then trained using the standard DIP unsupervised approach to produce the denoised image. Notably, both input and output consist of nearly clean images, which limits the search space and simplifies the mapping task for the network. Consequently, the proposed method yields high-quality denoised images while improving execution efficiency. Extensive experiments on public real-world datasets demonstrate that EDIP outperforms the original DIP method and state-of-the-art supervised denoisers in denoising effectiveness, without requiring labeling.
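
To make the pipeline described above concrete, the following is a minimal PyTorch sketch of an EDIP-style training loop: two pretrained supervised denoisers produce roughly clean seed images, a spatially random sampler generates subtly varied inputs, and the network is trained DIP-style against both the noisy image and the average of the seeds. The sampler, the loss weighting, the iteration count, and all names (`spatial_random_sample`, `denoiser_a`, `denoiser_b`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def spatial_random_sample(img, swap_prob=0.1):
    """Illustrative spatially random sampler: replace a small random fraction
    of pixels with a neighbouring pixel to obtain a subtly varied copy of
    `img`, an (N, C, H, W) tensor."""
    mask = (torch.rand_like(img[:, :1]) < swap_prob).float()
    neighbour = torch.roll(img, shifts=(1, 1), dims=(-2, -1))
    return img * (1.0 - mask) + neighbour * mask


def edip_denoise(noisy, denoiser_a, denoiser_b, network,
                 num_iters=600, lr=1e-3, seed_weight=1.0):
    """EDIP-style sketch: seed images from two supervised denoisers,
    spatially random sampling for stable training, and a dual target
    (the noisy image plus the average of the seeds)."""
    with torch.no_grad():
        seed_a = denoiser_a(noisy)           # roughly clean seed image 1
        seed_b = denoiser_b(noisy)           # roughly clean seed image 2
        seed_avg = 0.5 * (seed_a + seed_b)   # secondary (nearly clean) target

    opt = torch.optim.Adam(network.parameters(), lr=lr)
    for _ in range(num_iters):
        # Draw a subtly varied, nearly clean input from one of the seeds.
        seed = seed_a if torch.rand(1).item() < 0.5 else seed_b
        out = network(spatial_random_sample(seed))
        # Dual-target loss: the noisy observation plus the averaged seeds.
        loss = F.mse_loss(out, noisy) + seed_weight * F.mse_loss(out, seed_avg)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return network(seed_avg)             # final denoised estimate
```

Because both the inputs and the targets are already nearly clean, the network only needs to learn a small residual mapping, which is why such a setup can converge in far fewer iterations than standard DIP, whose input is random noise.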


Availability of data and materials

The data and materials are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 62162043.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62162043.

Author information


Contributions

Shaoping Xu contributed to the conception of the study; Minghai Xiong wrote the main manuscript text; Changfei Zhou and Wuyong Tao contributed significantly to the analysis and manuscript preparation; Tianyu Dai conducted the experiments. All authors reviewed the manuscript.

Corresponding author

Correspondence to Shaoping Xu.

Ethics declarations

Ethical approval

Not applicable.

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Xu, S., Xiong, M., Zhou, C. et al. Enhancing deep image prior with roughly clean pairs and spatially random sampling. SIViP 19, 17 (2025). https://doi.org/10.1007/s11760-024-03624-0


  • DOI: https://doi.org/10.1007/s11760-024-03624-0
