
AttentionDIP: attention-based deep image prior model to restore satellite and aerial images from gamma distributed speckle interference

  • Original article
  • Published in The Visual Computer

Abstract

Image restoration is an essential pre-processing step in most satellite imaging applications. Satellite imaging modalities such as Synthetic Aperture Radar (SAR) are prone to speckle distortions caused by constructive and destructive interference of the probing signals. Because speckle is data-correlated and multiplicative, its reduction is not trivial. Since speckle is not a pure noise intervention, a blind reduction process leads to spurious analysis at later stages; moreover, image details are liable to be compromised during such a noise reduction process. An attention-based deep image prior (DIP) model with a U-Net architecture is proposed in this work to address these setbacks. The attention block scales the features extracted by the encoder, and these are concatenated with the decoder features to obtain both low- and high-level features. The attention module incorporated in the model helps to extract the significant complex structures in SAR images. Further, the DIP model duly respects the noise distribution of speckle while performing the despeckling task. Various synthetic, natural, aerial, and satellite images are subjected to testing and verification, and the results obtained favor the proposed model. The quantitative analysis carried out using various statistical metrics in this study also reveals the restoration ability of the proposed method in terms of both despeckling and structure preservation.
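For context, fully developed L-look intensity speckle is commonly modeled as multiplicative gamma noise with unit mean (shape L, scale 1/L), which is the degradation model the abstract refers to. A minimal NumPy sketch of this model; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def add_gamma_speckle(image, looks=4, seed=0):
    """Corrupt a clean intensity image with multiplicative gamma speckle.

    For fully developed L-look intensity speckle, the noise follows a
    gamma distribution with shape L and scale 1/L (unit mean), so the
    expected value of the noisy image equals the clean image.
    """
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * speckle

clean = np.full((64, 64), 100.0)       # homogeneous test patch
noisy = add_gamma_speckle(clean, looks=4)
enl = noisy.mean() ** 2 / noisy.var()  # ENL estimate, close to the look count
```

On a homogeneous patch, the estimated equivalent number of looks (mean squared over variance) recovers the look count, which is the basis of the ENL-based evaluation used later in the appendix.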


[Figures 1–15 and Algorithm 1 appear at this point in the full article.]


Data availability

The data used in this manuscript are publicly available for academic use. The sources of the data and test images are detailed in the manuscript.

References

  1. Goodman, J.W.: Some fundamental properties of speckle. JOSA 66(11), 1145–1150 (1976)


  2. Gonzalez, R.C.: Digital image processing. Pearson Education India (2009)

  3. Lee, J.-S.: Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell. 2, 165–168 (1980)


  4. Kuan, D.T., Sawchuk, A.A., Strand, T.C., Chavel, P.: Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-7(2), 165–177 (1985)


  5. Frost, V.S., Stiles, J.A., Shanmugan, K.S., Holtzman, J.C.: A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Trans. Pattern Anal. Mach. Intell. PAMI-4(2), 157–166 (1982)


  6. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D 60(1–4), 259–268 (1992)


  7. Yu, Y., Acton, S.T.: Speckle reducing anisotropic diffusion. IEEE Trans. Image Process. 11(11), 1260–1270 (2002)


  8. Aja-Fernández, S., Alberola-López, C.: On the estimation of the coefficient of variation for anisotropic diffusion speckle filtering. IEEE Trans. Image Process. 15(9), 2694–2701 (2006)


  9. Aubert, G., Aujol, J.-F.: A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68(4), 925–946 (2008)


  10. Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2009)


  11. Lou, Y., Zhang, X., Osher, S., Bertozzi, A.: Image recovery via nonlocal operators. J. Sci. Comput. 42(2), 185–197 (2010)


  12. Buades, A., Coll, B., Morel, J.-M.: Non-local means denoising. Image Process. On Line 1, 208–212 (2011)


  13. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pp. 839–846. IEEE (1998)

  14. Febin, I.P., Jidesh, P.: Despeckling and enhancement of ultrasound images using non-local variational framework. Vis. Comput. 38, 1–14 (2022)

  15. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)


  16. Parrilli, S., Poderico, M., Angelino, C.V., Verdoliva, L.: A nonlocal SAR image denoising algorithm based on LLMMSE wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 50(2), 606–616 (2011)


  17. Yang, H., Li, J., Shen, L., Lu, J.: A convex variational model for restoring SAR images corrupted by multiplicative noise. Math. Probl. Eng. 2020, 1–19 (2020)


  18. Rasti, B., Chang, Y., Dalsasso, E., Denis, L., Ghamisi, P.: Image restoration for remote sensing: overview and toolbox. IEEE Geosci. Remote Sens. Mag. 10(2), 201–230 (2021)


  19. Jidesh, P., Banothu, B.: Image despeckling with non-local total bounded variation regularization. Comput. Electr. Eng. 70, 631–646 (2018)


  20. Shastry, A., Smitha, A., George, S., Jidesh, P.: Restoration and enhancement of aerial and synthetic aperture radar images using generative deep image prior architecture. J. Photogramm. Remote Sens. Geoinf. Sci. 90, 497–529 (2022)


  21. Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., Wang, X., Tang, X.: Residual attention network for image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164 (2017)

  22. Zhang, Q., Yuan, Q., Li, J., Yang, Z., Ma, X.: Learning a dilated residual network for SAR image despeckling. Remote Sens. 10(2), 196 (2018)


  23. Mousa, A., Badran, Y., Salama, G., Mahmoud, T.: Regression layer-based convolution neural network for synthetic aperture radar images: de-noising and super-resolution. Vis. Comput. 39, 1–12 (2022)

  24. Dalsasso, E., Denis, L., Tupin, F.: SAR2SAR: a semi-supervised despeckling algorithm for SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 14, 4321–4329 (2021)


  25. Lehtinen, J., Munkberg, J., Hasselgren, J., Laine, S., Karras, T., Aittala, M., Aila, T.: Noise2Noise: learning image restoration without clean data. arXiv preprint arXiv:1803.04189 (2018)

  26. Dalsasso, E., Denis, L., Tupin, F.: As if by magic: self-supervised training of deep despeckling networks with MERLIN. IEEE Trans. Geosci. Remote Sens. 60, 1–13 (2021)


  27. Molini, A.B., Valsesia, D., Fracastoro, G., Magli, E.: Speckle2void: deep self-supervised SAR despeckling with blind-spot convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 60, 1–17 (2021)


  28. Lalitha, V., Latha, B.: A review on remote sensing imagery augmentation using deep learning. Mater. Today Proc. 62, 4772–4778 (2022)


  29. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)

  30. Wang, P., Zhang, H., Patel, V.M.: Generative adversarial network-based restoration of speckled SAR images. In: 2017 IEEE 7th International Workshop on Computational Advances in Multi-sensor Adaptive Processing (CAMSAP), pp. 1–5. IEEE (2017)

  31. Rao, J., Ke, A., Liu, G., Ming, Y.: MS-GAN: multi-scale GAN with parallel class activation maps for image reconstruction. Vis. Comput. 39, 1–16 (2022)

  32. Dalsasso, E., Yang, X., Denis, L., Tupin, F., Yang, W.: SAR image despeckling by deep neural networks: from a pre-trained model to an end-to-end training strategy. Remote Sens. 12(16), 2636 (2020)


  33. Ulyanov, D., Vedaldi, A., Lempitsky, V.: Deep image prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454 (2018)

  34. Fan, W., Yu, H., Chen, T., Ji, S.: Oct image restoration using non-local deep image prior. Electronics 9(5), 784 (2020)


  35. Smitha, A., Jidesh, P.: A nonlocal deep image prior model to restore optical coherence tomographic images from gamma distributed speckle noise. J. Modern Opt. 68(18), 1002–1017 (2021)


  36. Shi, W., Du, H., Mei, W., Ma, Z.: (SARN) spatial-wise attention residual network for image super-resolution. Vis. Comput. 37, 1569–1580 (2021)


  37. Jetley, S., Lord, N.A., Lee, N., Torr, P.H.S.: Learn to pay attention. arXiv preprint arXiv:1804.02391 (2018)

  38. Tian, C., Yong, X., Li, Z., Zuo, W., Fei, L., Liu, H.: Attention-guided CNN for image denoising. Neural Netw. 124, 117–129 (2020)


  39. Zhao, Y., Zhai, D., Jiang, J., Liu, X.: ADRN: attention-based deep residual network for hyperspectral image denoising. In: ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2668–2672. IEEE (2020)

  40. Perera, M.V., Bandara, W.G.C., Valanarasu, J.M.J., Patel, V.M.: Transformer-based SAR image despeckling. In: IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium. IEEE (2022)

  41. Oktay, O., Schlemper, J., Le Folgoc, L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B. et al.: Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999 (2018)

  42. Zhang, J., Jiang, Z., Dong, J., Hou, Y., Liu, B.: Attention gate ResU-Net for automatic MRI brain tumor segmentation. IEEE Access 8, 58533–58545 (2020)


  43. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30 (2017)

  44. Saad, O.M., Oboué, Y.A.S.I., Bai, M., Samy, L., Yang, L., Chen, Y.: Self-attention deep image prior network for unsupervised 3-D seismic data enhancement. IEEE Trans. Geosci. Remote Sens. 60, 1–14 (2021)


  45. Gomez, L., Ospina, R., Frery, A.C.: Unassisted quantitative evaluation of despeckling filters. Remote Sens. 9(4), 389 (2017)


  46. Febin, I.P., Jidesh, P., Bini, A.A.: Noise classification and automatic restoration system using non-local regularization frameworks. Imaging Sci. J. 66(8), 479–491 (2018)


  47. University of California, Merced: Aerial photos (2020). Accessed 2 January 2022

  48. Yang, Y., Newsam, S.: Bag-of-visual-words and spatial extensions for land-use classification. In: Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 270–279 (2010)

  49. Sandia National Laboratories: Pathfinder radar ISR & SAR systems: SAR imagery (2021). Accessed 5 December 2021

  50. Wei, S., Zeng, X., Qu, Q., Wang, M., Su, H., Shi, J.: HRSID: a high-resolution SAR images dataset for ship detection and instance segmentation. IEEE Access 8, 120234–120254 (2020)


  51. Jet Propulsion Laboratory: Space radar image of Flevoland, Netherlands (2021). Accessed 5 December 2021


Acknowledgements

The authors wish to thank the Science and Engineering Research Board, Govt. of India, for providing financial support under grant no. CRG/2020/000476.

Author information


Corresponding author

Correspondence to P. Jidesh.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest in relation to this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A


1.1 First- and second-order statistical measures

This section details the computation of the first- and second-order statistical measures. The contents are extracted from [45] and provided here for completeness. An ideal despeckling filter produces a ratio image (the noisy image divided by the despeckled estimate) whose mean is close to unity and whose ENL is close to the ENL of the original (noisy) image. For each homogeneous region \(k\), the residues due to the deviation of the ENL and the mean from these ideal measures are calculated as

$$\begin{aligned} r_{\hat{\text {ENL}}}(k) = \frac{| \hat{\text {ENL}}_\text {noisy}(k) - \hat{\text {ENL}}_\text {ratio}(k) |}{\hat{\text {ENL}}_\text {noisy}(k)}, \end{aligned}$$
(22)
$$\begin{aligned} r_{\hat{\mu }}(k) = |1-\mu _\text {ratio}(k)|. \end{aligned}$$
(23)

These measures should yield zero for an ideal filter. Combining the ENL and mean estimates, the first-order residual \(r_{\hat{\text {ENL}},\hat{\mu }}\) quantifies the overall deviation from the statistical properties of speckle in the ratio image,

$$\begin{aligned} r_{\hat{\text {ENL}},\hat{\mu }} = \frac{1}{2}\sum _{k=0}^{K}\left( r_{\hat{\text {ENL}}}(k)+ r_{\hat{\mu }}(k)\right) . \end{aligned}$$
(24)
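The residues in Eqs. (22)–(24) can be sketched as follows. This is a hypothetical NumPy implementation, not the authors' code; the homogeneous region masks and the pairing of noisy and ratio images are assumed to be given:

```python
import numpy as np

def enl(region):
    """Equivalent number of looks: mean^2 / variance of a homogeneous region."""
    return region.mean() ** 2 / region.var()

def first_order_residual(ratio_image, noisy_image, region_masks):
    """Eqs. (22)-(24): deviation of the per-region ENL and mean of the
    ratio image from the ideal speckle statistics of the noisy image."""
    residual = 0.0
    for mask in region_masks:
        enl_noisy = enl(noisy_image[mask])
        enl_ratio = enl(ratio_image[mask])
        r_enl = abs(enl_noisy - enl_ratio) / enl_noisy   # Eq. (22)
        r_mu = abs(1.0 - ratio_image[mask].mean())       # Eq. (23)
        residual += 0.5 * (r_enl + r_mu)                 # Eq. (24)
    return residual

# Sanity check: for an ideal filter, the ratio image is pure unit-mean
# speckle, so the residual should be close to zero.
rng = np.random.default_rng(1)
noisy = 100.0 * rng.gamma(4.0, 0.25, size=(64, 64))
ideal_ratio = noisy / 100.0
residual = first_order_residual(ideal_ratio, noisy, [np.ones((64, 64), bool)])
```

The sanity check simulates a perfect restoration of a homogeneous 4-look patch, for which both residues nearly vanish.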

The second-order statistic measures the homogeneity evaluated from the normalized co-occurrence matrix \(p(i,j)\),

$$\begin{aligned} \begin{aligned} h&= \sum _i \sum _j \frac{p(i,j)}{1+(i-j)^2},\\ \delta h&= \frac{100\,|h_0 - \overline{h_g}|}{h_0}, \end{aligned} \end{aligned}$$
(25)

where \(h_0\) is the homogeneity of the original ratio image and \(\overline{h_g}\) is the mean homogeneity obtained by randomly permuting its values. Thus \(\delta h\) captures the amount of structure remaining in the ratio image and should therefore be minimal for structure-preserving despeckling models.

Finally, the first- and second-order statistical measures are combined to define the estimate \(M\),

$$\begin{aligned} M = r_{\hat{\text {ENL}},\hat{\mu }}+\delta h . \end{aligned}$$
(26)

For a perfect despeckling model, \(M\) should be zero; larger values of \(M\) indicate deviation from the ideal condition.
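The second-order measure can be sketched likewise. This hypothetical implementation quantizes the image to a few gray levels and builds the co-occurrence matrix from horizontally adjacent pixel pairs, which is one of several valid choices; the level count and pair direction are assumptions, not from [45]:

```python
import numpy as np

def homogeneity(img, levels=16):
    """Homogeneity of the normalized co-occurrence matrix p(i, j),
    computed here over horizontal pixel pairs (first part of Eq. 25)."""
    q = np.clip((img / img.max() * levels).astype(int), 0, levels - 1)
    p = np.zeros((levels, levels))
    np.add.at(p, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count pairs
    p /= p.sum()
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + (i - j) ** 2))

def second_order_residual(ratio_image, permutations=20, seed=0):
    """delta-h of Eq. (25): relative gap between the homogeneity of the
    ratio image and its mean over random spatial permutations.
    M of Eq. (26) is then this value plus the first-order residual."""
    rng = np.random.default_rng(seed)
    h0 = homogeneity(ratio_image)
    hg = [homogeneity(rng.permutation(ratio_image.ravel())
                      .reshape(ratio_image.shape))
          for _ in range(permutations)]
    return 100.0 * abs(h0 - np.mean(hg)) / h0

# A structureless (pure speckle) ratio image should score far lower than
# a ratio image that retains structure (here, vertical stripes).
rng = np.random.default_rng(2)
dh_noise = second_order_residual(rng.gamma(4.0, 0.25, size=(64, 64)))
dh_stripes = second_order_residual(
    np.tile(np.repeat(np.arange(1.0, 9.0), 8), (64, 1)))
```

Permuting a structureless image barely changes its homogeneity, so \(\delta h\) stays small; a striped image loses most of its homogeneity under permutation, yielding a large \(\delta h\), consistent with the structure-leakage interpretation above.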

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Shastry, A., George, S., Bini, A.A. et al. AttentionDIP: attention-based deep image prior model to restore satellite and aerial images from gamma distributed speckle interference. Vis Comput 40, 5219–5239 (2024). https://doi.org/10.1007/s00371-023-03101-8

