Cloud covering denoising through image fusion

https://doi.org/10.1016/j.imavis.2006.03.007

Abstract

This paper presents a solution to the cloud removal problem, based on a recently developed image fusion methodology that applies a 1-D pseudo-Wigner distribution (PWD) transformation to the source images and uses a pixel-wise cloud model. Both features can also be interpreted as a denoising method centered on a pixel-level measure. Such a procedure is able to process sequences of multi-temporal registered images affected by space-variant noise. The goal is to provide a clean 2-D image after removing the space-variant noise disturbing the set of multi-temporal registered source images. This is achieved by taking as reference a statistically parameterized model of a cloud prototype. Using this model, a pixel-wise measure of the noise degree of the source images can be calculated through their PWDs. This denoising procedure makes it possible to choose the noise-free pixels from the set of given source images. The applicability of the method to the cloud removal paradigm is illustrated with different sets of artificial and natural cloudy or foggy images, partially occluded by clouds in different regions. Another advantage of the present approach is its reduced computational cost, since a 1-D implementation has been preferred over a full 2-D implementation of the PWD.

Introduction

The recovery of an image from a degraded realization has been the subject of many contributions in the area of image restoration, under the names of image deblurring or image deconvolution [1]. When image restoration is accomplished without any “a priori” knowledge about the degradation process, we are dealing with blind image deconvolution methods [2]. If the blurring is not homogeneously distributed, the defocusing process affects different regions of the image with different strength. This scenario is referred to as space-variant blurring [3]. A special case of space-variant degradation occurs when multi-focus or multi-temporal images are available, and therefore image fusion methods [4], [5] can be applied. The method presented in this paper applies to multi-temporal images and describes a fusion algorithm based on transforming the source images with a pseudo-Wigner distribution (PWD), following a methodology recently developed by the authors. In a previous paper [6] the theoretical background of this fusion methodology was described and experimentally validated for defocused images coming from a digital camera. Later on, the method was successfully applied to multi-focus microscopic images [7]. The main contribution of this paper consists in extending the method to multi-temporal images for cloud removal purposes, under a similar fusion scenario. This case is formally different from those previously treated, where the origin of the degradation was image blurring. Out-of-focus blur affects digital camera images by segmenting the view, generally into two regions, foreground and background. Microscopic images present a very narrow depth of focus, causing a continuous change of the in-focus regions across equally spaced realizations. Here, the origin of the degradation is occluding noise, ranging from a light haze to a dense cloud. Therefore, this case has fundamental differences that require it to be treated separately. Several experiments with artificial and realistic images are presented to illustrate the performance of the method.

This paper is structured as follows. The mathematical background of the method, including a brief introduction to the pseudo-Wigner distribution, is described in Section 2 as the basis of the fusion process. Section 3 introduces the cloud removal problem and the particularities of our method. Some examples of image fusion are given in Section 4, together with a quantitative fusion quality assessment study performed using ground-truth test images. Finally, conclusions are drawn in Section 5.

Section snippets

Mathematical background

Modeling and restoring images affected by space-variant degradation is a challenging problem that still requires further attention in order to achieve a successful solution. One way of approaching the space-variant degradation case is by means of conjoint spatial/spatial-frequency representations [8]. One of the most representative methods is the Wigner distribution (WD), which is a bilinear (quadratic) signal representation introduced by Wigner [9]. A comprehensive discussion of the WD
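To make the role of the PWD concrete, the sketch below evaluates a 1-D pseudo-Wigner distribution pixel by pixel along an image row: for each pixel, the lag products z[n+m]·z*[n−m] are formed over a small symmetric window and Fourier transformed, yielding a local spectrum per pixel. This is a minimal sketch; the window length (N = 8), the reflective padding and the normalization are illustrative assumptions, not values taken from the paper.

import numpy as np

def pwd_1d(row, n_win=8):
    # Pixel-wise 1-D pseudo-Wigner distribution of a single image row.
    # Returns an array of shape (len(row), n_win): one local spectrum per pixel.
    row = np.asarray(row, dtype=np.complex128)
    half = n_win // 2
    padded = np.pad(row, half, mode="reflect")      # border handling is an assumption
    m = np.arange(-half, half)                      # lags m = -N/2 ... N/2 - 1
    pwd = np.zeros((row.size, n_win))
    for n in range(row.size):
        c = n + half                                # position of pixel n in the padded row
        r = padded[c + m] * np.conj(padded[c - m])  # lag products z[n+m] * conj(z[n-m])
        pwd[n] = np.fft.fft(r).real                 # DFT over the lag variable m
    return pwd

Applying such a function to every row of each registered source image gives a stack of per-pixel local spectra on which a pixel-wise selection can then operate.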

The cloud removal problem

Optical remote sensing, as in typical satellite applications, has to cope with the so-called cloud cover problem, an important difficulty affecting observation of the Earth's surface. Diverse techniques have been proposed to address it, such as thresholding [16] or wavelet decomposition [17]. Typically, a clean image can be produced by creating a cloud-free mosaic from several multi-temporal images related to the same area of interest [18]
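A minimal sketch of the cut-and-paste selection implied by such a mosaicking strategy is given below: for every pixel, the source image whose local PWD lies farthest from a reference cloud spectrum is taken as the least occluded and contributes that pixel to the fused result. The Euclidean distance used as the cloudiness measure and the name fuse_cut_and_paste are illustrative assumptions; the statistically parameterized cloud prototype of the actual method is more elaborate.

import numpy as np

def fuse_cut_and_paste(images, pwds, cloud_prototype):
    # images: list of K registered grayscale images, each of shape (H, W)
    # pwds: list of K per-pixel PWD arrays, each of shape (H, W, N)
    # cloud_prototype: length-N reference spectrum standing in for the cloud model
    stack = np.stack(images)                                    # (K, H, W)
    # Distance of every pixel's local spectrum to the cloud prototype:
    # a larger distance means a less cloud-like pixel.
    dist = np.stack([np.linalg.norm(p - cloud_prototype, axis=-1) for p in pwds])
    best = np.argmax(dist, axis=0)                              # least cloudy source per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                              # pixel-wise cut-and-paste fusion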

Experimental results

A realistic example is presented in Fig. 3. Here the camera is fixed, looking at a city landscape and taking multi-temporal photographs under different foggy conditions, ranging from clear weather to a thick mist. The method produces a cloud-free result, as shown in Fig. 4B.
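For the quantitative quality assessment against ground-truth test images mentioned in the paper outline, a simple reference-based score can be computed between the fused result and a cloud-free ground truth. The PSNR used in the sketch below is a common stand-in and is not necessarily the measure employed in the paper.

import numpy as np

def fusion_psnr(fused, ground_truth):
    # Peak signal-to-noise ratio of the fused image against a cloud-free reference,
    # assuming 8-bit gray levels (peak value 255).
    fused = np.asarray(fused, dtype=np.float64)
    ground_truth = np.asarray(ground_truth, dtype=np.float64)
    mse = np.mean((fused - ground_truth) ** 2)
    if mse == 0.0:
        return float("inf")                         # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)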

The results of the examples presented in Fig. 1, Fig. 2, Fig. 3 and Fig. 4 illustrate the performance of the method presented here. It is worth noting that an area of the resulting image cannot be better than the best view of

Conclusions

A new cloud removal method based on a pixel-wise PWD analysis of multi-temporal source images has been presented and applied to different sequences of multi-temporal images. This method is able to operate under a cut-and-paste fusion scheme when two or more multi-temporal registered images are available. Experimental results show that the method behaves well when earth patterns present a rich morphological structure. Quality decreases when the earth's morphology tends to be regular, approaching

Acknowledgements

This work has been partially supported by the following grants: TEC2004-00834, TEC2005-24739-E, TEC2005-24046-E and 2004CZ0009 from the Spanish Ministry of Education and Science, and by the PI040765 project from the Spanish Ministry of Health.

References (23)

  • A.N. Rajagopalan et al., Space-variant approaches to recovery of depth from defocused images, Computer Vision and Image Understanding (1997)
  • L.D. Jacobson et al., Joint spatial/spatial-frequency representation, Signal Processing (1988)
  • R.L. Lagendijk et al., Basic methods for image restoration and identification
  • D. Kundur et al., Blind image deconvolution, IEEE Signal Processing Magazine (1996)
  • D. Kundur, D. Hatzinakos, H. Leung, A novel approach to multispectral blind image fusion, in: B.V. Dasarathy (Ed)....
  • Z. Zhang et al., A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proceedings of the IEEE (1999)
  • S. Gabarda et al., Multifocus image fusion through the pseudo-Wigner distribution, Optical Engineering (2005)
  • S. Gabarda, G. Cristóbal, F. Sroubek, Image fusion schemes using local spectral methods, Applications of Computer...
  • E. Wigner, On the quantum correction for thermodynamic equilibrium, Physical Review (1932)
  • T.A.C.M. Claasen et al., The Wigner distribution – a tool for time–frequency analysis, parts I–III, Philips Journal of Research (1980)
  • J.C. O'Neill, P. Flandrin, W.J. Williams, On the existence of discrete Wigner distributions, EDICS Number: SPL. SP. 2.3,...

Submitted to Image and Vision Computing, Elsevier Science, 2004 (in revision).
