Elsevier

Pattern Recognition

Volume 43, Issue 4, April 2010, Pages 1564-1576

Weighted and extended total variation for image restoration and decomposition

https://doi.org/10.1016/j.patcog.2009.10.011

Abstract

In various information processing tasks, obtaining regularized versions of noisy or corrupted image data is often a prerequisite for the successful use of classical image analysis algorithms. Image restoration and decomposition methods need to be robust if they are to be useful in practice; in particular, this property has to be verified in engineering and scientific applications. By robustness, we mean that the performance of an algorithm should not be affected significantly by small deviations from the assumed model. In image processing, total variation (TV) is a powerful tool for increasing robustness. In this paper, we define several concepts that are useful in robust restoration and robust decomposition, and we propose two extended total variation models: weighted total variation (WTV) and extended total variation (ETV). Both are generic approaches whose idea is to replace the TV penalty term with more general terms. The motivation is to increase the robustness of the ROF (Rudin, Osher, Fatemi) model and to prevent the staircasing effect it produces. Moreover, by rewriting non-convex sublinear regularizing terms as a WTV, we provide a new way to perform the minimization via Chambolle's well-known projection algorithm; the implementation is then more straightforward than the half-quadratic algorithm. The behavior of image decomposition methods is also a challenging problem, closely related to anisotropic diffusion. ETV leads to an anisotropic decomposition close to edges, improving robustness; it makes it possible to respect desired geometric properties during restoration and to control the regularization process more precisely. We also discuss why compression algorithms can serve as an objective way to evaluate the quality of an image decomposition.

Introduction

In many problems of image analysis, we have an observed image f representing a real scene. f contains texture v and/or noise w. Texture is characterized as a repeated pattern of small-scale details, and noise as uncorrelated random patterns. The rest of the image, u, contains homogeneous regions and sharp edges. The image processing task is to extract the most meaningful information from f. Given a noisy sample of some true data, the goal of restoration is to recover the best possible estimate of the original true data using only the noisy sample. Restoration is usually formulated as an inverse problem. The most basic image restoration problem is denoising, which is a well-known ill-posed problem. In essence, to determine a single solution, one introduces the constraint that the solution must be smooth, in the intuitive sense that similar inputs must correspond to similar outputs. The problem is then cast as a variational problem in which the variational integral depends both on the data and on the smoothness constraint (regularization term). Denoising models can also be regarded as a decomposition of the image into a structural part, u (homogeneous regions and sharp edges), and a remainder, f-u, which may contain oscillating patterns such as noise and texture, v+w. Following the ideas of Meyer, image decomposition can also separate the texture, v, from the noise, w. The textured component is completely represented using only two functions (g1,g2).

Three main successful approaches are usually considered to solve the denoising problem: wavelet-based techniques, nonlinear partial differential equations, and image decomposition.

Wavelet-based techniques: Unlike classical filtering-based methods, wavelet-based methods can be viewed as transform-domain point processing. They can achieve a good trade-off between noise reduction and feature preservation. Donoho and Johnstone [20] developed the method of wavelet shrinkage denoising, which attempts to reject noise by thresholding in the wavelet domain. The key idea is that the wavelet representation can separate the signal from the noise: it compacts the energy of the image into a small number of coefficients with large amplitudes, and it spreads the energy of the noise over a large number of coefficients with small amplitudes. These small coefficients are removed by a thresholding operation, which attenuates the noise energy. We use wavelet shrinkage in the image decomposition framework to extract the noise component.
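As an illustration of this shrinkage principle (a generic sketch, not the specific scheme used later in the paper), soft thresholding in the wavelet domain with the Donoho–Johnstone universal threshold can be written as follows, assuming the PyWavelets package; the function name and parameter values are ours.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_shrink(img, wavelet="db4", level=3):
    """Generic Donoho-Johnstone soft-thresholding sketch (illustrative only)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise level estimated from the finest diagonal detail band (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    # Universal threshold: removes the many small, noise-dominated coefficients.
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```

The shrinkage keeps the few large coefficients that carry image structure and attenuates the many small ones that mostly carry noise, which is exactly the energy-compaction argument above.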

Nonlinear partial differential equations (PDEs): The benefit of PDE-based regularization methods lies in their ability to smooth data in a nonlinear way, allowing the preservation of important image features (edges, corners, or other discontinuities). Many regularization schemes have been presented in the literature, particularly for the problem of image restoration [1], [26], [33]. Anisotropic diffusion along the edges is an essential property; it can be obtained by using tensor-based diffusions [32]. To allow sharp discontinuities (edges), an ideal choice is the space of functions of bounded variation (BV); many other spaces, such as Sobolev spaces, do not allow edges. Total variation (TV) regularization methods preserve edge information without any prior knowledge of the image's geometric details (the only assumption is that the image contains discontinuities). There is often a trade-off between the regularized output and the original data: a bad trade-off leads either to over-regularization or to a result that remains noisy.
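For concreteness, the kind of edge-preserving nonlinear smoothing referred to here can be sketched with a classical Perona–Malik-type explicit scheme; this illustrates the PDE family, not the model proposed in this paper, and the step size, diffusivity, and contrast parameter K are illustrative choices.

```python
import numpy as np

def nonlinear_diffusion(u, n_iter=50, dt=0.2, K=10.0):
    """Explicit Perona-Malik-type scheme: du/dt = div(g(|grad u|) grad u)."""
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)  # diffusivity, small across strong edges
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic boundaries for brevity).
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dW = np.roll(u, 1, axis=1) - u
        dE = np.roll(u, -1, axis=1) - u
        u += dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u
```

Because the diffusivity g decays with the local gradient magnitude, flat regions are smoothed strongly while edges diffuse little, which is the behavior the TV and diffusion models discussed below aim to formalize.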

Image decomposition: The third one is the decomposition of an image into three components: geometrical, texture and noise. Following the work of Meyer [2], [7], [24], [25], several models have been developed to carry out the decomposition of grayscale and multi-valued images. The textured component v is defined using a norm designed to give small value for the oscillatory functions representing texture. The main idea is to try to pull out texture by controlling this norm.

In various information processing tasks, for example image understanding applications, obtaining regularized versions of noisy or corrupted image data is often a prerequisite for the successful use of classical image analysis algorithms. Regularization is in fact one of the key operations, and many image regularization formalisms have been proposed in the literature for this purpose. The essential properties are the conservation of edges, the robustness of the model, an anisotropic behavior near edges so that desired geometric properties are respected during restoration, a generalization to multi-valued images, and an optimized algorithm. A robust procedure should be insensitive to departures from the underlying assumptions caused, for example, by strong gradients: it should perform well under those assumptions, and its performance should deteriorate gracefully as the situation departs from them. In other words, a robust procedure aims to make the solution insensitive to the influence of strong gradients. In this paper, we propose:

  • a weighted total variation. This extension generalizes the standard definition of TV by multiplying the generalized gradient by an adequate weight function (a sketch of this weighted form is given after this list). If the weight function is equal to one, we recover the classical TV norm. It leads to a projection algorithm for non-convex restoration (strong gradients are not penalized); the implementation is then more straightforward than the well-known half-quadratic algorithm described by Aubert [5];

  • an extended total variation to obtain an anisotropic decomposition close to edges, improve the robustness of the image decomposition, and steer the evolution of all channels in a multi-valued image (i.e., all channels are coupled). We use a correspondence between a shrinkage function and the TV diffusivity to define a new color denoising wavelet-based technique. This function is incorporated in the decomposition approach to obtain the noise component. Each channel uses information coming from the other channels to improve the denoising model and to obtain the color textured component.
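To fix ideas before the formal treatment in Sections 3 and 4, a plausible form of the weighted term described in the first item above is
$$\mathrm{WTV}(u) \;=\; \int_\Omega w(x)\,|Du|, \qquad w(x) > 0,$$
which reduces to the classical total variation when $w \equiv 1$; a weight that decreases on strong gradients penalizes sharp edges less than the standard ROF term does. This is our reading of the description above, not a quotation of the paper's definition.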

An approach to objectively quantify the decomposition schemes is presented. It uses the link among entropy, informativeness, and compressibility: a compression algorithm is an objective criterion for determining whether something is random or structured.

The paper is organized as follows. In Section 2, we provide a literature review of restoration and decomposition models, introduce the notations and definitions used in the rest of the paper, briefly review Chambolle's projection algorithm, which is an efficient method to solve the ROF problem, and recall the framework of total variation regularization and the space of functions of bounded variation (BV). In Section 3, we propose a weighted TV term (WTV) in place of the classic TV term; it leads to a projection algorithm for non-convex restoration. In Section 4, we introduce an extended total variation (ETV) and show how a Riemannian metric better adapted to the neighborhood of edges can be used to improve the behavior of the decomposition near edges. In Section 5, we propose an approach to objectively quantify the different decomposition schemes. We conclude the paper in Section 6 with some final remarks and future prospects.

Section snippets

BV space, ROF model and total variation

The space of functions of bounded variation (BV) [4] is a good framework for minimizers of the restoration models, since BV provides regularity of solutions while also allowing sharp discontinuities such as edges.

For a given function $u \in L^1(\Omega)$ on a bounded domain $\Omega \subset \mathbb{R}^N$, $N \ge 2$, the total variation of $u$ in $\Omega$ is defined by
$$\int_\Omega |Du| = \sup\left\{ \int_\Omega u \,\operatorname{div}(g)\,dx \;:\; g \in C_c^1(\Omega,\mathbb{R}^N),\ \|g\|_\infty \le 1 \right\}.$$
The space of functions of bounded variation is defined as
$$BV(\Omega) = \left\{ u \in L^1(\Omega) \;:\; \int_\Omega |Du| < \infty \right\}.$$
There is a useful coarea formulation linking the total…
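Since Chambolle's projection algorithm is reviewed in Section 2 and reused throughout the paper, a compact numerical sketch of it for the discrete ROF model may help. The following NumPy transcription of the published fixed-point iteration (a step size tau <= 1/8 guarantees convergence) is a generic illustration, not the authors' implementation; the function name and default parameters are ours.

```python
import numpy as np

def _grad(u):
    # Forward differences, zero at the last row/column (Neumann-type boundary).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def _div(px, py):
    # Discrete divergence, chosen as the negative adjoint of _grad.
    d = np.zeros_like(px)
    d[0, :] += px[0, :]; d[1:-1, :] += px[1:-1, :] - px[:-2, :]; d[-1, :] -= px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def chambolle_rof(f, lam=20.0, tau=0.125, n_iter=100):
    """Dual projection iteration for min_u TV(u) + ||u - f||^2 / (2*lam)."""
    f = f.astype(float)
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = _grad(_div(px, py) - f / lam)      # gradient of the dual objective
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)   # keeps the dual field |p| <= 1
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * _div(px, py)                   # denoised image u
```

The parameter lam balances fidelity and smoothing: larger values remove more noise but also flatten more texture, which is the classical regularization trade-off.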

Definitions and motivation

In this paper, we are interested in the non-convex minimization problem [3]:
$$\inf_{u \in BV(\Omega)} J(u), \qquad \text{where}\quad J(u) = N(f - Ru) + \int_\Omega \Phi(|Du|).$$
Here, $N$ denotes the norm of the Lebesgue space $L^2(\Omega)$ or of the Meyer space $G(\Omega)$. The set $\Omega$ is a bounded domain of $\mathbb{R}^N$, $N \ge 2$, $f$ is a given function in $L^2(\Omega)$, which may represent an observed image (for $N = 2$), and $R$ is a linear operator representing the blur. The first term in $J(u)$ measures the fidelity to the data, while the second one is a non-trivial smoothing term involving the…
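One natural way to connect this non-convex term to the weighted TV of the previous section is an iteratively reweighted scheme; we stress that the weight formula below is our gloss on the rewriting, not a quotation of the paper's derivation. At outer iteration $k$, one freezes
$$\int_\Omega \Phi(|Du|) \;\approx\; \int_\Omega w_k(x)\,|Du|, \qquad w_k(x) = \frac{\Phi\big(|Du_k(x)|\big)}{|Du_k(x)|} \quad (\text{with the usual safeguard where } |Du_k| = 0),$$
so that each outer step is a weighted ROF problem that a Chambolle-type projection can solve. For a sublinear, non-convex choice such as $\Phi(s) = s/(1+s/\beta)$ (an illustrative example, not taken from the paper), the weight decays on strong gradients, matching the goal of not penalizing them.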

Decomposition models

Grayscale image decomposition: In [6], Aujol and Chambolle propose a decomposition model which splits a grayscale image into three components: a first one, $u \in BV$, containing the structure of the image; a second one, $v \in G$, the texture; and a third one, $w \in E$, the noise. The discretized functional…
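For reference, the discretized three-component functional of [6] is usually written, up to notation, as
$$\inf_{(u,v,w)} \; J(u) \;+\; J^*\!\left(\frac{v}{\mu}\right) \;+\; B^*\!\left(\frac{w}{\delta}\right) \;+\; \frac{1}{2\lambda}\,\|f-u-v-w\|_{L^2}^2,$$
where $J$ is the total variation, $J^*$ its Legendre–Fenchel transform (the indicator of the unit ball of $G$, hence the constraint $\|v\|_G \le \mu$), and $B^*$ the indicator of the unit ball of the space $E$, hence $\|w\|_E \le \delta$. We restate this from the cited literature; the paper's exact notation may differ.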

Comparison of schemes

An approach to objectively quantify the three decomposition schemes is presented. We use the link between entropy, informativeness, and compressibility. Indeed, one of Shannon's fundamental insights in formulating information theory was that the entropy of a random variable simultaneously measures its information content (expressed in bits) and its lossless compressibility (to the same number of bits). An extreme variant of Shannon's insight was expressed by Kolmogorov in his notion of…
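As a concrete, deliberately naive illustration of this idea (not the protocol actually used in Section 5), one can compare how well a general-purpose lossless compressor shrinks each component of a decomposition: a structured component compresses well, a noise-like one hardly at all.

```python
import zlib
import numpy as np

def compressed_ratio(component):
    """Compressed size / raw size for an 8-bit component; lower means more structure."""
    raw = np.asarray(component, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

# A smooth ramp (structured) compresses far better than uniform random noise.
ramp = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
noise = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
print(compressed_ratio(ramp), compressed_ratio(noise))
```

The ramp/noise pair only illustrates why compressed size discriminates structure from randomness; applied to the u, v, and w components of a decomposition, the same measure provides an objective basis for comparison.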

Conclusion

Table 2 synthesizes the different methods presented in this paper and the connected existing approaches. We have proposed to extend TV regularization in order to improve its performance. The TV approach is a classical one that has been used for the design of robust image processing systems; in this approach, the feature points are likened to energy terms. The energies studied here are inspired by image restoration and decomposition. The idea is to replace the TV penalty term with a more…


References (33)

  • A. Chambolle, An algorithm for total variation minimization and applications, J. Math. Imaging Vision (2004)
  • X. Bresson, T. Chan, Fast minimization of the vectorial total variation norm and applications to color image...
  • J. Carter, Dual methods for total variation-based images restoration, Ph.D. Thesis, UCLA, Los Angeles, CA,...
  • T.F. Chan et al., The digital TV filter and nonlinear denoising, IEEE Trans. Image Process. (2001)
  • T.F. Chan et al., Image Processing and Analysis. Variational, PDE, Wavelet, and Stochastic Methods
  • T.F. Chan et al., A nonlinear primal–dual method for total variation-based image restoration, SIAM J. Sci. Comput. (1999)

About the Author—MICHEL MÉNARD is Professor of Signal and Image Processing at the Laboratory L3i (Informatique Image Interaction), University of La Rochelle (France), where he teaches digital signal and image processing and networking. His research in image processing has involved the investigation of tools for dynamic texture detection, fuzzy clustering, and multiple classifier systems. He has published a number of papers in international journals and conference proceedings and has contributed chapters to four books.

About the Author—ABDALLAH EL HAMIDI is Associate Professor of Applied Mathematics at the University of La Rochelle (Laboratory of Mathematics, Image and Applications - MIA). His mathematical research focuses on anisotropic partial differential equations and on variational methods in image processing.

About the Author—MATHIEU LUGIEZ received his Master's degree (Applied Informatics and Mathematics) from La Rochelle University (2007). He is working on his Ph.D. in Informatics and Applied Mathematics at La Rochelle University under the direction of Michel Ménard and Abdallah El-Hamidi. His work focuses on the extraction and characterization of dynamic textures with variational methods, mainly on the spatio-temporal aspects of this problem.

About the Author—CLARA GHANNAM received her Master's degree (Applied Mathematics) from the Lebanese University of Beirut (2006). She defended her Ph.D. thesis in Applied Mathematics and Image Processing at La Rochelle University under the direction of Abdallah El-Hamidi and Michel Ménard (2009). Her work focuses on image restoration and decomposition via variational methods.
