LLp norm regularization based group sparse representation for image compressed sensing recovery

https://doi.org/10.1016/j.image.2019.07.021

Highlights

  • Nonconvex LLp norm regularization has been proposed for CS image recovery via GSR.

  • An efficient algorithm based on SBI and MM has been developed to solve GSR-LLp .

  • RGSR-LLp has been proposed to cope with impulsive noisy CS measurements.

  • It has been shown that GSR-LLp and RGSR-LLp outperform other CS recovery methods.

  • GSR-LLp effectively recovers images from a low number of CS measurements.

Abstract

One important challenge in image compressed sensing (CS) recovery is to develop a sparsity inducing model that reflects the image priors appropriately and hence yields high quality recovery results. Recent advances have suggested that group sparse representation based models, which exploit the nonlocal self-similarity prior, lead to superior results in image CS recovery. In this paper, we propose a CS recovery method via group sparse representation with nonconvex LLp norm regularization (GSR-LLp). In the proposed method, the nonconvex LLp norm with 0 < p ≤ 1 is introduced as a new sparsity metric to better promote the sparsity of the group coefficients, rather than the l0 norm. Furthermore, principal component analysis (PCA) is utilized to learn an adaptive orthogonal dictionary for each group. To solve the GSR-driven LLp minimization problem, an efficient algorithm based on the split Bregman framework and the Majorization–Minimization (MM) algorithm is developed. Moreover, the proposed model is combined with a robust M-estimate to cope with the case where measurements are corrupted by impulsive noise. In this case, we substitute the l2 norm data fidelity with the Welch M-estimate, which has shown the advantage of robustness against heavy-tailed impulsive noise. We develop an efficient scheme based on the split Bregman framework and half-quadratic (HQ) theory to solve the resulting optimization problem (called RGSR-LLp). Extensive experimental results show the effectiveness of the proposed methods compared with state-of-the-art methods in CS image recovery.

Introduction

In recent years, Candès et al. [1], [2], and Donoho [3] developed a new framework called compressed sensing (CS) for simultaneous sampling and compression of signals at sub-Nyquist rates. They showed that, under certain conditions, a signal can be recovered exactly from a small set of measurements, possibly corrupted by noise, by solving an optimization problem. Let u ∈ ℝ^N be an image in vectorized form. A number M (M ≪ N) of linear and non-adaptive measurements of u are acquired through the following affine transformation:

y = Φu + e,  (1)

where y ∈ ℝ^M is the measurement vector, Φ ∈ ℝ^(M×N) is a measurement matrix, and e ∈ ℝ^M is the measurement noise vector. The usual choice for the measurement matrix Φ is a random matrix [4]. The goal is to recover the unknown image u from y in Eq. (1). Since M ≪ N, the reconstruction of u from y is ill-posed in general. However, if u is sufficiently sparse, it is actually possible to recover the image. Assuming that the noise is bounded in the l2 norm, ‖e‖₂ ≤ ϵ, the recovery of u is formulated as the following constrained optimization problem:

min_u Ψ(u)  s.t.  ‖y − Φu‖₂ ≤ ϵ,  (2)

where Ψ(·) is called the sparsity inducing model, and ‖y − Φu‖₂ is known as the data fidelity model. In some cases, a few additional constraints can be imposed on problem (2) to further restrict the feasible set. By using an appropriate regularization parameter λ, problem (2) can be converted into the following unconstrained problem:

min_u (1/2)‖y − Φu‖₂² + λΨ(u).  (3)
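To make the ill-posedness concrete, the sketch below builds the measurement model of Eq. (1) with an illustrative random Gaussian Φ and a synthetic sparse signal (sizes N, M and the sparsity level are assumptions for demonstration, not values from the paper), and shows that plain least squares cannot invert the underdetermined system:

```python
import numpy as np

# Sketch of the CS measurement model y = Phi @ u + e (Eq. (1)).
# N, M and the sparsity level are illustrative choices only.
rng = np.random.default_rng(0)
N, M = 256, 64                      # M << N: far fewer measurements than pixels

u = np.zeros(N)                     # a synthetic sparse "image" in vectorized form
u[rng.choice(N, size=8, replace=False)] = rng.standard_normal(8)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian measurement matrix
e = 0.01 * rng.standard_normal(M)                # bounded measurement noise
y = Phi @ u + e                                  # the M linear measurements

# The system is underdetermined: the minimum-norm least-squares solution
# spreads energy over all N entries and does not recover the sparse u,
# which is why a sparsity inducing model is needed.
u_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(y.shape, np.linalg.norm(u - u_ls) > np.linalg.norm(e))
```

Sparsity-exploiting solvers such as those in problems (2)–(3) close this gap by restricting the feasible set.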

The sparsity inducing model, which plays a key role in achieving high quality images, is built on the image prior knowledge. Usually two types of priors are considered, namely local smoothness and nonlocal self-similarity. Local smoothness based models, called local sparsity models, are built on the assumption that images are locally smooth except at the edges. Early local models mainly consider the prior on the level of pixels [5], such as total variation (TV) model [6], [7], [8]. These models demonstrate high effectiveness in reconstructing smooth areas. However, they cannot deal well with image details and fine structures, and tend to over-smooth images [9]. Thereafter, patch-based models were proposed, which have shown promising performance in image restoration [10], [11], [12], [13]. The main idea is to decompose the image into overlapped patches and represent each patch by a few elements from a basis set called dictionary, which is learned from natural images. The learned dictionaries enjoy the advantage of being better adapted to image local structures, thereby enhancing the sparsity. However, in the process of dictionary learning and sparse coding, each patch is considered independently, which ignores the relationships between similar patches [14].

Recently, it has been shown that nonlocal self-similarity based models, called nonlocal sparsity models, are effective in preserving details and demonstrate great advantages in image reconstruction [15]. The nonlocal self-similarity prior depicts the repetitiveness of higher level patterns (e.g., textures and structures) globally positioned in images [9]. To exploit this prior, the image is divided into overlapped patches. Then, for each patch, a set of similar patches is searched within a search window to form a data matrix, called a group. The patches in a group are correlated; these strong correlations allow one to develop a much more accurate sparsity inducing model by exploiting nonlocal redundancies [16].
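The grouping step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: for a reference patch, the c most similar patches (by Euclidean distance) inside an L×L search window are stacked as columns of a group matrix; the patch size, window size, and c below are illustrative choices:

```python
import numpy as np

def group_patches(img, ref_yx, b=8, L=20, c=10):
    """Stack the c patches most similar to the reference patch as columns."""
    y0, x0 = ref_yx
    ref = img[y0:y0 + b, x0:x0 + b].ravel()
    H, W = img.shape
    candidates = []
    # scan all patch positions inside the L x L search window
    for y in range(max(0, y0 - L // 2), min(H - b, y0 + L // 2) + 1):
        for x in range(max(0, x0 - L // 2), min(W - b, x0 + L // 2) + 1):
            patch = img[y:y + b, x:x + b].ravel()
            candidates.append((np.sum((patch - ref) ** 2), patch))
    candidates.sort(key=lambda t: t[0])            # most similar first
    return np.stack([p for _, p in candidates[:c]], axis=1)   # b*b x c group

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
G = group_patches(img, (30, 30))
print(G.shape)   # each column is one vectorized similar patch
```

The reference patch itself has distance zero, so it always appears as the first column of the group.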

Several nonlocal sparsity model based methods have been proposed for high fidelity image CS recovery [9], [17], [18], [19], [20]. In [17], a framework for CS image recovery via a collaborative sparsity model (CoSM), which simultaneously enforces local and nonlocal sparsity models in an adaptive hybrid space-transform domain, was proposed. In [9], a joint statistical model (JSM), closely related to CoSM, was proposed for image restoration applications. Nasser et al. [18], [19] proposed two CS image recovery methods using a joint adaptive sparsity measure (JASM) [18] and joint adaptive sparsity regularization (JASR) [19], which enforce both local and nonlocal sparsity models in the transform domain. The nonlocal sparsity models in CoSM, JSM, JASM and JASR are characterized by means of the coefficients achieved by applying a 3D transform to the 3D array generated from the group. In [20], a group-based sparse representation (GSR) model for image restoration was proposed, which has shown state-of-the-art performance. The GSR model assumes that each group can be accurately represented by a few elements from a self-adaptive learned dictionary. Due to its particular dictionary learning method by singular value decomposition (SVD), the GSR based approach and a standard low-rank approximation method [21] are nearly equivalent. The nonlocal sparsity model given in [20] is described as the l0 norm of all sparse codes achieved by applying an adaptively learned dictionary on the groups. Recent studies [22], [23] have shown that better CS recovery performance can be obtained by exploiting continuous and nonconvex functions, rather than the l0 norm, as a metric to promote sparsity. In [22], an image CS recovery method using the GSR model with nonconvex weighted lp (0 < p < 1) minimization (denoted as GSR-NCR) was proposed. In the group-based sparse representation model proposed in [23], the nonconvex log-sum function was utilized to promote sparsity of the coefficients achieved by performing two fixed transforms on the group. For low-rank based models, the truncated nuclear norm (TNN) [24], weighted nuclear norm (WNN) [25], weighted Schatten p-norm (WSN) [26] and logdet function [27] have been proposed as surrogates for the rank function (instead of the convex nuclear norm).
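The near-equivalence between an SVD-learned dictionary and low-rank approximation can be illustrated as follows. This is a hedged sketch (not the paper's exact algorithm): for a group matrix X whose columns are similar vectorized patches, the SVD X = U S Vᵀ yields an orthogonal dictionary U adapted to the group, and thresholding the singular values is exactly a low-rank approximation:

```python
import numpy as np

# Build a group of 16 near-identical patches: a rank-1 matrix plus small noise.
rng = np.random.default_rng(1)
base = rng.standard_normal(64)                    # one 8x8 patch, vectorized
X = np.outer(base, np.ones(16)) + 0.05 * rng.standard_normal((64, 16))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
A = np.diag(s) @ Vt                  # group coefficients over the dictionary U

s_thr = np.where(s > 0.5 * s[0], s, 0.0)  # keep only the dominant components
X_lr = U @ np.diag(s_thr) @ Vt            # low-rank (sparsified) reconstruction

# Before thresholding, U @ A reproduces X exactly; after thresholding,
# the group is represented by very few atoms of U.
print(np.count_nonzero(s_thr), np.linalg.norm(X - U @ A))
```

Because the patches in a group are highly similar, the coefficient energy concentrates in the first few singular values, which is what makes the group representation sparse.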

In addition to the sparsity inducing model, the data fidelity model also affects CS recovery performance. The data fidelity model reflects the statistics of the noise present in data acquisition systems (known as measurement noise). The noise is usually assumed bounded in the l2 norm, and recovery methods are developed based on this assumption. Results for the bounded noise case apply directly to Gaussian noise, since Gaussian noise is essentially bounded [28]; thus, the l2 norm data fidelity is optimal for Gaussian noise. However, in real applications, the noise often exhibits non-Gaussian properties [29]. A representative type is impulsive noise, which is characterized by a small percentage of samples having extremely large values [30]. When the measurements are corrupted by impulsive heavy-tailed noise, l2 norm data fidelity based CS recovery methods fail because the l2 norm is highly sensitive to outliers in the measurements [31]. Therefore, it is necessary to develop a data fidelity model matched to non-Gaussian distributed noise, which yields better recovery performance. To this end, and based on the theory of robust statistics [32], the l2 norm is replaced with a sub-quadratic function of the noise.

In recent years, several data fidelity models have been developed to suppress large errors in the measurements for CS recovery. In [33], the l1 norm was employed as the data fidelity model and combined with an l1 norm penalty to obtain an l1–l1 optimization problem. In [34], the Lorentzian norm (or LL2 norm) was used as the data fidelity model. The optimization problem in [34] is formulated as Lorentzian Basis Pursuit (Lorentzian BP), which minimizes the l1 norm of a sparse signal subject to a nonlinear constraint based on the Lorentzian norm. Later, the authors of [35] substituted the Lorentzian norm data fidelity for the l2 norm data fidelity employed by traditional iterative hard thresholding (IHT) algorithms. In [30], an M-estimate [32] with the Huber function (Huber M-estimate) was employed to replace the l2 norm data fidelity and gain more robust performance. In [36], the generalized lp norm (0 ≤ p < 2) was utilized as the metric for the residual error to obtain an lp–l1 optimization problem, which is solved by the alternating direction method (ADM). Although these approaches outperform traditional CS recovery algorithms in impulsive environments, they cannot deal well with image details and fine structures, since they only exploit the local sparsity model. On the other hand, the nonlocal sparsity based CS recovery methods in the literature assume Gaussian noise and thus perform poorly when the measurements are corrupted by impulsive noise.
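The intuition behind these sub-quadratic losses can be seen from their half-quadratic (reweighting) weights. The sketch below compares the standard IRLS/HQ weight of the Huber function, w(r) = min(1, c/|r|), with that of the Welch function, w(r) = exp(−r²/c²): the l2 loss would weight every residual equally, while both robust weights shrink for large residuals. The scale c is an illustrative choice, not a value from the paper:

```python
import numpy as np

def huber_weight(r, c=1.0):
    # HQ/IRLS weight of the Huber loss: quadratic for small r, linear beyond c
    return np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))

def welch_weight(r, c=1.0):
    # HQ weight of the Welch loss: decays exponentially with r^2
    return np.exp(-(r / c) ** 2)

residuals = np.array([0.1, 1.0, 10.0, 100.0])   # the last two mimic impulsive outliers
print(huber_weight(residuals))   # gradually down-weights large residuals
print(welch_weight(residuals))   # essentially ignores gross outliers
```

The faster decay of the Welch weight is what motivates replacing the l2 data fidelity with the Welch M-estimate in the impulsive-noise setting discussed later in the paper.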

Contribution of this paper: In this paper, we propose an image CS recovery method that exploits the nonlocal self-similarity prior to achieve more accurate performance. We first introduce a new sparsity metric to be utilized in the group-based sparse representation model. Our sparsity model is characterized by means of all sparse codes achieved by applying a principal component analysis (PCA)-based dictionary on the groups. To obtain sparse codes more accurately, we utilize the nonconvex LLp norm with 0 < p ≤ 1 to promote sparsity of the coefficients, rather than the l0 norm. To solve the GSR-driven LLp minimization problem, an efficient algorithm based on the split Bregman framework [37] is developed. Under this framework and the Majorization–Minimization (MM) algorithm [38], the sub-problem related to the LLp norm is reduced to a reweighted l1 norm regularized problem, which admits an efficient solution by the soft thresholding technique. Second, the proposed model is combined with a robust M-estimate to cope with the case where measurements are corrupted by impulsive noise. In this case, we substitute the l2 norm data fidelity with the Welch M-estimate [39], which has shown the advantage of robustness against heavy-tailed impulsive noise. We develop an efficient scheme based on the split Bregman framework and half-quadratic (HQ) theory [40] to solve the resulting optimization problem. Simulation results on test images demonstrate that our proposed method outperforms state-of-the-art CS reconstruction methods. To evaluate our simulation results, we use two applicable quality assessors, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, as well as visual comparisons.
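The MM-to-reweighted-l1 reduction mentioned above can be sketched elementwise. This is a hedged illustration, not the paper's exact derivation: a concave sparsity penalty φ(|a|) is majorized at the current iterate by its tangent line, so each MM step is a weighted l1 problem solved in closed form by soft thresholding. The Lorentzian-type penalty φ(t) = log(1 + tᵖ/γᵖ) used here is an assumption modeled on the LL2 norm cited earlier, and p, γ, λ are illustrative values:

```python
import numpy as np

def soft_threshold(x, t):
    # closed-form solution of min_a 0.5*(a-x)^2 + t*|a|
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def phi_grad(t, p=0.5, g=0.1):
    # derivative of the assumed penalty log(1 + t^p / g^p) w.r.t. t, for t > 0
    t = np.maximum(t, 1e-12)
    return p * t ** (p - 1) / (g ** p + t ** p)

def mm_reweighted_l1(b, lam=0.05, iters=10):
    # minimize 0.5*(a - b)^2 + lam * phi(|a|) elementwise via MM:
    # each iteration solves a reweighted l1 problem by soft thresholding
    a = b.copy()
    for _ in range(iters):
        w = phi_grad(np.abs(a))          # weights from the tangent majorizer
        a = soft_threshold(b, lam * w)   # reweighted l1 step in closed form
    return a

b = np.array([0.01, 0.2, -1.5])
print(mm_reweighted_l1(b))   # small coefficients are driven to zero, large ones survive
```

Because φ is concave in |a|, small coefficients receive large weights (and are killed), while large coefficients are barely penalized, which is exactly the bias-reduction advantage of nonconvex penalties over the plain l1 norm.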

The rest of the paper is organized as follows: In Section 2, patch grouping, M-estimate and half-quadratic optimization are discussed briefly. In Section 3, our proposed method is presented. The simulation results are described in Section 4. Finally, the conclusions are provided in Section 5.

Section snippets

Patch grouping

As mentioned in the previous section, nonlocal sparsity inducing models are built on the nonlocal self-similarity prior. To exploit this prior, a patch grouping strategy is employed. Here, we give details of the patch grouping procedure and show how to construct the group. First, the given image u with N pixels is divided into n overlapped patches u_i of equal size b×b, i = 1, 2, …, n. Then, for each patch u_i, within the search window of size L×L, we search for c patches that are the most similar to

Proposed CS recovery via GSR-driven LLp norm regularization

In this section, we present the proposed method for CS image recovery. First, the recovery problems using group-based sparse representation with nonconvex LLp norm for both cases of Gaussian and non-Gaussian distributed measurement noises are formulated. Then, the optimization algorithms to efficiently solve the recovery problems are developed.

Results and discussion

In this section, the experimental results are presented to evaluate the performance of the proposed GSR-LLp and RGSR-LLp methods, which are compared with the state-of-the-art methods. Our experiments have been conducted on six grayscale images with size 256×256 as shown in Fig. 4. These test images are commonly used in other related publications, which enables a fair comparison of the results. Here, the CS measurements are generated by randomly sampling the Fourier transform coefficients of the

Conclusions

In this paper, a new image CS recovery method by exploiting the nonlocal self-similarity prior was proposed to achieve more accurate and robust performance. In the proposed method, the nonconvex LLp norm was introduced as a new sparsity metric to be utilized in the group-based sparse representation model, rather than the l0 norm. We also made a comparison between using LLp norm and lp norm for the GSR model. We demonstrated the superiority of utilizing the proposed LLp norm over lp norm in

Acknowledgments

The authors would like to acknowledge the funding support of Babol Noshirvani University of Technology through grant program No. BNUT/389059/98. The authors would also like to thank the anonymous reviewers for their valuable comments which were useful to improve the quality of the paper. They also would like to thank the authors of [19], [20], [22], [27] for sharing the source code of their papers; and the first author of [60] for beneficial discussions.

References (60)

  • S. Mun, J.E. Fowler, Block compressed sensing of images using directional transforms, in: 16th IEEE Int. Conf. on Image...
  • Z. Zha et al., Group sparsity residual constraint for image denoising (2017)
  • P. Blomgren et al., Color TV: total variation methods for restoration of vector-valued images, IEEE Trans. Image Process. (1998)
  • S. Osher et al., An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul. (2005)
  • J. Zhang et al., Image restoration using joint statistical modeling in a space transform domain, IEEE Trans. Circuits Syst. Video Technol. (2014)
  • M. Elad et al., Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process. (2006)
  • M. Aharon et al., K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process. (2006)
  • J. Yang et al., Image super-resolution via sparse representation, IEEE Trans. Image Process. (2010)
  • J. Zhang, D. Zhao, F. Jiang, W. Gao, Structural group sparse representation for image compressive sensing recovery, in:...
  • J. Zhang, S. Liu, D. Zhao, R. Xiong, S. Ma, Improved total variation based image compressive sensing recovery by...
  • N. He et al., Nonlocal sparse regularization model with application to image denoising, Multimed. Tools Appl. (2016)
  • J. Zhang et al., Image compressive sensing recovery via collaborative sparsity, IEEE J. Emerg. Sel. Topics Circuits Syst. (2012)
  • N. Eslahi et al., Compressive sensing image restoration using adaptive curvelet thresholding and nonlocal sparse regularization, IEEE Trans. Image Process. (2016)
  • J. Zhang et al., Group-based sparse representation for image restoration, IEEE Trans. Image Process. (2014)
  • J. Cai et al., A singular value thresholding algorithm for matrix completion, SIAM J. Optim. (2010)
  • S. Liu et al., MRI reconstruction via enhanced group sparsity and nonconvex regularization, Neurocomputing (2017)
  • Y. Hu et al., Fast and accurate matrix completion via truncated nuclear norm regularization, IEEE Trans. Pattern Anal. Mach. Intell. (2013)
  • S. Gu, L. Zhang, W. Zuo, X. Feng, Weighted nuclear norm minimization with application to image denoising, in: Proc of...
  • Y. Xie et al., Weighted Schatten p-norm minimization for image denoising and background subtraction, IEEE Trans. Image Process. (2016)
  • W. Dong et al., Compressive sensing via low-rank regularization, IEEE Trans. Image Process. (2014)
