ℓp norm regularization based group sparse representation for image compressed sensing recovery☆
Introduction
In recent years, Candès et al. [1], [2], and Donoho [3] developed a new framework called compressed sensing (CS) for simultaneous sampling and compression of signals at sub-Nyquist rates. They showed that, under certain conditions, the signal can be recovered exactly from a small set of measurements, possibly corrupted by noise, by solving an optimization problem. Let x ∈ ℝ^N be an image in vectorized form. A number M (M ≪ N) of linear and non-adaptive measurements of x are acquired through the following affine transformation: y = Φx + e, (1) where y ∈ ℝ^M is the measurement vector, Φ ∈ ℝ^{M×N} is a measurement matrix, and e ∈ ℝ^M is the measurement noise vector. The usual choice for the measurement matrix is a random matrix [4]. The goal is to recover the unknown image x from y by solving Eq. (1). Since M < N, the reconstruction of x from y is ill-posed in general. However, if x is sufficiently sparse, it is actually possible to recover the image. Assuming that the noise is bounded in ℓ2 norm with ‖e‖2 ≤ ε, the recovery of x is formulated as the following constrained optimization problem: min_x Ψ(x) s.t. ‖y − Φx‖2 ≤ ε, (2) where Ψ(x) is called the sparsity inducing model, and ‖y − Φx‖2 is known as the data fidelity model. In some cases, a few additional constraints can be imposed on problem (2) to further restrict the feasible set. By using an appropriate regularization parameter λ, problem (2) can be converted into the following unconstrained problem: min_x (1/2)‖y − Φx‖2² + λΨ(x). (3)
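As a concrete illustration of the measurement model and the unconstrained recovery problem above, the following sketch generates random Gaussian measurements of a sparse signal and recovers it with an iterative shrinkage (ISTA) solver for an ℓ1-regularized problem. The dimensions, the Gaussian measurement matrix, and the ℓ1/ISTA solver are illustrative textbook choices, not the method proposed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# A K-sparse signal of length N, observed through M < N random measurements.
N, M, K = 256, 96, 8
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement matrix
e = 0.01 * rng.standard_normal(M)               # bounded measurement noise
y = Phi @ x_true + e                            # affine acquisition y = Phi x + e

# ISTA for min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1 (the unconstrained form)
lam = 0.05
L = np.linalg.norm(Phi, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(500):
    z = x - (Phi.T @ (Phi @ x - y)) / L         # gradient step on data fidelity
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Even though the system is underdetermined (M < N), the sparsity prior makes accurate recovery possible.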
The sparsity inducing model, which plays a key role in achieving high quality images, is built on the image prior knowledge. Usually two types of priors are considered, namely local smoothness and nonlocal self-similarity. Local smoothness based models, called local sparsity models, are built on the assumption that images are locally smooth except at the edges. Early local models mainly consider the prior on the level of pixels [5], such as total variation (TV) model [6], [7], [8]. These models demonstrate high effectiveness in reconstructing smooth areas. However, they cannot deal well with image details and fine structures, and tend to over-smooth images [9]. Thereafter, patch-based models were proposed, which have shown promising performance in image restoration [10], [11], [12], [13]. The main idea is to decompose the image into overlapped patches and represent each patch by a few elements from a basis set called dictionary, which is learned from natural images. The learned dictionaries enjoy the advantage of being better adapted to image local structures, thereby enhancing the sparsity. However, in the process of dictionary learning and sparse coding, each patch is considered independently, which ignores the relationships between similar patches [14].
Recently, it has been shown that nonlocal self-similarity based models, called nonlocal sparsity models, are effective in preserving details and demonstrate great advantages in image reconstruction [15]. The nonlocal self-similarity depicts the repetitiveness of higher level patterns (e.g., textures and structures) globally positioned in images [9]. To exploit the nonlocal self-similarity prior, the image is divided into overlapped patches. Then, for each patch, a set of similar patches is searched for within a search window and stacked to form a data matrix, called a group. The patches in the group are correlated; the strong correlations allow one to develop a much more accurate sparsity inducing model by exploiting nonlocal redundancies [16].
Several nonlocal sparsity model based methods have been proposed for high fidelity image CS recovery [9], [17], [18], [19], [20]. In [17], a framework for CS image recovery via a collaborative sparsity model (CoSM), which simultaneously enforces local and nonlocal sparsity models in an adaptive hybrid space-transform domain, was proposed. In [9], a joint statistical model (JSM), closely related to CoSM, was proposed for image restoration applications. Nasser et al. [18], [19] proposed two CS image recovery methods using a joint adaptive sparsity measure (JASM) [18] and joint adaptive sparsity regularization (JASR) [19], which enforce both local and nonlocal sparsity models in the transform domain. The nonlocal sparsity models in CoSM, JSM, JASM and JASR are characterized by means of the coefficients obtained by applying a 3D transform to the 3D array generated from the group. In [20], a group-based sparse representation (GSR) model for image restoration was proposed, which has shown state-of-the-art performance. The GSR model assumes that each group can be accurately represented by a few elements from a self-adaptively learned dictionary. Due to the particular dictionary learning method by singular value decomposition (SVD), the GSR based approach and a standard low-rank approximation method [21] are nearly equivalent. The nonlocal sparsity model given in [20] is described as the ℓ0 norm of all sparse codes obtained by applying an adaptively learned dictionary on the groups. Recent studies [22], [23] have shown that better CS recovery performance can be obtained by exploiting continuous and nonconvex functions as a metric to promote sparsity rather than the ℓ0 norm. In [22], an image CS recovery method using the GSR model with nonconvex weighted ℓp minimization (denoted as GSR-NCR) was proposed.
In the group-based sparse representation model proposed in [23], the nonconvex log-sum function was utilized to promote sparsity of the coefficients obtained by performing two fixed transforms on the group. For the low-rank based models, the truncated nuclear norm (TNN) [24], the weighted nuclear norm (WNN) [25], the weighted Schatten p-norm (WSN) [26] and the logdet function [27] were proposed as surrogates for the rank function (instead of the convex nuclear norm).
In addition to the sparsity inducing model, the data fidelity model also affects the CS recovery performance. The data fidelity model reflects the statistics of the noise present in data acquisition systems (known as measurement noise). The noise is usually assumed bounded in ℓ2 norm, and recovery methods are developed based on this assumption. The results on the bounded noise cases are directly applicable to the case where the noise is Gaussian, since Gaussian noise is essentially bounded [28]. Thus, the ℓ2 norm data fidelity is optimal for Gaussian noise. However, in real applications, the noise often exhibits non-Gaussian properties [29]. A representative type is impulsive noise, which is characterized by a small percentage of samples having extremely large values [30]. When the measurements are corrupted by impulsive heavy-tailed noise, ℓ2 norm data fidelity based CS recovery methods fail, because the ℓ2 norm is highly sensitive to outliers in the measurements [31]. Therefore, it is necessary to develop a data fidelity model that is matched to non-Gaussian distributed noise and thus yields better recovery performance. To this end, and based on the theory of robust statistics [32], the ℓ2 norm is replaced with a sub-quadratic function of the noise.
In recent years, several data fidelity models have been developed to suppress large errors in the measurements for CS recovery. In [33], the ℓ1 norm was employed as the data fidelity model and combined with an ℓ1 norm penalty to obtain an optimization problem. In [34], the Lorentzian norm (or LL2 norm) was used as the data fidelity model. The optimization problem in [34] is formulated as Lorentzian Basis Pursuit (Lorentzian BP), which minimizes the ℓ1 norm of a sparse signal subject to a nonlinear constraint based on the Lorentzian norm. Later, the authors in [35] replaced the ℓ2 norm data fidelity employed by the traditional iterative hard thresholding (IHT) algorithms with a Lorentzian norm data fidelity. In [30], an M-estimate [32] with the Huber function (Huber M-estimate) was employed in place of the ℓ2 norm data fidelity to gain more robust performance. In [36], the generalized ℓp norm was utilized as the metric for the residual error, yielding an optimization problem which is solved by the alternating direction method (ADM). Although these approaches outperform the traditional CS recovery algorithms in impulsive environments, they cannot deal well with image details and fine structures, since they only exploit local sparsity models. On the other hand, the nonlocal sparsity based CS recovery methods in the literature are based on the Gaussian assumption, and so perform poorly when the measurements are corrupted by impulsive noise.
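The contrast between the quadratic data fidelity and sub-quadratic robust losses can be seen numerically. The sketch below uses the textbook definitions of the Huber and Welsch functions with an illustrative tuning constant c (the specific value is an assumption, not taken from the paper):

```python
import numpy as np

def l2_loss(r):
    """Quadratic penalty: grows without bound, dominated by outliers."""
    return 0.5 * r ** 2

def huber_loss(r, c=1.0):
    """Huber penalty: quadratic near zero, linear beyond the threshold c."""
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def welsch_loss(r, c=1.0):
    """Welsch penalty: saturates at c^2/2, so outliers have bounded influence."""
    return (c ** 2 / 2.0) * (1.0 - np.exp(-(r / c) ** 2))

# A single impulsive residual dominates the quadratic loss but is merely
# linear under Huber and bounded under Welsch.
r = np.array([0.1, -0.2, 50.0])  # last entry is an impulsive outlier
print(l2_loss(r)[-1], huber_loss(r)[-1], welsch_loss(r)[-1])
```

The bounded influence of the Welsch function is exactly why such sub-quadratic penalties are preferred as data fidelity terms under impulsive heavy-tailed noise.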
Contribution of this paper: In this paper, we propose an image CS recovery method that exploits the nonlocal self-similarity prior to achieve more accurate performance. We first introduce a new sparsity metric to utilize in the group-based sparse representation model. Our sparsity model is characterized by means of all sparse codes obtained by applying a principal component analysis (PCA)-based dictionary on the groups. To obtain sparse codes more accurately, we utilize the nonconvex ℓp norm with 0 < p < 1 to promote sparsity of the coefficients, rather than the ℓ0 norm. To solve the GSR-driven ℓp minimization problem, an efficient algorithm based on the split Bregman framework [37] is developed. Under this framework and the Majorization–Minimization (MM) algorithm [38], the sub-problem related to the ℓp norm is reduced to a reweighted ℓ1 norm regularized problem, which admits an efficient solution by the soft thresholding technique. Second, the proposed model is combined with a robust M-estimate to cope with the case where the measurements are corrupted by impulsive noise. In this case, we substitute the ℓ2 norm data fidelity with the Welsch M-estimate [39], which has shown the advantage of robustness against heavy-tailed impulsive noise. We develop an efficient scheme based on the split Bregman framework and half-quadratic (HQ) theory [40] to solve the resulting optimization problem. Simulation results on test images demonstrate that our proposed method outperforms state-of-the-art CS reconstruction methods. To evaluate our simulation results, we use two applicable quality assessors, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), as well as visual comparisons.
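The MM step described above can be illustrated on a toy scalar problem. The sketch below approximates the proximal operator of λ|a|^p by repeatedly majorizing the concave |a|^p with a weighted ℓ1 term w·|a|, where w = p(|a_prev| + ε)^(p−1) is the standard linearization weight, and solving each surrogate by soft thresholding. This is a simplification for intuition, not the paper's full split Bregman scheme:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the ell_1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lp_shrink(z, lam, p=0.5, iters=10, eps=1e-8):
    """Approximate prox of lam*|a|^p via MM: at each step |a|^p is
    majorized by w*|a| with w = p*(|a_prev| + eps)^(p-1), and the
    resulting reweighted ell_1 problem is solved by soft thresholding."""
    a = z.copy()
    for _ in range(iters):
        w = p * (np.abs(a) + eps) ** (p - 1.0)
        a = soft(z, lam * w)
    return a

z = np.array([-2.0, -0.3, 0.05, 1.5])
print(lp_shrink(z, lam=0.1, p=0.5))
```

Compared with plain ℓ1 shrinkage, the ℓp weights grow as |a| shrinks, so small coefficients are suppressed aggressively while large coefficients suffer almost no shrinkage bias.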
The rest of the paper is organized as follows: In Section 2, patch grouping, M-estimate and half-quadratic optimization are discussed briefly. In Section 3, our proposed method is presented. The simulation results are described in Section 4. Finally, the conclusions are provided in Section 5.
Patch grouping
As mentioned in the previous section, nonlocal sparsity inducing models are built on the nonlocal self-similarity prior. To exploit this prior, a patch grouping strategy is employed. Here, we give details of the patch grouping procedure and show how to construct the group. First, the given image with N pixels is divided into n overlapped patches of equal size. Then, for each patch, within the search window, we search for the c patches that are the most similar to
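The grouping procedure can be sketched as follows. The patch size b, window size, and group size c below are illustrative placeholders, not the parameter values used in the paper:

```python
import numpy as np

def extract_patch(img, i, j, b):
    """Return the b x b patch with top-left corner (i, j), vectorized."""
    return img[i:i + b, j:j + b].ravel()

def group_for_patch(img, i, j, b=8, win=10, c=16):
    """Stack the c patches most similar (in Euclidean distance) to the
    reference patch at (i, j), searched within a window around it,
    as the columns of a b^2 x c 'group' matrix."""
    H, W = img.shape
    ref = extract_patch(img, i, j, b)
    patches, dists = [], []
    for r in range(max(0, i - win), min(H - b, i + win) + 1):
        for s in range(max(0, j - win), min(W - b, j + win) + 1):
            p = extract_patch(img, r, s, b)
            patches.append(p)
            dists.append(np.sum((p - ref) ** 2))
    order = np.argsort(dists)[:c]          # indices of the c closest patches
    return np.stack([patches[k] for k in order], axis=1)

img = np.random.default_rng(1).random((64, 64))
G = group_for_patch(img, 10, 10, b=8, win=10, c=16)
print(G.shape)
```

The reference patch itself (distance zero) always appears as the first column, and the strong correlation among columns is what the group-based sparsity model exploits.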
Proposed CS recovery via GSR-driven ℓp norm regularization
In this section, we present the proposed method for CS image recovery. First, the recovery problems using group-based sparse representation with the nonconvex ℓp norm are formulated for both Gaussian and non-Gaussian distributed measurement noise. Then, the optimization algorithms to efficiently solve the recovery problems are developed.
Results and discussion
In this section, experimental results are presented to evaluate the performance of the proposed GSR-ℓp and RGSR-ℓp methods, which are compared with the state-of-the-art methods. Our experiments have been conducted on six grayscale images of size 256×256, as shown in Fig. 4. These test images are commonly used in other related publications, which enables a fair comparison of the results. Here, the CS measurements are generated by randomly sampling the Fourier transform coefficients of the
Conclusions
In this paper, a new image CS recovery method exploiting the nonlocal self-similarity prior was proposed to achieve more accurate and robust performance. In the proposed method, the nonconvex ℓp norm was introduced as a new sparsity metric to be utilized in the group-based sparse representation model, rather than the ℓ0 norm. We also made a comparison between using the ℓp norm and the ℓ1 norm for the GSR model. We demonstrated the superiority of utilizing the proposed ℓp norm over the ℓ1 norm in
Acknowledgments
The authors would like to acknowledge the funding support of Babol Noshirvani University of Technology through grant program No. BNUT/389059/98. The authors would also like to thank the anonymous reviewers for their valuable comments which were useful to improve the quality of the paper. They also would like to thank the authors of [19], [20], [22], [27] for sharing the source code of their papers; and the first author of [60] for beneficial discussions.
References (60)
- et al., Nonlinear total variation based noise removal algorithms, Physica D (1992)
- et al., Image compressive sensing recovery using adaptively learned sparsifying basis via ℓ0 minimization, Signal Process. (2014)
- et al., Image/video compressive sensing recovery using joint adaptive sparsity measure, Neurocomputing (2016)
- et al., Group-based sparse representation for image compressive sensing reconstruction with non-convex regularization, Neurocomputing (2018)
- et al., Iterative hard thresholding for compressed sensing, Appl. Comput. Harmon. Anal. (2009)
- et al., CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal. (2009)
- et al., Robust image compressive sensing based on M-estimator and nonlocal low-rank regularization, Neurocomputing (2018)
- et al., Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inform. Theory (2006)
- et al., Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory (2006)
- Compressed sensing, IEEE Trans. Inform. Theory (2006)
- Group sparsity residual constraint for image denoising
- Color TV: total variation methods for restoration of vector-valued images, IEEE Trans. Image Process.
- An iterative regularization method for total variation-based image restoration, Multiscale Model. Simul.
- Image restoration using joint statistical modeling in a space transform domain, IEEE Trans. Circuits Syst. Video Technol.
- Image denoising via sparse and redundant representations over learned dictionaries, IEEE Trans. Image Process.
- K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation, IEEE Trans. Signal Process.
- Image super-resolution via sparse representation, IEEE Trans. Image Process.
- Nonlocal sparse regularization model with application to image denoising, Multimed. Tools Appl.
- Image compressive sensing recovery via collaborative sparsity, IEEE J. Emerg. Sel. Topics Circuits Syst.
- Compressive sensing image restoration using adaptive curvelet thresholding and nonlocal sparse regularization, IEEE Trans. Image Process.
- Group-based sparse representation for image restoration, IEEE Trans. Image Process.
- A singular value thresholding algorithm for matrix completion, SIAM J. Optim.
- MRI reconstruction via enhanced group sparsity and nonconvex regularization, Neurocomputing
- Fast and accurate matrix completion via truncated nuclear norm regularization, IEEE Trans. Pattern Anal. Mach. Intell.
- Weighted Schatten p-norm minimization for image denoising and background subtraction, IEEE Trans. Image Process.
- Compressive sensing via low-rank regularization, IEEE Trans. Image Process.
- ☆
No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.image.2019.07.021.