Minimization of the $q$-ratio sparsity with $1 < q \le \infty$ for signal recovery☆
Introduction
Over the past decade, an extensive literature on compressive sensing (CS) has been developed; see the monographs [10], [14] for a comprehensive view. CS aims to find the sparsest solution $x \in \mathbb{R}^N$ from a few noisy linear measurements $y = Ax + \varepsilon$, where $A \in \mathbb{R}^{m \times N}$ with $m \ll N$ and $\|\varepsilon\|_2 \le \eta$, which can be formulated as the constrained $\ell_0$-minimization problem
$$\min_{x} \|x\|_0 \quad \text{subject to} \quad \|Ax - y\|_2 \le \eta. \tag{1}$$
Unfortunately, this is a combinatorial problem known to be NP-hard to solve [26]. Instead, a widely used alternative is the constrained $\ell_1$-minimization problem [9]
$$\min_{x} \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \eta,$$
which acts as a convex relaxation of $\ell_0$-minimization. To make the CS reconstruction process more efficient and robust, various sparse recovery methods exist in the literature, such as CS with prior information [25], robust CS [17], and Bayesian CS [18], [45].
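As a concrete illustration of the convex relaxation, the noiseless $\ell_1$-minimization problem (basis pursuit) can be recast as a linear program by splitting the variable into its positive and negative parts. The sketch below is not from the paper; it assumes SciPy's `linprog` is available and uses the standard split $x = u - v$ with $u, v \ge 0$:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Noiseless l1-minimization: min ||x||_1 s.t. Ax = y,
    recast as an LP via the split x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]
```

By construction, the returned vector satisfies the measurement constraint and has an $\ell_1$ norm no larger than that of any feasible vector, including the true sparse signal.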
Besides, a variety of non-convex recovery methods have been proposed to enhance sparsity, including $\ell_p$ ($0 < p < 1$) [4], [5], [6], [13], [42], smoothed $\ell_0$ (SL0) [24], $\ell_1$-$\ell_2$ [22], [23], [44], transformed $\ell_1$ (TL1) [48], smoothly clipped absolute deviation (SCAD) [12], minimax concave penalty (MCP) [46], and $\ell_1/\ell_2$ [30], [40], to name a few. These non-convex methods complicate both theoretical analysis and computational algorithms, but they do lead to better recovery performance than convex $\ell_1$-minimization in certain contexts. Among them, $\ell_p$ gives superior results for incoherent measurement matrices, while $\ell_1$-$\ell_2$ and $\ell_1/\ell_2$ are better choices for highly coherent measurement matrices. TL1 is a robust choice whether or not the measurement matrix is coherent.
Regarding the $\ell_1/\ell_2$ method, few works have studied it, owing to its complex structure. In fact, $\ell_1/\ell_2$ is neither convex nor concave, and it is not even globally continuous. Some theoretical analyses have been carried out when it is restricted to non-negative signals [11], [43]. Very recently, a few new attempts have been made. Rahimi et al. [30] gave some local optimality results and proposed to solve the problem via the Alternating Direction Method of Multipliers (ADMM) [2]. Some accelerated schemes were used for the $\ell_1/\ell_2$ minimization in [40]. Wang et al. [39] adopted the $\ell_1/\ell_2$ minimization in computed tomography (CT) reconstruction. Xu et al. [41] investigated exact recovery conditions as well as the stability of the $\ell_1/\ell_2$ method, and provided conditions under which $\ell_1/\ell_2$ minimization is equivalent to $\ell_0$ minimization. Petrosyan et al. [28] showed that the $\ell_1/\ell_2$ method outperforms $\ell_1$ minimization in jointly sparse vector reconstruction problems and can be solved effectively via manifold optimization methods. Meanwhile, [1] proposed a proximal subgradient algorithm with extrapolation for solving non-convex and non-smooth fractional programs, which encompass the $\ell_1/\ell_2$ minimization problem.
Inspired by the fact that $\ell_1/\ell_2$ is a special case of the $q$-ratio sparsity measure [21] with $q = 2$, we propose in this paper a more general scale-invariant approach for sparse signal recovery via minimizing the $q$-ratio sparsity measure $s_q(\cdot)$. We aim to investigate the minimization of the $q$-ratio sparsity both theoretically and numerically. The main contributions of the present paper are fourfold: (1) We give a further study of the properties of the $q$-ratio sparsity and illustrate them with examples. (2) We propose the minimization of the $q$-ratio sparsity for sparse signal recovery, which encompasses the well-known $\ell_0$-minimization, $\ell_1/\ell_2$ and $\ell_1/\ell_\infty$ methods. (3) We give a verifiable sufficient condition for exact sparse recovery and derive concise bounds on both the $\ell_1$ norm and the $\ell_q$ norm of the reconstruction error for the case $1 < q \le \infty$ in terms of the $q$-ratio constrained minimal singular values (CMSV) introduced in [51]. We establish the corresponding stable and robust recovery results involving both sparsity defect and measurement error. To the best of our knowledge, in the study of this kind of non-convex method we are the first to establish results for the compressible (not exactly sparse) case, since all the literature mentioned above merely considered the exactly sparse case. (4) We present efficient algorithms to solve the proposed problems via nonlinear fractional programming and conduct various numerical experiments to illustrate their superior performance.
The paper is organized as follows. In Section 2, we present the definition of the $q$-ratio sparsity and some further study of its properties. In Section 3, we propose the sparse signal recovery methodology via minimization of the $q$-ratio sparsity. In Section 4, we provide a verifiable sufficient condition for exact sparse recovery and derive the reconstruction error bounds based on the $q$-ratio CMSV for the proposed method in the case $1 < q \le \infty$. In Section 5, we design algorithms to solve the problem. Section 6 contains the numerical experiments. Finally, conclusions and future work are given in Section 7.
Throughout the paper, we denote vectors by lower case letters, e.g., $x$, and matrices by upper case letters, e.g., $A$. Vectors are columns by default. $x^T$ denotes the transpose of $x$, while $x_i$ denotes the $i$-th component of $x$. We introduce the notation $[N]$ for the set $\{1, 2, \ldots, N\}$ and $|S|$ for the cardinality of a set $S$. Furthermore, we write $S^c$ for the complement of a set $S$ in $[N]$. The support of a vector $x \in \mathbb{R}^N$ is the index set of its nonzero entries, i.e., $\mathrm{supp}(x) = \{i \in [N] : x_i \neq 0\}$. For any vector $x \in \mathbb{R}^N$, we denote $\|x\|_0 = |\mathrm{supp}(x)|$ and we say $x$ is $s$-sparse if at most $s$ of its entries are nonzero, i.e., if $\|x\|_0 \le s$. The $\ell_q$-norm is $\|x\|_q = \left(\sum_{i=1}^N |x_i|^q\right)^{1/q}$ for any $q \in (0, \infty)$, while $\|x\|_\infty = \max_{i \in [N]} |x_i|$. For a vector $x \in \mathbb{R}^N$ and a set $S \subseteq [N]$, we denote by $x_S$ the vector which coincides with $x$ on the indices in $S$ and is extended to zero outside $S$. In addition, for any matrix $A \in \mathbb{R}^{m \times N}$, we denote the kernel of $A$ by $\ker(A) = \{x \in \mathbb{R}^N : Ax = 0\}$.
$q$-ratio sparsity
In order to be self-contained, we first give the full definition of the $q$-ratio sparsity and then present some further study of its properties, together with some illustrative examples. Definition 1 ([21], [51]) For any non-zero $x \in \mathbb{R}^N$ and non-negative $q \notin \{0, 1, \infty\}$, the $q$-ratio sparsity level of $x$ is defined as
$$s_q(x) = \left(\frac{\|x\|_1}{\|x\|_q}\right)^{\frac{q}{q-1}}.$$
The cases of $q \in \{0, 1, \infty\}$ are evaluated as limits: $s_0(x) = \lim_{q \to 0} s_q(x) = \|x\|_0$, $s_1(x) = \lim_{q \to 1} s_q(x) = \exp(H_1(\pi(x)))$, $s_\infty(x) = \lim_{q \to \infty} s_q(x) = \|x\|_1 / \|x\|_\infty$, where $H_1(\pi(x)) = -\sum_{i=1}^N \pi_i(x) \log \pi_i(x)$ is the Shannon entropy of the distribution $\pi_i(x) = |x_i| / \|x\|_1$.
In fact, any non-zero vector
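A minimal sketch of computing the $q$-ratio sparsity, including the limit cases from the definition above (the limit expressions follow the cited definition in [21], [51]; treat this helper as an illustration, not the paper's code):

```python
import numpy as np

def q_ratio_sparsity(x, q):
    """q-ratio sparsity level s_q(x) = (||x||_1 / ||x||_q)^(q/(q-1)),
    with the cases q in {0, 1, inf} evaluated as limits."""
    x = np.asarray(x, dtype=float)
    if not np.any(x):
        raise ValueError("x must be a non-zero vector")
    a = np.abs(x)
    if q == 0:
        return float(np.count_nonzero(a))               # s_0(x) = ||x||_0
    if q == 1:
        p = a[a > 0] / a.sum()                          # limit q -> 1: exponential of the
        return float(np.exp(-np.sum(p * np.log(p))))    # Shannon entropy of pi(x)
    if np.isinf(q):
        return float(a.sum() / a.max())                 # s_inf(x) = ||x||_1 / ||x||_inf
    return float((a.sum() / (a ** q).sum() ** (1.0 / q)) ** (q / (q - 1.0)))
```

For a vector whose nonzero entries all share the same magnitude, $s_q(x)$ equals the number of nonzeros for every $q$, which is what makes it a sensible sparsity measure.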
Methodology
Based on the $q$-ratio sparsity $s_q(\cdot)$, we here consider the following non-convex minimization problem for sparse signal recovery:
$$\min_{z \in \mathbb{R}^N} s_q(z) \quad \text{subject to} \quad Az = y, \tag{5}$$
where $A \in \mathbb{R}^{m \times N}$ with $m \ll N$, and some $q \in (0, \infty]$ is pre-given. Obviously, the problem (5) approaches the $\ell_0$-minimization problem (1) as $q$ approaches $0$.
To illustrate the sparsity promoting ability of the problem (5), we revisit a toy example which was discussed in [30]. Specifically, let the measurement matrix
Recovery analysis
In this section, we study the global optimality results for the $s_q$ minimization with $1 < q \le \infty$. We choose not to present the local optimality results based on the null space property as given in [30], [41]. We conjecture that the local optimality results for the $\ell_1/\ell_2$ minimization in [30], [41] also hold for the $s_q$ minimization with $1 < q \le \infty$, with some minor careful modifications.
We start with a sufficient condition for exact sparse recovery using the $s_q$ minimization with $1 < q \le \infty$. For some
Algorithms
ADMM-type algorithms were used in [30], [41] for solving the noiseless $\ell_1/\ell_2$ minimization problem. Unfortunately, they cannot be generalized directly to the $s_q$ minimization. In fact, the minimization problem (6) belongs to nonlinear fractional programming, which is comprehensively discussed in Chapter 4 of [33]; see also [31], [32]. We investigate two kinds of methods for solving it, namely parametric methods and a change-of-variable method. Note that it is straightforward to add a
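The parametric approach for nonlinear fractional programming can be sketched generically via Dinkelbach's classical scheme. The snippet below is an illustration only: the concrete subproblem solver for the $s_q$ objective is not shown in this excerpt, so a toy scalar fractional program (minimized by grid search) stands in for it.

```python
import numpy as np

def dinkelbach(solve_sub, f, g, x0, tol=1e-10, max_iter=100):
    """Dinkelbach's parametric scheme for min f(x)/g(x) with g > 0:
    alternate solving x_{k+1} = argmin_x f(x) - lam_k * g(x)
    and updating lam_{k+1} = f(x_{k+1}) / g(x_{k+1})."""
    x, lam = x0, f(x0) / g(x0)
    for _ in range(max_iter):
        x = solve_sub(lam)                 # parametric subproblem
        new_lam = f(x) / g(x)
        converged = abs(lam - new_lam) < tol
        lam = new_lam
        if converged:
            break
    return x, lam

# Toy fractional program on a grid: min (t^2 + 1) / t over [0.5, 3],
# whose optimum is t = 1 with value 2.
grid = np.linspace(0.5, 3.0, 251)
f = lambda t: t ** 2 + 1.0
g = lambda t: t
t_star, val = dinkelbach(lambda lam: grid[np.argmin(f(grid) - lam * g(grid))], f, g, 3.0)
```

The key property used here is that the parameter sequence is monotone: each subproblem solution either certifies optimality or strictly decreases the objective ratio.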
Numerical experiments
In the following experiments, we consider two types of measurement matrices, i.e., the Gaussian random matrix and the oversampled discrete cosine transform (DCT) matrix. Specifically, the Gaussian random matrix is generated as $1/\sqrt{m}$ times an $m \times N$ matrix with entries drawn i.i.d. from the standard normal distribution. For the oversampled DCT matrix, we use $A = [a_1, a_2, \ldots, a_N]$ with $a_j = \frac{1}{\sqrt{m}} \cos\left(\frac{2\pi w j}{F}\right)$, $j = 1, \ldots, N$, where $F$ is the refinement factor and $w$ is a random vector uniformly distributed in $[0, 1]^m$. An important property of the DCT matrix
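The two matrix ensembles can be generated as follows. This is a sketch, not the paper's code: the oversampled DCT construction with refinement factor `F` follows the form standard in this line of work (e.g., in [30]), and its details are an assumption where the snippet above is truncated.

```python
import numpy as np

def gaussian_matrix(m, n, rng):
    """1/sqrt(m) times an m x n matrix with i.i.d. N(0, 1) entries."""
    return rng.standard_normal((m, n)) / np.sqrt(m)

def oversampled_dct(m, n, F, rng):
    """Oversampled DCT matrix: columns a_j = cos(2*pi*w*j/F)/sqrt(m)
    with w ~ Uniform([0,1]^m); larger F yields more coherent columns."""
    w = rng.uniform(0.0, 1.0, size=m)
    j = np.arange(1, n + 1)
    return np.cos(2.0 * np.pi * np.outer(w, j) / F) / np.sqrt(m)
```

The refinement factor `F` controls the coherence of the DCT columns, which is why this ensemble is the usual stress test for highly coherent sensing.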
Conclusion
In this paper, we studied the sparse signal recovery approach via minimizing the $q$-ratio sparsity. For the case $q = 2$, it reduces to the problem of minimizing the ratio of the $\ell_1$ and $\ell_2$ norms. We gave a verifiable sufficient condition for exact sparse recovery and established the corresponding reconstruction error bounds in terms of the $q$-ratio CMSV. Two computational algorithms were proposed to approximately solve this non-convex problem. In addition, a variety of numerical experiments were
CRediT authorship contribution statement
Zhiyong Zhou: Conceptualization, Methodology, Software, Writing – original draft. Jun Yu: Supervision, Writing – review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
We would like to thank the Editor, the Associate Editor, and two anonymous referees for their detailed and insightful comments and suggestions that helped to improve the quality of this paper.
References (51)
- S. Foucart, M.-J. Lai, Sparsest solutions of underdetermined linear systems via $\ell_q$-minimization for $0 < q \le 1$, Appl. Comput. Harmon. Anal. (2009)
- et al., Sparse filtering with the generalized lp/lq norm and its applications to the condition monitoring of rotating machinery, Mech. Syst. Signal Process. (2018)
- A. Petrosyan, et al., Reconstruction of jointly sparse vectors via manifold optimization, Appl. Numer. Math. (2019)
- Z. Zhou, J. Yu, On $q$-ratio CMSV for sparse recovery, Signal Process. (2019)
- Z. Zhou, J. Yu, Sparse recovery based on $q$-ratio constrained minimal singular values, Signal Process. (2019)
- R. I. Boţ, M. N. Dao, G. Li, Extrapolated proximal subgradient algorithms for nonconvex and nonsmooth fractional...
- S. Boyd, et al., Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. (2011)
- E. Candès, T. Tao, The Dantzig selector: statistical estimation when p is much larger than n, Ann. Stat. (2007)
- R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization, IEEE Signal Process. Lett. (2007)
- R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Probl. (2008)
- R. Chartrand, W. Yin, Iteratively reweighted algorithms for compressive sensing, in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008
- S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput.
- A. Cohen, W. Dahmen, R. DeVore, Compressed sensing and best $k$-term approximation, J. Am. Math. Soc.
- D. L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory
- Y. C. Eldar, G. Kutyniok (Eds.), Compressed Sensing: Theory and Applications
- E. Esser, Y. Lou, J. Xin, A method for finding structured sparse solutions to nonnegative least squares problems with applications, SIAM J. Imaging Sci.
- J. Fan, R. Li, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Am. Stat. Assoc.
- S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing
- N. Hurley, S. Rickard, Comparing measures of sparsity, IEEE Trans. Inf. Theory
- et al., Robust sparse recovery in impulsive noise via continuous mixed norm, IEEE Signal Process. Lett.
- S. Ji, Y. Xue, L. Carin, Bayesian compressive sensing, IEEE Trans. Signal Process.
- T. Lipp, S. Boyd, Variations and extension of the convex–concave procedure, Optim. Eng.
- M. E. Lopes, Unknown sparsity in compressed sensing: denoising and inference, IEEE Trans. Inf. Theory
- Y. Lou, M. Yan, Fast L1–L2 minimization via a proximal operator, J. Sci. Comput.
☆ This work was supported by the Swedish Research Council grant (Reg. No. 340-2013-5342) and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ21A010003.