Elsevier

Signal Processing

Volume 189, December 2021, 108250

Minimization of the q-ratio sparsity with $1 < q \leq \infty$ for signal recovery

https://doi.org/10.1016/j.sigpro.2021.108250

Highlights

  • We propose a general scale invariant approach for sparse signal recovery via the minimization of the q-ratio sparsity.

  • We establish a verifiable exact reconstruction condition and derive concise error bounds in terms of q-ratio constrained minimal singular values (CMSV).

  • We investigate two kinds of methods for solving the proposed problem, namely parametric methods and a change-of-variable method.

  • We conduct numerical experiments to demonstrate the advantageous performance of the proposed approaches over the state-of-the-art sparse recovery methods.

Abstract

In this paper, we propose a general scale-invariant approach for sparse signal recovery via the minimization of the q-ratio sparsity $s_q(z)=\left(\|z\|_1/\|z\|_q\right)^{\frac{q}{q-1}}$ with $q\in[0,\infty]$. The properties of the q-ratio sparsity measure are studied and illustrated with examples. For the proposed q-ratio sparsity minimization problem with $1<q\leq\infty$, we establish a verifiable exact reconstruction condition and derive concise error bounds in terms of q-ratio constrained minimal singular values (CMSV). From an algorithmic point of view, we recognize that the proposed problem belongs to the class of nonlinear fractional programs and investigate two kinds of methods for solving it: parametric methods and a change-of-variable method. Numerical experiments are conducted to demonstrate the advantageous performance of the proposed approaches over state-of-the-art sparse recovery methods.

Introduction

Over the past decade, an extensive literature on compressive sensing (CS) has been developed; see the monographs [10], [14] for a comprehensive view. CS aims to find the sparsest solution $x\in\mathbb{R}^N$ from a few noisy linear measurements $y=Ax+\varepsilon\in\mathbb{R}^m$, where $A\in\mathbb{R}^{m\times N}$ with $m\ll N$ and $\|\varepsilon\|_2\leq\eta$, which can be formulated as solving a constrained $\ell_0$-minimization problem:

$$\min_{z\in\mathbb{R}^N}\|z\|_0\quad\text{subject to}\quad\|Az-y\|_2\leq\eta.\tag{1}$$

Unfortunately, this is a combinatorial problem which is known to be computationally NP-hard [26]. Instead, a widely used surrogate is the following constrained $\ell_1$-minimization problem [9]:

$$\min_{z\in\mathbb{R}^N}\|z\|_1\quad\text{subject to}\quad\|Az-y\|_2\leq\eta,\tag{2}$$

which acts as a convex relaxation of the $\ell_0$-minimization. In order to make the CS reconstruction process more efficient and robust, different sparse recovery methods exist in the literature, such as CS with prior information [25], robust CS [17] and Bayesian CS [18], [45].
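As a concrete illustration of the convex relaxation above, in the noiseless case ($\eta=0$) the $\ell_1$ problem (basis pursuit) can be recast as a linear program via the standard split $z=u-v$ with $u,v\geq 0$. The sketch below is our own illustration (not the authors' code) and assumes NumPy and SciPy are available; the helper name `l1_min` is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Basis pursuit: min ||z||_1 s.t. Az = y, via the standard LP
    reformulation z = u - v with u, v >= 0 and objective 1^T(u + v)."""
    m, N = A.shape
    c = np.ones(2 * N)                 # 1^T (u + v) equals ||z||_1
    A_eq = np.hstack([A, -A])          # enforces A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

rng = np.random.default_rng(0)
m, N, k = 40, 100, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = l1_min(A, A @ x)
print(np.linalg.norm(x_hat - x))   # small: exact recovery at these dimensions
```

With $m=40$ Gaussian measurements of a 5-sparse signal in $\mathbb{R}^{100}$, this regime lies well inside the $\ell_1$ recovery phase transition, so the LP typically recovers $x$ exactly up to solver tolerance.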

Besides, a variety of non-convex recovery methods have been proposed to enhance sparsity, including $\ell_p$ ($0<p<1$) [4], [5], [6], [13], [42], smoothed-$\ell_0$ (SL0) [24], $\ell_1-\ell_2$ [22], [23], [44], transformed $\ell_1$ (TL1) [48], smoothly clipped absolute deviation (SCAD) [12], minimax concave penalty (MCP) [46], and $\ell_1/\ell_2$ [30], [40], to name a few. These non-convex methods complicate both the theoretical analysis and the computational algorithms, but do lead to better recovery performance than the convex $\ell_1$-minimization in certain contexts. Among them, $\ell_p$ gives superior results for incoherent measurement matrices, while $\ell_1-\ell_2$ and $\ell_1/\ell_2$ are better choices for highly coherent measurement matrices. TL1 is a robust choice regardless of whether the measurement matrix is coherent or not.

Regarding the $\ell_1/\ell_2$ method, the literature on it is relatively sparse owing to its complex structure. In fact, $\ell_1/\ell_2$ is neither convex nor concave, and it is not even globally continuous. Some theoretical analyses have been carried out for the case restricted to non-negative signals [11], [43]. Very recently, a few new attempts have been made. Rahimi et al. [30] gave some local optimality results and proposed to solve it based on the Alternating Direction Method of Multipliers (ADMM) [2]. Some accelerated schemes were used for the $\ell_1/\ell_2$ minimization in [40]. Wang et al. [39] adopted the $\ell_1/\ell_2$ minimization in computed tomography (CT) reconstruction. Xu et al. [41] investigated exact recovery conditions as well as the stability of the $\ell_1/\ell_2$ method, and provided conditions under which $\ell_0$ minimization is equivalent to $\ell_1/\ell_2$ minimization. Petrosyan et al. [28] showed that the $\ell_1/\ell_2$ method outperforms $\ell_1$ minimization in jointly sparse vector reconstruction problems and can be solved effectively via manifold optimization methods. Meanwhile, [1] proposed a proximal subgradient algorithm with extrapolation for solving non-convex and non-smooth fractional programs, which encompass the $\ell_1/\ell_2$ minimization problem.

Inspired by the fact that $\|z\|_1^2/\|z\|_2^2$ is a special case of the q-ratio sparsity measure $s_q(z)=\left(\|z\|_1/\|z\|_q\right)^{\frac{q}{q-1}}$ [21] with $q=2$, we propose in this paper a more general scale-invariant approach for sparse signal recovery via minimizing the q-ratio sparsity measure $s_q(\cdot)$. We aim to investigate the minimization of the q-ratio sparsity both theoretically and numerically. The main contributions of the present paper are fourfold: (1) We further study the properties of the q-ratio sparsity and illustrate them with examples. (2) We propose the minimization of the q-ratio sparsity for sparse signal recovery, which encompasses the well-known $\ell_0$-minimization, $\ell_1/\ell_2$ and $\ell_1/\ell_\infty$ methods. (3) We give a verifiable sufficient condition for exact sparse recovery and derive concise bounds on both the $\ell_q$ norm and the $\ell_1$ norm of the reconstruction error for $\ell_1/\ell_q$ with $q\in(1,\infty]$ in terms of the q-ratio constrained minimal singular values (CMSV) introduced in [51]. We establish the corresponding stable and robust recovery results involving both sparsity defect and measurement error. To the best of our knowledge, among studies of this kind of non-convex methods we are the first to establish results for the compressible (not exactly sparse) case, since all the literature mentioned above considered only the exactly sparse case. (4) We present efficient algorithms to solve the proposed problem via nonlinear fractional programming, and conduct various numerical experiments to illustrate their superior performance.

The paper is organized as follows. In Section 2, we present the definition of the q-ratio sparsity and further study its properties. In Section 3, we propose the sparse signal recovery methodology via the minimization of the q-ratio sparsity. In Section 4, we provide a verifiable sufficient condition for exact sparse recovery and derive the reconstruction error bounds based on the q-ratio CMSV for the proposed method in the case $1<q\leq\infty$. In Section 5, we design algorithms to solve the problem. Section 6 contains the numerical experiments. Finally, conclusions and future work are given in Section 7.

Throughout the paper, we denote vectors by lower-case letters, e.g., $x$, and matrices by upper-case letters, e.g., $A$. Vectors are columns by default. $x^T$ denotes the transpose of $x$, while $x_i$ denotes the $i$th component of $x$. We introduce the notations $[N]$ for the set $\{1,2,\ldots,N\}$ and $|S|$ for the cardinality of a set $S$. Furthermore, we write $S^c$ for the complement $[N]\setminus S$ of a set $S$ in $[N]$. The support of a vector $x\in\mathbb{R}^N$ is the index set of its nonzero entries, i.e., $\mathrm{supp}(x):=\{i\in[N]:x_i\neq 0\}$. For any vector $x\in\mathbb{R}^N$, we denote $\|x\|_0=\sum_{i=1}^N 1_{\{x_i\neq 0\}}=|\mathrm{supp}(x)|$, and we say $x$ is $k$-sparse if at most $k$ of its entries are nonzero, i.e., if $\|x\|_0\leq k$. The $\ell_q$ norm is $\|x\|_q=\left(\sum_{i=1}^N |x_i|^q\right)^{1/q}$ for any $q\in(0,\infty)$, while $\|x\|_\infty=\max_{1\leq i\leq N}|x_i|$. For a vector $x\in\mathbb{R}^N$ and a set $S\subseteq[N]$, we denote by $x_S$ the vector which coincides with $x$ on the indices in $S$ and is extended to zero outside $S$. In addition, for any matrix $A\in\mathbb{R}^{m\times N}$, we denote the kernel of $A$ by $\ker(A)=\{x\in\mathbb{R}^N \mid Ax=0\}$.

Section snippets

q-ratio sparsity

In order to be self-contained, we first give the full definition of the q-ratio sparsity and then present further results on its properties, together with some illustrative examples.

Definition 1

([21], [51]) For any non-zero $z\in\mathbb{R}^N$ and non-negative $q\notin\{0,1,\infty\}$, the q-ratio sparsity level of $z$ is defined as

$$s_q(z)=\left(\frac{\|z\|_1}{\|z\|_q}\right)^{\frac{q}{q-1}}.$$

The cases $q\in\{0,1,\infty\}$ are evaluated as limits: $s_0(z)=\lim_{q\to 0}s_q(z)=\|z\|_0$, $s_1(z)=\lim_{q\to 1}s_q(z)=\exp\!\left(-\sum_{i=1}^N \frac{|z_i|}{\|z\|_1}\ln\frac{|z_i|}{\|z\|_1}\right)$, $s_\infty(z)=\lim_{q\to\infty}s_q(z)=\frac{\|z\|_1}{\|z\|_\infty}$.
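Definition 1 can be evaluated directly. The sketch below (our own illustration; the helper name `q_ratio_sparsity` is hypothetical) computes $s_q$ including the limiting cases, and also checks the scale invariance $s_q(cz)=s_q(z)$ that motivates the approach.

```python
import numpy as np

def q_ratio_sparsity(z, q):
    """q-ratio sparsity s_q(z) = (||z||_1 / ||z||_q)^(q/(q-1)),
    with the limiting cases q in {0, 1, inf} from Definition 1."""
    a = np.abs(np.asarray(z, dtype=float))
    a = a[a != 0]
    if q == 0:
        return float(a.size)                    # s_0 = ||z||_0
    if q == 1:
        p = a / a.sum()                         # s_1 = exp(Shannon entropy)
        return float(np.exp(-np.sum(p * np.log(p))))
    if np.isinf(q):
        return float(a.sum() / a.max())         # s_inf = ||z||_1 / ||z||_inf
    lq = np.sum(a ** q) ** (1.0 / q)
    return float((a.sum() / lq) ** (q / (q - 1.0)))

z = np.array([3.0, 0.0, -4.0, 0.0, 0.0])
print(q_ratio_sparsity(z, 0))        # 2.0, the true sparsity level
print(q_ratio_sparsity(z, 2))        # (7/5)^2 ≈ 1.96
print(q_ratio_sparsity(10 * z, 2))   # unchanged: s_q is scale invariant
```

Note that $1 \leq s_q(z) \leq \|z\|_0$ for every $q$, so $s_q$ interpolates between an entropy-based effective sparsity and the exact count of nonzeros.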

In fact, any non-zero vector $z\in\mathbb{R}^N$

Methodology

Based on the q-ratio sparsity $s_q(\cdot)$, we here consider the following non-convex minimization problem for sparse signal recovery:

$$\min_{z\in\mathbb{R}^N} s_q(z)\quad\text{subject to}\quad\|y-Az\|_2\leq\eta,\tag{5}$$

where $y=Ax+\varepsilon$ with $\|\varepsilon\|_2\leq\eta$, and some $q\in[0,\infty]$ is pre-given. Obviously, as $q\to 0$, the problem approaches the $\ell_0$-minimization problem (1), since $s_q(z)$ approaches $s_0(z)=\|z\|_0$.

To illustrate the sparsity promoting ability of the problem (5), we revisit a toy example which was discussed in [30]. Specifically, let the measurement matrixA=(11000010100001

Recovery analysis

In this section, we study the global optimality results for the $\ell_1/\ell_q$ minimization with $q\in(1,\infty]$. We choose not to present local optimality results based on the null space property as given in [30], [41]. We conjecture that the local optimality results for the $\ell_1/\ell_2$ minimization in [30], [41] also hold for the $\ell_1/\ell_q$ minimization with $q\in(1,\infty]$, with some minor careful modifications.

We start with a sufficient condition for exact sparse recovery using the $\ell_1/\ell_q$ minimization with $q\in(1,\infty]$. For some

Algorithms

ADMM-type algorithms were used in [30], [41] for solving the noiseless $\ell_1/\ell_2$ minimization problem. Unfortunately, they cannot be generalized directly to the $\ell_1/\ell_q$ minimization. In fact, the minimization problem (6) belongs to the class of nonlinear fractional programs, which were comprehensively discussed in Chapter 4 of [33]; see also [31], [32]. We investigate two kinds of methods for solving it, namely parametric methods and the change-of-variable method. Note that it is straightforward to add a
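The classical parametric method for fractional programs is Dinkelbach's scheme: to minimize $f(z)/g(z)$ with $g>0$, one repeatedly solves the subproblem $z_k=\arg\min_z f(z)-\lambda_k g(z)$ and updates $\lambda_{k+1}=f(z_k)/g(z_k)$, stopping when $F(\lambda)=\min_z f(z)-\lambda g(z)$ vanishes. The sketch below (our own illustration, not the authors' algorithm) demonstrates the scheme on a scalar toy problem whose subproblem has a closed form; for the $\ell_1/\ell_q$ objective the subproblems are themselves non-convex and require dedicated solvers.

```python
def dinkelbach(f, g, argmin_sub, lam0=1.0, tol=1e-10, max_iter=100):
    """Dinkelbach's parametric method for min f(z)/g(z) with g > 0:
    solve z_k = argmin_z f(z) - lam_k * g(z), then set
    lam_{k+1} = f(z_k)/g(z_k); stop once F(lam) = f(z_k) - lam_k g(z_k)
    is (numerically) zero."""
    lam = lam0
    for _ in range(max_iter):
        z = argmin_sub(lam)
        F = f(z) - lam * g(z)
        lam = f(z) / g(z)          # ratio at current iterate
        if abs(F) < tol:
            break
    return z, lam

# toy fractional program: min (x^2 + 1)/x over x > 0, optimum 2 at x = 1
f = lambda x: x ** 2 + 1
g = lambda x: x
argmin_sub = lambda lam: lam / 2   # closed-form minimizer of x^2 + 1 - lam*x
x_star, val = dinkelbach(f, g, argmin_sub)
print(x_star, val)                 # converges to 1.0 and 2.0
```

The iteration is monotone in $\lambda$ and converges superlinearly; in the sparse recovery setting $f=\|\cdot\|_1$ and $g=\|\cdot\|_q$ play the roles of numerator and denominator.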

Numerical experiments

In the following experiments, we consider two types of measurement matrices, i.e., the Gaussian random matrix and the oversampled discrete cosine transform (DCT) matrix. Specifically, the Gaussian random matrix is generated as $\frac{1}{\sqrt{m}}$ times an $m\times N$ matrix with entries drawn i.i.d. from the standard normal distribution. For the oversampled DCT matrix, we use $A=[a_1,a_2,\ldots,a_N]\in\mathbb{R}^{m\times N}$ with $a_j=\frac{1}{\sqrt{m}}\cos\left(\frac{2\pi w(j-1)}{F}\right)$, $j=1,2,\ldots,N$, where $w$ is a random vector uniformly distributed in $[0,1]^m$. An important property of the DCT matrix
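The two ensembles can be generated as follows; this is an illustrative sketch (the `coherence` helper is our own addition, not from the paper). The oversampling factor F controls how correlated neighboring DCT columns are, which is what makes this ensemble a hard test case.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, F = 64, 256, 10   # larger F -> more coherent DCT columns

# Gaussian random matrix: (1/sqrt(m)) * i.i.d. standard normal entries
A_gauss = rng.standard_normal((m, N)) / np.sqrt(m)

# oversampled DCT matrix: a_j = (1/sqrt(m)) cos(2*pi*w*(j-1)/F),
# with w drawn uniformly from [0, 1]^m
w = rng.uniform(size=m)
j = np.arange(N)
A_dct = np.cos(2 * np.pi * np.outer(w, j) / F) / np.sqrt(m)

def coherence(A):
    """Mutual coherence: max |<a_i, a_j>| over normalized columns i != j."""
    G = A / np.linalg.norm(A, axis=0)
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)
    return C.max()

print(coherence(A_gauss), coherence(A_dct))  # DCT is far more coherent
```

With these dimensions the Gaussian matrix has moderate coherence, while adjacent columns of the oversampled DCT matrix are nearly parallel; this is the regime where ratio-based penalties are reported to outperform plain $\ell_1$.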

Conclusion

In this paper, we studied the sparse signal recovery approach via minimizing the q-ratio sparsity. For the case $1<q\leq\infty$, it reduces to the problem of minimizing the ratio of the $\ell_1$ and $\ell_q$ norms. We gave a verifiable sufficient condition for exact sparse recovery and established the corresponding reconstruction error bounds in terms of the q-ratio CMSV. Two computational algorithms were proposed to approximately solve this non-convex problem. In addition, a variety of numerical experiments were

CRediT authorship contribution statement

Zhiyong Zhou: Conceptualization, Methodology, Software, Writing – original draft. Jun Yu: Supervision, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

We would like to thank the Editor, the Associate Editor, and two anonymous referees for their detailed and insightful comments and suggestions that helped improve the quality of this paper.

References (51)

  • R. Chartrand et al.

    Iteratively reweighted algorithms for compressive sensing

    2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008)

    (2008)
  • S.S. Chen et al.

    Atomic decomposition by basis pursuit

    SIAM J. Sci. Comput.

    (1998)
  • A. Cohen et al.

    Compressed sensing and best k-term approximation

    J. Am. Math. Soc.

    (2009)
  • D.L. Donoho

    Compressed sensing

    IEEE Trans. Inf. Theory

    (2006)
  • Y.C. Eldar et al.

    Compressed Sensing: Theory and Applications

    (2012)
  • E. Esser et al.

    A method for finding structured sparse solutions to nonnegative least squares problems with applications

    SIAM J. Imaging Sci.

    (2013)
  • J. Fan et al.

    Variable selection via nonconcave penalized likelihood and its oracle properties

    J. Am. Stat. Assoc.

    (2001)
  • S. Foucart et al.

    A Mathematical Introduction to Compressive Sensing

    (2013)
  • M. Grant, S. Boyd, CVX: Matlab Software for Disciplined Convex Programming, Version 2.1,...
  • N. Hurley et al.

    Comparing measures of sparsity

    IEEE Trans. Inf. Theory

    (2009)
  • A. Javaheri et al.

    Robust sparse recovery in impulsive noise via continuous mixed norm

    IEEE Signal Process. Lett.

    (2018)
  • S. Ji et al.

    Bayesian compressive sensing

    IEEE Trans. Signal Process.

    (2008)
  • T. Lipp et al.

    Variations and extension of the convex–concave procedure

    Optim. Eng.

    (2016)
  • M.E. Lopes

    Unknown sparsity in compressed sensing: denoising and inference

    IEEE Trans. Inf. Theory

    (2016)
  • Y. Lou et al.

    Fast L1–L2 minimization via a proximal operator

    J. Sci. Comput.

    (2018)
    This work was supported by the Swedish Research Council grant (Reg.No. 340-2013-5342) and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LQ21A010003.
