
Signal Processing

Volume 90, Issue 2, February 2010, Pages 702-706

Fast communication
On a simple derivation of the complementary matching pursuit

https://doi.org/10.1016/j.sigpro.2009.07.030

Abstract

Sparse approximation in a redundant basis has attracted considerable attention in recent years because of many practical applications. The problem basically involves solving an under-determined system of linear equations under some sparsity constraint. In this paper, we present a simple interpretation of the recently proposed complementary matching pursuit (CMP) algorithm. The interpretation shows that the CMP, unlike the classical MP, selects an atom and determines its weight based on a certain sparsity measure of the resulting residual error. Based on this interpretation, we also derive another simple algorithm which is seen to outperform CMP at low sparsity levels for noisy measurement vectors.

Introduction

In this paper, we consider the problem of solving the following under-determined system of equations: $Ax = b$, where the matrix $A$ is of dimension $K \times N$ with $K < N$, and $b$ is some non-zero observation vector. The columns of $A$ span the $K$-dimensional vector space $\mathbb{R}^K$. The system has an infinite number of solutions, but here we are concerned with the solution vector $x$ that has the minimum number of non-zero elements. In other words, we are interested in representing the given vector $b$ as a linear combination of the fewest columns of $A$, weighted by the non-zero elements of the solution vector $x$. This problem is commonly known as sparse representation or approximation in the signal processing literature. The sparse representation problem can thus be posed as $\min\{\|x\|_0 : Ax = b\}$, where the $L_0$ norm $\|x\|_0$ denotes the number of non-zero elements of $x$. The sparse approximation problem, which allows some approximation error, can be posed as $\min\{\|x\|_0 : \|Ax - b\|_2 \le \delta\}$ for some $\delta > 0$. The signals which form the columns of the matrix $A$ are commonly called atoms, and the set of atoms is called a dictionary. Since the signal set is larger than necessary (i.e., $N > K$) and the signals span $\mathbb{R}^K$, the signals in the dictionary are said to form a redundant basis.
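As a concrete illustration of the formulation above (the dimensions and the two non-zero positions are arbitrary choices for this sketch, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Under-determined system: K = 4 equations, N = 8 unknowns (K < N).
K, N = 4, 8
A = rng.standard_normal((K, N))

# A 2-sparse ground-truth vector: only two non-zero entries.
x_true = np.zeros(N)
x_true[[1, 5]] = [1.5, -0.7]

# The observation b lies in the span of just two columns (atoms) of A.
b = A @ x_true

# The L0 "norm" counts non-zero elements: here it equals 2.
l0 = np.count_nonzero(x_true)
print(l0)  # 2
```

Any $x$ with $Ax = b$ is a valid representation; the sparse representation problem asks for the one with the smallest such count.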

The exact solutions to the above problems can be found through combinatorial optimization methods. Since such methods require prohibitively high computational complexity, most of the research activity in this area is focused on finding approximate solutions with tractable complexity. In the literature, there are basically two approaches to arriving at a sub-optimal solution. One is a greedy approach which approximates the signal vector through a sequence of incremental approximations by suitably selecting atoms; this approach is known as matching pursuit (MP) [1], [2]. The other is basis pursuit (BP), which relaxes the L0-norm condition by replacing it with the L1 norm and solves the resulting problem through linear programming [3], [4], [5]. BP algorithms can produce more accurate solutions than matching pursuit algorithms but require higher complexity. Recently, some other iterative algorithms such as the regularized orthogonal matching pursuit (ROMP) [6], the compressive sampling matching pursuit (CoSaMP) [7], and the subspace pursuit (SP) [8] have been proposed. These algorithms aim to provide the same guarantees as BP but with computational complexity akin to that of orthogonal matching pursuit (OMP) [2], [9]. Another approach, called gradient pursuit [10], is similar to matching pursuit but updates the sparse solution vector at each iteration with a directional update computed from the gradient or the conjugate gradient.

Recently, Rath and Guillemot [11] proposed the complementary matching pursuit (CMP), which is similar to the classical matching pursuit but is performed in the coefficient space rather than the signal space. Through simulation results, they showed that performing the approximation in the coefficient space improves both the convergence speed and the sparsity of the resulting vectors. The improved performance of CMP over MP can be understood in a mathematically rigorous manner through the sensing dictionary formalism of Schnass and Vandergheynst [12]. In this paper, we provide a simple interpretation of the CMP based on the sparsity of the residual error. We believe this provides an intuitive understanding of the superior performance of CMP over MP in a noiseless scenario. Following this derivation, we then present a simple pursuit algorithm which selects atoms based on the sparsity of the residual error resulting from the MP algorithm. We compare it with the orthogonal CMP (OCMP) and the sensing dictionary method of Schnass and Vandergheynst [12] through simulation.

Section snippets

Complementary matching pursuits

Pursuit algorithms are iterative, with each iteration consisting of two steps: atom selection and residual update. The atom selection step selects the most likely candidate atom to be included in the sparse approximation. The residual update step updates the current residual error by subtracting the contribution of the selected atom from it. These two steps are performed sequentially and thus are dependent. The update step is dependent on the selected atom. Similarly, the
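The two-step iteration described above can be sketched for the classical MP as follows (a minimal illustration, assuming unit-norm atoms; this is the textbook MP recursion, not the CMP variant discussed in this section):

```python
import numpy as np

def matching_pursuit(A, b, n_iter=10):
    """Classical MP: greedy atom selection followed by residual update.

    Assumes the columns (atoms) of A have unit norm.
    """
    K, N = A.shape
    x = np.zeros(N)
    r = b.copy()                      # residual starts as the observation
    for _ in range(n_iter):
        corr = A.T @ r                # correlate every atom with the residual
        i = np.argmax(np.abs(corr))   # atom selection: best-matching atom
        x[i] += corr[i]               # accumulate the atom's coefficient
        r -= corr[i] * A[:, i]        # residual update: subtract contribution
    return x, r
```

Each iteration removes the component of the residual along the selected atom, so the residual norm is non-increasing.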

A simple greedy pursuit algorithm

The explanation in the previous section shows that CMP and OCMP take into account the "sparsity" of the resulting residual error when selecting an atom and determining its coefficient. In this sense, CMP and OCMP "look ahead" to future iterations. This is unlike MP and OMP, which only minimize the current residual error when selecting the atom and finding its coefficient. However, here we have assumed that the measurement vector is a pure linear sum of atoms without any noise. In a practical
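The "look-ahead" idea can be illustrated with a sketch like the following. This is an illustrative variant, not the paper's exact CMP/OCMP selection rule: the L1 norm of the dictionary correlations of the tentative residual is used here as an assumed surrogate for the residual's sparsity, and unit-norm atoms are assumed.

```python
import numpy as np

def lookahead_select(A, r):
    """Illustrative look-ahead atom selection (NOT the exact CMP rule).

    For each candidate atom, form the tentative residual after an MP-style
    update, then score it by the L1 norm of its correlations with the
    dictionary -- a crude surrogate sparsity measure of the remaining error.
    Assumes unit-norm atoms.
    """
    corr = A.T @ r
    best_i, best_score = -1, np.inf
    for i in range(A.shape[1]):
        r_new = r - corr[i] * A[:, i]          # tentative residual update
        score = np.sum(np.abs(A.T @ r_new))    # surrogate sparsity measure
        if score < best_score:
            best_i, best_score = i, score
    return best_i, corr[best_i]
```

Whereas MP picks the atom maximizing the current correlation, a look-ahead rule of this kind scores each candidate by a property of the residual it would leave behind.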

Simulation results

In order to compare the different sparse approximation algorithms, we performed simulations in MATLAB. First, we created a random dictionary of 25 atoms, each having 16 elements. The dictionary elements were generated using a standard Gaussian random number generator with mean 0 and variance 1. The atoms were then normalized to unit norm.
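The same dictionary setup can be reproduced as follows (the paper used MATLAB; this Python/NumPy equivalent uses an arbitrary seed, which is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(42)

# Random dictionary: 25 atoms, each with 16 elements,
# drawn from a standard Gaussian (mean 0, variance 1).
K, N = 16, 25
A = rng.standard_normal((K, N))

# Normalize each atom (column) to unit norm.
A /= np.linalg.norm(A, axis=0)
```

With K = 16 and N = 25 the dictionary is redundant (N > K), matching the problem setting of the paper.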

In the first experiment, we compared the various algorithms in terms of sparsity. Using the dictionary, we created signal vectors from linear combinations

Conclusions

In this paper, we have presented a simple explanation of the recently proposed CMP algorithm. The explanation provides an intuitive understanding of the better performance of CMP over the classical MP in a noiseless scenario. For measurements with additive random noise, the complementary pursuit algorithms are expected to perform poorly, and we have verified this through simulation.

Following the explanation of the CMP algorithm, we have presented a simple pursuit algorithm which combines the

References (12)

  • S. Mallat et al., Matching pursuits with time-frequency dictionaries, IEEE Trans. Signal Process. (1993)
  • Y.C. Pati, R. Rezaiifar, P.S. Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with...
  • S.S. Chen et al., Atomic decomposition by basis pursuit, SIAM J. Sci. Comput. (1998)
  • D.L. Donoho, For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution, Comm. Pure Appl. Math. (2006)
  • J.J. Fuchs, Recovery of exact sparse representations in the presence of bounded noise, IEEE Trans. Inform. Theory (2005)
  • D. Needell, R. Vershynin, Signal recovery from inaccurate and incomplete measurements via regularized orthogonal...
