Signal Processing, Volume 91, Issue 7, July 2011, Pages 1589-1603

Two-dimensional random projection

https://doi.org/10.1016/j.sigpro.2011.01.002

Abstract

As an alternative to adaptive nonlinear schemes for dimensionality reduction, linear random projection has recently proved to be a reliable means for high-dimensional data processing. Widespread application of conventional random projection in the context of image analysis is, however, mainly impeded by excessive computational and memory requirements. In this paper, a two-dimensional random projection scheme is considered as a remedy to this problem, and the associated key notion of concentration of measure is closely studied. It is then applied in the contexts of image classification and sparse image reconstruction. Finally, theoretical results are validated within a comprehensive set of experiments with synthetic and real images.

Introduction

The need for efficient collection, storage, and processing of large, high-dimensional data has increased drastically over the past decade. Unfortunately, high dimensionality jeopardizes the performance of inference tasks, due to the so-called "curse of dimensionality" phenomenon [1]. Dimensionality reduction techniques are often helpful in reducing this burden: they extract key low-dimensional information about the original high-dimensional signals, from which key properties of the original data can later be inferred. It is therefore desirable to formulate a method that efficiently reduces the dimensionality while preserving as much information from the original data as possible [2]. There are two main scenarios in which dimensionality reduction is successful: (1) Low-complexity inference, where only a small amount of information is required to make an inference about the data. Examples include function estimation, signal detection, and classification [3], [4]. (2) Low-dimensional signal models, in which the signals of interest have few degrees of freedom. It frequently happens in real-world applications that high-dimensional data actually obey some sort of concise low-dimensional model; examples include signals with finite rate of innovation, manifolds, etc. [5], [6].

While most conventional dimensionality reduction techniques are adaptive and involve nonlinear mappings to preserve certain desirable properties of the data, a linear, non-adaptive technique based on random projections (RPs) of the data has recently been introduced [7]. Random projections have been successfully utilized in low-complexity inference tasks, such as classification and estimation [3], [4], [8], [9]. RP has also demonstrated remarkable performance in obtaining a faithful low-dimensional representation of data belonging to low-complexity signal models, as in the acquisition and reconstruction of sparse signals and manifolds [10], [11], [2].
The remarkable properties of RP stem from a simple concentration of measure inequality, which states that, with high probability, the norm of a signal is well preserved under a random dimensionality-reducing projection [12]. This seminal fact allows us to show that, in many settings, the distinguishing characteristics of a signal can be encoded by a few random measurements. In particular, combining a simple union bound with the above result leads to the Johnson–Lindenstrauss (JL) lemma, which implies that the geometric structure of a point cloud is preserved under a random dimensionality-reducing projection [13]. As shown in [14], these results can be further extended to infinite sets with low-complexity geometric structure, such as sparse signals and manifolds.

Despite these impressive results, the application of conventional RP to high-dimensional data, such as images and videos, faces severe computational and memory difficulties, due to the so-called vector space model [15], [16], [17]. Under this model, each datum is treated as a vector, i.e. the columns (or rows) of each two-dimensional signal (2D-signal) are first stacked into a large vector; as a result, the row/column-wise structure of the image is ignored, and storage and computational requirements are drastically increased. To alleviate the cost of the conventional RP (1D-RP) scheme, the so-called two-dimensional random projection (2D-RP) has recently been proposed, which directly leverages the matrix structure of images and represents each datum as a matrix instead of a vector [15]. Similar ideas have previously appeared, for instance, in the context of 2D principal component analysis (2D-PCA) [18] and 2D linear discriminant analysis (2D-LDA) [19], in which extensions of conventional PCA and LDA from 1D-signals to the image domain have demonstrated substantial improvements in memory and computational efficiency.
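The concentration phenomenon underlying these results is easy to observe numerically. The following sketch (illustrative only; the dimensions, seed, and point-cloud setup are our own choices, not taken from the paper) projects a small point cloud through a scaled Gaussian matrix and measures how far the projected norms drift from the original ones:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 1000, 256           # ambient and projected dimensions
num_points = 20

# Random Gaussian projection, scaled so that E[||Ax||^2] = ||x||^2.
A = rng.standard_normal((m, n)) / np.sqrt(m)

X = rng.standard_normal((n, num_points))   # a point cloud in R^n
Y = A @ X                                  # projected points in R^m

# Concentration of measure: these ratios cluster tightly around 1,
# with fluctuations on the order of 1/sqrt(m).
ratios = np.linalg.norm(Y, axis=0) / np.linalg.norm(X, axis=0)
max_distortion = np.abs(ratios - 1).max()
```

For these dimensions the observed distortion is small for every point simultaneously, which is exactly the union-bound route to the JL lemma sketched above.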
In this paper, the idea of 2D-RP is studied and the corresponding concentration properties are closely analyzed. It is observed that the desirable properties of 1D-RP extend to the 2D analogue, while significant savings in computational and storage requirements are obtained. This gain, essentially due to the reduction in the number of degrees of freedom of the projection matrices, comes at the cost of extra measurements to obtain the same accuracy. 2D-RP is then applied to two important applications: (1) 2D compressive classification, which is concerned with the classification of images based on random measurements provided by 2D-RP; in particular, we consider multiple hypothesis testing given only random measurements of possibly noisy images. (2) Sparse 2D-signal reconstruction, which addresses the problem of accurate acquisition and reconstruction of sparse images from relatively few random measurements. In accordance with our expectations, comprehensive experiments verify the comparable performance and the remarkable computational and storage advantages of 2D-RP relative to its 1D counterpart. Preliminary steps towards this work were presented at ICIP 2009 [20], where the application of 2D-RP to the classification of sparse images was studied briefly, along with a study of 2D-RP with Gaussian random matrices.

The rest of this paper is organized as follows. Section 2 offers a brief review on 1D-RP and corresponding technical results. 2D-RP and its implications for 2D-signals, finite sets, and infinite sets with low-complexity signal models are discussed in Section 3. Section 4 presents two main applications of 2D-RP and offers detailed performance analysis. In Section 5, these findings are validated through comprehensive experiments with synthetic and real images.


1D random projection

Consider making m linear measurements of 1D-signals in R^n, m < n. Equivalently, we can represent this measurement process in terms of a linear projection onto R^m by an m×n matrix A. Successful statistical inference or stable recovery in R^m then mostly depends on the preservation of the geometric structure of the data after projection [21]. This, in turn, requires a stable embedding of the data in R^m, which is commonly characterized using the following notion of isometry [10], [14].

Definition 1

Baraniuk and Wakin [10, Section 3.2.1]

Given x ∈ R^n, a matrix A ∈ R^(m×n)

2D random projection

Traditionally, to collect a set of linear measurements of a 2D-signal (image), columns of the 2D-signal are first stacked into a large column vector. This so-called vector space model for signal processing [16], however, ignores the intrinsic row/column-wise structure of the 2D-signal and, even for moderately sized signals, involves prohibitive computational and memory requirements for collecting linear measurements and for applying statistical inference and reconstruction algorithms after
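A common two-sided form of 2D-RP applies a small random matrix on each side of the image, Y = A X Bᵀ, which is algebraically equivalent to a 1D-RP of the stacked vector by the Kronecker product B ⊗ A. The sketch below (dimensions and matrix names are illustrative assumptions, not the paper's exact construction) verifies this identity and shows the storage gap between the two views:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 64, 16                      # image side length and sketch side length
X = rng.standard_normal((n, n))    # an n-by-n "image"

# Two small projection matrices instead of one (m*m)-by-(n*n) matrix.
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))

Y = A @ X @ B.T                    # 2D random projection: an m-by-m sketch

# Equivalent vector-space-model projection uses kron(B, A), via the
# identity vec(A X B^T) = (B kron A) vec(X) with column-major vec.
big = np.kron(B, A)                          # (m*m)-by-(n*n)
y_vec = big @ X.flatten(order="F")
assert np.allclose(y_vec, Y.flatten(order="F"))

# Storage: big has m^2 * n^2 = 1,048,576 entries; A and B together
# have only 2*m*n = 2,048.
```

This degrees-of-freedom reduction is precisely the memory/computation gain discussed in the text; the flip side, as the paper notes, is that more measurements are needed for the same accuracy.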

Applications of 2D random projection

In this section, we use 2D-RP in the context of two representative applications. First, as an example of low-complexity inference tasks, we consider the problem of 2D compressive classification, which is concerned with image classification based on relatively few 2D random measurements. In particular, we study the problem of multiple hypothesis testing based on (possibly noisy) 2D random measurements. Detailed theoretical analysis along with derivation of an error bound for an important
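One natural instantiation of such a multiple-hypothesis test is a minimum-distance classifier in the measurement domain: since 2D-RP approximately preserves distances, the nearest candidate after projection is, with high probability, the true one. The sketch below is our own minimal illustration under assumed dimensions and noise level, not the paper's 2D-CC procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, num_classes = 32, 8, 5

# Candidate images (the hypotheses) and the two 2D projection matrices.
candidates = [rng.standard_normal((n, n)) for _ in range(num_classes)]
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, n)) / np.sqrt(m)

def project(X):
    return A @ X @ B.T            # 2D random measurements (m-by-m)

# Observe noisy 2D random measurements of the true image.
true_class = 3
Y = project(candidates[true_class]) + 0.01 * rng.standard_normal((m, m))

# Minimum-distance rule: concentration of measure keeps the projected
# inter-class distances large relative to the measurement noise.
errors = [np.linalg.norm(Y - project(C)) for C in candidates]
predicted = int(np.argmin(errors))
```

The classifier never forms the n×n images' stacked vectors; all distances are computed on m×m sketches.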

Experiments

In this section, the effectiveness of 2D-RP is demonstrated via comprehensive experiments with synthetic and real images. First, we evaluate the performance of 2D-CC (Section 4) for multiple hypothesis testing in databases of synthetically generated random images and real retinal images. Second, the successful application of 2D-SL0 (Section 4) to synthetic random images illustrates the advantages of 2D-RP in the context of sparse image reconstruction. Our experiments are performed in MATLAB
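The paper's 2D-SL0 algorithm is not reproduced here, but its flavor can be conveyed by a minimal smoothed-ℓ0 sketch adapted to the two-sided measurement model Y = A X Bᵀ. All step sizes, schedules, and dimensions below are our own illustrative choices; the feasibility projection is exact because A has full row rank and Bᵀ full column rank:

```python
import numpy as np

rng = np.random.default_rng(3)

n, m, k = 32, 16, 10               # image side, sketch side, sparsity

# A k-sparse n-by-n image and its two-sided random measurements.
X_true = np.zeros((n, n))
idx = rng.choice(n * n, size=k, replace=False)
X_true.flat[idx] = rng.standard_normal(k)

A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
Y = A @ X_true @ B.T

A_pinv, Bt_pinv = np.linalg.pinv(A), np.linalg.pinv(B.T)

def feasible(X):
    # Project X onto the affine set {X : A X B^T = Y}.
    return X - A_pinv @ (A @ X @ B.T - Y) @ Bt_pinv

# Smoothed-l0 outline: start from the minimum-norm solution, take
# gradient steps on a Gaussian surrogate of the l0 "norm", and
# gradually shrink the smoothing parameter sigma.
X = feasible(np.zeros((n, n)))     # minimum-norm feasible point
mu = 2.0
for sigma in [1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01]:
    for _ in range(5):
        X = X - mu * X * np.exp(-X**2 / (2 * sigma**2))
        X = feasible(X)

rel_err = np.linalg.norm(X - X_true) / np.linalg.norm(X_true)
```

The iterate remains exactly feasible after every projection step; how closely it recovers X_true depends on the sparsity level and the measurement budget, which the paper's experiments explore systematically.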

Conclusions

In this paper, the random projection technique was extended to directly leverage the matrix structure of images. We then studied the proposed 2D-RP and its implications for signals, arbitrary finite sets, and infinite sets with low-complexity signal models. These findings were then used to develop 2D-CC for image classification, along with an error bound for an important special case. The proposed classifier proved to be successful in experiments with arbitrary finite sets of synthetic and real

Acknowledgment

A. Eftekhari gratefully acknowledges Alejandro Weinstein for his help with preparing the revised document.

References (50)

  • J. Haupt, R. Castro, R. Nowak, G. Fudge, A. Yeh, Compressive sampling for signal classification, in: Proceedings of...
  • M. Davenport et al., The smashed filter for compressive classification and target recognition
  • P. Agarwal et al., Embeddings of surfaces, curves, and moving points in Euclidean space
  • M. Vetterli et al., Sampling signals with finite rate of innovation, IEEE Transactions on Signal Processing (2002)
  • E. Candes et al., People hearing without listening: an introduction to compressive sampling, IEEE Signal Processing Magazine (2008)
  • J. Haupt et al., Compressive sampling for signal detection
  • J. Lin et al., Dimensionality reduction by random projection and latent semantic indexing
  • R. Baraniuk et al., Random projections of smooth manifolds, Foundations of Computational Mathematics (2009)
  • E. Bingham et al., Random projection in dimensionality reduction: applications to image and text data
  • M. Talagrand, Concentration of measure and isoperimetric inequalities in product spaces, Publications Mathematiques de l'IHES (1995)
  • S. Dasgupta et al., An elementary proof of the Johnson–Lindenstrauss lemma, Random Structures and Algorithms (2002)
  • R. Baraniuk et al., A simple proof of the restricted isometry property for random matrices, Constructive Approximation (2008)
  • A. Ghaffari et al., Sparse decomposition of two dimensional signals
  • J. Ye, Generalized low rank approximations of matrices, Machine Learning (2005)
  • J. Yang et al., Two-dimensional PCA: a new approach to appearance-based face representation and recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence (2004)

This work has been partially funded by the Iran Telecom Research Center (ITRC) and the Iran National Science Foundation (INSF).
