Two-dimensional random projection☆
Introduction
The need for efficient collection, storage, and processing of large, high-dimensional data has increased drastically over the past decade. Unfortunately, the high dimensionality of data jeopardizes the performance of inference tasks, due to the so-called “curse of dimensionality” phenomenon [1]. Luckily, dimensionality reduction techniques often alleviate this burden by extracting key low-dimensional information about the original high-dimensional signals, from which we can later infer key properties of the original data. It is therefore desirable to formulate a method that efficiently reduces the dimensionality while preserving as much information from the original data as possible [2]. There are two main scenarios in which dimensionality reduction is successful: (1) Low-complexity inference, where only a small amount of information is required to make an inference about the data. Examples include function estimation, signal detection, and classification [3], [4]. (2) Low-dimensional signal models, in which signals of interest have few degrees of freedom. In fact, it frequently happens in real-world applications that high-dimensional data actually obey some sort of concise low-dimensional model. Examples include signals with finite rate of innovation, manifolds, etc. [5], [6]. While most conventional dimensionality reduction techniques are adaptive and involve nonlinear mappings to preserve certain desirable properties of the data, a linear, non-adaptive technique based on random projections (RPs) of data has recently been introduced [7]. In fact, random projections have been successfully utilized in low-complexity inference tasks, such as classification and estimation [3], [4], [8], [9]. RP has also demonstrated remarkable performance in obtaining a faithful low-dimensional representation of data belonging to low-complexity signal models, as in the acquisition and reconstruction of sparse signals and manifolds [10], [11], [2].
The remarkable properties of RP stem from a simple concentration of measure inequality, which states that, with high probability, the norm of a signal is well preserved under a random dimensionality-reducing projection [12]. This seminal fact allows us to show that in many settings the distinguishing characteristics of a signal can be encoded by a few random measurements. In particular, combining the simple union bound with the above result leads to the Johnson–Lindenstrauss (JL) lemma, which implies that the geometric structure of a point cloud is preserved under a random dimensionality-reducing projection [13]. As shown in [14], these results can be further extended to infinite sets with low-complexity geometrical structure, such as sparse signals and manifolds. Despite these impressive results, the application of conventional RP to high-dimensional data, such as images and videos, faces severe computational and memory difficulties, due to the so-called vector space model [15], [16], [17]. Under this model, each datum is modeled as a vector, i.e., columns (or rows) of each two-dimensional signal (2D-signal) are initially stacked into a large vector, as a result of which the row/column-wise structure of the image is ignored and storage and computational requirements are drastically increased. To alleviate the cost of the conventional RP (1D-RP) scheme, the so-called two-dimensional random projection (2D-RP) has recently been proposed, which directly leverages the matrix structure of images and represents each datum as a matrix instead of a vector [15]. In fact, similar ideas have previously appeared, for instance, in the context of 2D principal component analysis (2D-PCA) [18] and 2D linear discriminant analysis (2D-LDA) [19], in which extensions of conventional PCA and LDA on 1D-signals to the image domain have demonstrated substantial improvements in memory and computational efficiency.
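As an illustration of this concentration phenomenon, the following minimal Python sketch (the dimensions, seed, and scaling are our own illustrative assumptions, not taken from the paper) projects a high-dimensional vector with an i.i.d. Gaussian matrix scaled by 1/√m and checks that its norm is approximately preserved:

```python
# Minimal sketch: norm preservation under a Gaussian random projection.
# n, m, and the seed are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 1_000               # ambient and projected dimensions (assumed)
x = rng.standard_normal(n)         # an arbitrary high-dimensional signal

# Gaussian RP matrix scaled so that E[||Ax||^2] = ||x||^2.
A = rng.standard_normal((m, n)) / np.sqrt(m)

ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
print(f"norm ratio after projection: {ratio:.3f}")   # close to 1 w.h.p.
```

With m on the order of a thousand, the ratio typically deviates from 1 by only a few percent; the JL lemma then extends this single-vector guarantee to all pairwise distances in a point cloud via a union bound.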
In this paper, the idea of 2D-RP is studied and the corresponding concentration properties are closely analyzed. It is observed that the desirable properties of 1D-RP extend to the 2D analogue, with significant savings in computational and storage requirements. This gain, essentially due to the reduction in the number of degrees of freedom of the projection matrices, comes at the cost of extra measurements to obtain the same accuracy. 2D-RP is then applied to two important applications: (1) 2D compressive classification, which is concerned with the classification of images based on random measurements provided by 2D-RP. In particular, we consider multiple hypothesis testing given only random measurements of possibly noisy images, and (2) sparse 2D-signal reconstruction, which addresses the problem of accurate acquisition and reconstruction of sparse images from relatively few random measurements. In accordance with our expectations, comprehensive experiments verify the comparable performance and remarkable computational and storage advantages of 2D-RP compared to its 1D counterpart. Preliminary steps towards this work were presented at ICIP 2009 [20], in which the application of 2D-RP to the classification of sparse images was studied briefly, along with a study of 2D-RP with Gaussian random matrices.
The rest of this paper is organized as follows. Section 2 offers a brief review on 1D-RP and corresponding technical results. 2D-RP and its implications for 2D-signals, finite sets, and infinite sets with low-complexity signal models are discussed in Section 3. Section 4 presents two main applications of 2D-RP and offers detailed performance analysis. In Section 5, these findings are validated through comprehensive experiments with synthetic and real images.
Section snippets
1D random projection
Consider making m linear measurements of 1D-signals in ℝⁿ, with m < n. Equivalently, we can represent this measurement process as a linear projection onto ℝᵐ by an m×n matrix A. Successful statistical inference or stable recovery in ℝᵐ then mostly depends on the preservation of the geometric structure of the data after projection [21]. This, in turn, requires a stable embedding of the data in ℝᵐ, which is commonly characterized using the following notion of isometry [10], [14]. Definition 1 (Baraniuk and Wakin [10, Section 3.2.1]). Given ε ∈ (0, 1), a matrix A ∈ ℝ^(m×n) is said to provide an ε-stable embedding of a pair of sets (U, V) in ℝⁿ if (1 − ε)‖u − v‖² ≤ ‖A(u − v)‖² ≤ (1 + ε)‖u − v‖² for all u ∈ U and v ∈ V.
2D random projection
Traditionally, to collect a set of linear measurements of a 2D-signal (image), columns of the 2D-signal are first stacked into a large column vector. This so-called vector space model for signal processing [16], however, ignores the intrinsic row/column-wise structure of the 2D-signal and, even for moderately sized signals, involves prohibitive computational and memory requirements for collecting linear measurements and for applying statistical inference and reconstruction algorithms after projection.
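A common bilinear form of 2D-RP measures an image X directly as Y = L X Rᵀ with two small random matrices. The sketch below (shapes and scaling are our assumptions, and the paper's exact construction may differ) contrasts its storage cost with the equivalent vectorized 1D-RP:

```python
# Sketch of 2D-RP in the bilinear form Y = L X R^T (shapes are assumptions).
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 64, 64    # image size (assumed)
m1, m2 = 16, 16    # measurement size (assumed)

X = rng.standard_normal((n1, n2))
L = rng.standard_normal((m1, n1)) / np.sqrt(m1)   # left projection
R = rng.standard_normal((m2, n2)) / np.sqrt(m2)   # right projection

Y = L @ X @ R.T    # m1 x m2 matrix of 2D random measurements

# The same measurements written as a 1D-RP on the vectorized image:
# row-major vec(L X R^T) = (L kron R) @ row-major vec(X).
assert np.allclose(np.kron(L, R) @ X.ravel(), Y.ravel())

# Storage: two small factors versus one huge (Kronecker-structured) matrix.
params_2d = L.size + R.size            # 2 * 16 * 64 = 2,048 entries
params_1d = (m1 * m2) * (n1 * n2)      # 256 * 4096 = 1,048,576 entries
print(Y.shape, params_2d, params_1d)
```

Here the 2D scheme stores 512× fewer entries than the unstructured 1D-RP with the same number of measurements; the price, as discussed above, is that extra measurements are needed to reach the same accuracy.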
Applications of 2D random projection
In this section, we use 2D-RP in the context of two representative applications. First, as an example of low-complexity inference tasks, we consider the problem of 2D compressive classification, which is concerned with image classification based on relatively few 2D random measurements. In particular, we study the problem of multiple hypothesis testing based on (possibly noisy) 2D random measurements. Detailed theoretical analysis, along with the derivation of an error bound for an important special case, is also provided.
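As a minimal sketch of the compressive classification idea (this is not the paper's 2D-CC algorithm itself; the templates, noise level, and minimum-distance rule are our assumptions), one can compare the 2D measurements of an observed image against the projected class templates:

```python
# Hedged sketch: multiple hypothesis testing from 2D random measurements.
# Templates, noise level, and the minimum-distance rule are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n1 = n2 = 32       # image size (assumed)
m1 = m2 = 8        # measurement size (assumed)

# Three hypothetical class templates (random "images").
templates = [rng.standard_normal((n1, n2)) for _ in range(3)]

L = rng.standard_normal((m1, n1)) / np.sqrt(m1)
R = rng.standard_normal((m2, n2)) / np.sqrt(m2)

# Noisy observation drawn from class 1, seen only through 2D-RP.
X = templates[1] + 0.1 * rng.standard_normal((n1, n2))
Y = L @ X @ R.T

# Minimum-distance classification in the measurement domain
# (np.linalg.norm on a matrix is the Frobenius norm).
dists = [np.linalg.norm(Y - L @ T @ R.T) for T in templates]
guess = int(np.argmin(dists))
print("decided class:", guess)
```

Because the bilinear projection approximately preserves Frobenius distances, the nearest projected template coincides, with high probability, with the true class whenever the noise is small relative to the separation between templates.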
Experiments
In this section, the effectiveness of 2D-RP is demonstrated via comprehensive experiments with synthetic and real images. First, we evaluate the performance of 2D-CC (Section 4) for multiple hypothesis testing in databases of synthetically generated random images and real retinal images. Second, the successful application of 2D-SL0 (Section 4) to synthetic random images illustrates the advantages of 2D-RP in the context of sparse image reconstruction. Our experiments are performed in MATLAB.
Conclusions
In this paper, the random projection technique was extended to directly leverage the matrix structure of images. We then studied the proposed 2D-RP and its implications for 2D-signals, arbitrary finite sets, and infinite sets with low-complexity signal models. These findings were then used to develop 2D-CC for image classification, along with an error bound for an important special case. The proposed classifier proved to be successful in experiments with arbitrary finite sets of synthetic and real images.
Acknowledgment
A. Eftekhari gratefully acknowledges Alejandro Weinstein for his help with preparing the revised document.
References (50)
- et al., Compressive sensing for subsurface imaging using ground penetrating radar, Signal Processing (2009)
- Database-friendly random projections: Johnson–Lindenstrauss with binary coins, Journal of Computer and System Sciences (2003)
- et al., Two dictionaries matching pursuit for sparse decomposition of signals, Signal Processing (2006)
- The restricted isometry property and its implications for compressed sensing, Comptes Rendus Mathématique (2008)
- et al., Underdetermined blind source separation using sparse representations, Signal Processing (2001)
- et al., Extensions of compressed sensing, Signal Processing (2006)
- et al., A simple test to check the optimality of a sparse signal approximation, Signal Processing (2006)
- et al., Algorithms for simultaneous sparse approximation. Part I: greedy pursuit, Signal Processing (2006)
- et al., Approximate nearest neighbors: towards removing the curse of dimensionality
- et al., Random projections for manifold learning
- The smashed filter for compressive classification and target recognition
- Embeddings of surfaces, curves, and moving points in Euclidean space
- Sampling signals with finite rate of innovation, IEEE Transactions on Signal Processing
- People hearing without listening: an introduction to compressive sampling, IEEE Signal Processing Magazine
- Compressive sampling for signal detection
- Dimensionality reduction by random projection and latent semantic indexing
- Random projections of smooth manifolds, Foundations of Computational Mathematics
- Random projection in dimensionality reduction: applications to image and text data
- Concentration of measure and isoperimetric inequalities in product spaces, Publications Mathématiques de l'IHÉS
- An elementary proof of the Johnson–Lindenstrauss lemma, Random Structures and Algorithms
- A simple proof of the restricted isometry property for random matrices, Constructive Approximation
- Sparse decomposition of two dimensional signals
- Generalized low rank approximations of matrices, Machine Learning
- Two-dimensional PCA: a new approach to appearance-based face representation and recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence
☆ This work has been partially funded by the Iran Telecom Research Center (ITRC) and the Iran National Science Foundation (INSF).