Towards robust subspace recovery via sparsity-constrained latent low-rank representation
Introduction
In recent years, exploring the subspace structure of noisy data has emerged as an important topic in visual analysis, attracting much attention from both academia and industry [1], [2], [3], [4]. The central goal is to capture the true subspace structure by eliminating noisy entries from the original data, thereby facilitating many real-world tasks, e.g., subspace clustering, classification and dimensionality reduction. To solve this problem, many promising approaches have been developed from various perspectives, such as robust principal component analysis (RPCA) [1], sparse representation (SR) [5], [6] and low-rank representation (LRR) [7]. In this work, we focus on low-rank based methods, as they have found wide application in robust recovery analysis [2], subspace segmentation [8], matrix completion [9], etc.
Generally, low-rank based representation methods fall under the popular topic of low-rank approximation [10], [11], [12]. LRR is a typical method: it seeks the lowest-rank representation among all candidates that express the data points as linear combinations of the bases in a dictionary [7]. Mathematically, for a given observation matrix $X \in \mathbb{R}^{d \times n}$ consisting of $n$ data points, LRR finds the low-rank coefficient matrix $Z$ by solving

$$\min_{Z} \|Z\|_{*} \quad \text{s.t.} \quad X = AZ,$$

where $\|\cdot\|_{*}$ denotes the nuclear norm and $A$ is a dictionary. In LRR, the dictionary is chosen as the observation matrix itself ($A = X$), which degrades performance when the observations are insufficient and grossly corrupted. To address this issue, Latent LRR (LLRR) employs both observed and hidden data to construct the dictionary [13]; it assumes all data points are sampled from the same collection of low-rank subspaces, and the hidden effects can be approximately recovered by solving a nuclear norm minimization problem. However, an important yet heuristic fact is not respected by LLRR: each data point in a low-rank subspace can be represented by a linear combination of only a small subset of bases in the dictionary. In other words, the learnt low-rank representation should simultaneously be a sparse representation. For example, for a given data point $x_i$, the coefficient vector $z_i$ in the recovered term $Az_i$ is actually sparse.
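The nuclear norm term $\|Z\|_{*}$ in problems of this form is typically handled through its proximal operator, the singular value thresholding (SVT) operator. The following numpy sketch is illustrative only (it is not the paper's implementation); it shows how soft-thresholding the singular values of a noisy matrix recovers a low effective rank:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.

    Returns argmin_X 0.5 * ||X - M||_F^2 + tau * ||X||_*.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold the singular values
    return U @ np.diag(s_shrunk) @ Vt

# Example: thresholding a slightly noisy low-rank matrix restores its low rank.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # rank-3 signal
X = L + 0.01 * rng.standard_normal((50, 40))                     # small dense noise
Z = svt(X, tau=1.0)
print(np.linalg.matrix_rank(Z, tol=1e-6))  # → 3
```

The noise inflates the rank of `X` to full, but its singular values stay far below those of the rank-3 signal, so a single shrinkage step zeroes them out.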
Motivated by progress in sparse learning [5], [14], [15], [16], we propose a novel low-rank based method named Sparse Latent Low-rank representation (SLL) that explicitly imposes a sparsity constraint on the learnt low-rank matrix, yielding a data representation that is simultaneously lowest-rank and sparsest. SLL is fundamentally based on LLRR and thus inherits its advantages: it can model hidden effects to compensate for insufficient sampling of the dictionary, and it can extract salient features from corrupted data. Furthermore, the sparsity constraint enters the objective function of SLL as a regularizer, leading to a more favorable structure of the data space and greater robustness to noise. To optimize the objective function, we adopt the variable splitting strategy [17] and the linearized Alternating Direction Method (ADM) [18], which shares the properties of the Augmented Lagrange Multiplier (ALM) method [19]. Extensive experiments show promising results for the proposed method in comparison with others.
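To make the variable-splitting idea concrete, the sketch below runs a toy consensus ADMM on a simplified self-representation objective with both a nuclear norm and an $\ell_1$ regularizer. This is an assumption-laden illustration, not the authors' SLL algorithm: the latent (hidden-data) term of LLRR is omitted, the equality constraint is relaxed to a least-squares fit, and plain ADMM stands in for the linearized ADM; the function name `sparse_lowrank_rep` is hypothetical.

```python
import numpy as np

def soft(M, t):
    """Elementwise soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def svt(M, t):
    """Singular value thresholding: proximal operator of t * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def sparse_lowrank_rep(X, tau=0.1, lam=0.1, mu=1.0, iters=200):
    """ADMM for  min_Z 0.5||X - XZ||_F^2 + tau||J||_* + lam||S||_1
    subject to the splitting constraints Z = J and Z = S, so the learnt
    representation is pushed to be low-rank (J) and sparse (S) at once."""
    n = X.shape[1]
    Z = np.zeros((n, n))
    J = np.zeros((n, n))
    S = np.zeros((n, n))
    Y1 = np.zeros((n, n))   # dual variable for Z = J
    Y2 = np.zeros((n, n))   # dual variable for Z = S
    XtX = X.T @ X
    A = XtX + 2.0 * mu * np.eye(n)  # fixed left-hand side of the Z-update
    for _ in range(iters):
        # Z-update: quadratic subproblem, closed-form linear solve
        Z = np.linalg.solve(A, XtX + mu * (J + S) - Y1 - Y2)
        J = svt(Z + Y1 / mu, tau / mu)   # low-rank block
        S = soft(Z + Y2 / mu, lam / mu)  # sparse block
        Y1 = Y1 + mu * (Z - J)           # dual ascent steps
        Y2 = Y2 + mu * (Z - S)
    return Z, J, S

# Toy data: 20 points sampled from a 4-dimensional subspace of R^30.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))
Z, J, S = sparse_lowrank_rep(X)
```

Each regularizer gets its own auxiliary variable, so every subproblem reduces to a known proximal step (a linear solve, an SVT, and a soft-threshold); this is the practical payoff of the splitting strategy cited in [17].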
The remainder of this paper is structured as follows. We give a brief review of related work in Section 2. The proposed Sparse Latent Low-rank representation (SLL) method is introduced in Section 3. Section 4 reports the experimental results with some analysis. Finally, we draw conclusions in Section 5.
Section snippets
Related work
This section briefly reviews works closely related to the proposed method. As a hot topic in the pattern recognition and computer vision communities, robust recovery of subspace structures from corrupted data has gained much attention in the last decade. The popularity of this topic originates from two aspects. On the one hand, the collected data points are often grossly corrupted by noise and outliers. On the other hand, it is strongly desirable to recover the clean data in several
Our approach
In this section, we describe our Sparse Latent Low-rank representation (SLL) method for robust recovery of subspace structures of the corrupted data. Besides, we adopt a linearized ADM method to optimize the objective function. We begin with the problem setting.
Experiments
In this section, we investigate the effectiveness of the proposed method through extensive experiments on corrupted data. In particular, we conduct subspace clustering, salient feature extraction and outlier detection on different databases. Promising results validate the effectiveness of our approach.
Conclusion
This paper presents a novel low-rank based method named Sparse Latent Low-rank representation (SLL) for recovering subspace structures of the corrupted data. The motivation comes from the observation that each data point can be represented by only a small subset of the bases in a given dictionary, which also inspires some emerging methods based on sparse representation, e.g., sparse subspace clustering. Our SLL method is primarily based on the latent low-rank representation, which addresses the
Acknowledgments
This work was supported in part by Zhejiang Provincial Natural Science Foundation of China under Grant LQ15F020012 and the National Key Technology Support Program of China under Grant 2012BAI34B03.
References (50)
- Splitting and alternating direction methods, Handbook Numer. Anal. (1990)
- et al., Sparse fixed-rank representation for robust visual analysis, Signal Process. (2015)
- et al., Recognizing architecture styles by hierarchical sparse coding of blocklets, Inform. Sci. (2014)
- et al., Locally discriminative spectral clustering with composite manifold, Neurocomputing (2013)
- et al., Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories, Comput. Vis. Image Underst. (2007)
- E. Candès, X. Li, Y. Ma, J. Wright, Robust principal component analysis? J. ACM 58 (3), ...
- et al., Recovering low-rank and sparse components of matrices from incomplete and noisy observations, SIAM J. Optimiz. (2011)
- et al., Robust recovery of subspace structures by low-rank representation, IEEE Trans. Pattern Anal. Mach. Intell. (2013)
- et al., Exact subspace segmentation and outlier detection by low-rank representation, J. Mach. Learn. Res. – Proc. Track (2012)
- E. Elhamifar, R. Vidal, Sparse subspace clustering, in: Proceedings of the IEEE Conference on Computer Vision and ...
- Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell.
- Matrix completion with noise, Proc. IEEE
- Generalized low rank approximations of matrices, Mach. Learn.
- Low-rank structure learning via nonconvex heuristic recovery, IEEE Trans. Neural Netw. Learn. Syst.
- Generalized low-rank approximations of matrices revisited, IEEE Trans. Neural Netw.
- Sparse representation for computer vision and pattern recognition, Proc. IEEE
- Efficient sparse generalized multiple kernel learning, IEEE Trans. Neural Netw.