Towards robust subspace recovery via sparsity-constrained latent low-rank representation

https://doi.org/10.1016/j.jvcir.2015.06.012

Highlights

  • We present a Sparse Latent Low-rank representation approach for robust visual recovery.

  • This approach constructs the dictionary using both observed and hidden data.

  • A low-rank representation with enhanced sparsity can be derived.

  • Extensive experiments have confirmed the superiority of the proposed method.

Abstract

Robust recovery of subspace structures from noisy data has recently received much attention in visual analysis. To achieve this goal, previous works have developed a number of low-rank based methods, among which Low-Rank Representation (LRR) is a typical one. As a refined variant, Latent LRR constructs the dictionary using both observed and hidden data to relieve the insufficient-sampling problem. However, these methods fail to exploit the observation that each data point can be represented by only a small subset of atoms in a dictionary. Motivated by this, we present the Sparse Latent Low-rank representation (SLL) method, which explicitly imposes a sparsity constraint on Latent LRR to encourage sparse representations. In this way, each data point is represented by selecting only a few points from the same subspace. The objective function is solved by the linearized Augmented Lagrangian Multiplier method. Favorable experimental results on subspace clustering, salient feature extraction and outlier detection verify the promising performance of our method.

Introduction

In recent years, exploring the subspace structure of noisy data has emerged as an important topic in visual analysis, attracting much attention from both academia and industry [1], [2], [3], [4]. The central goal is to capture the true subspace structure by eliminating noisy entries from the original data, thus facilitating many real-world tasks, e.g., subspace clustering, classification and dimensionality reduction. To solve this problem, many promising approaches have been developed from various perspectives, such as robust principal component analysis (RPCA) [1], sparse representation (SR) [5], [6] and low-rank representation (LRR) [7]. In this work, we focus on low-rank based methods, as they have found wide application in robust recovery analysis [2], subspace segmentation [8], matrix completion [9], etc.

Generally, low-rank based representation methods fall under the popular topic of low-rank approximation [10], [11], [12]. LRR is a typical example: it seeks the lowest-rank representation among all candidates that express the data points as linear combinations of the bases in a dictionary [7]. Mathematically, for a given observation matrix $X \in \mathbb{R}^{m \times n}$ consisting of $n$ data points, LRR aims to find a low-rank matrix $Z \in \mathbb{R}^{n \times n}$ by solving

$$\min_{Z} \operatorname{rank}(Z), \quad \text{s.t.}\; X = AZ,$$

where $A \in \mathbb{R}^{m \times n}$ is a dictionary. In LRR, the dictionary $A$ is chosen as the observation matrix $X$ itself, which degrades performance when the observations are insufficient or grossly corrupted. To address this issue, Latent LRR (LLRR) employs both observed and hidden data to construct the dictionary [13]; it assumes that all data points are sampled from the same collection of low-rank subspaces, so that the hidden effects can be approximately recovered by solving a nuclear norm minimization problem. However, LLRR does not respect an important empirical fact: each data point in a low-rank subspace can be represented as a linear combination of only a small subset of the bases in the dictionary. In other words, the learnt low-rank representation should simultaneously be a sparse representation. For example, for a given data point $x_i$, the vector $z_i$ in the recovered term $Az_i$ is actually sparse.
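To make the LRR program concrete, the following minimal sketch (our illustration, not code from the paper) demonstrates the well-known closed-form solution of the noiseless case with $A = X$: relaxing $\operatorname{rank}(Z)$ to the nuclear norm $\|Z\|_*$, the minimizer subject to $X = XZ$ is $Z^* = VV^\top$ from the skinny SVD $X = U\Sigma V^\top$, and for data drawn from independent subspaces this $Z^*$ is block-diagonal, revealing subspace membership.

```python
import numpy as np

# Illustration (not the paper's code): with dictionary A = X and clean data,
# relaxing rank(Z) to the nuclear norm ||Z||_* gives the closed-form
# minimizer Z* = V V^T, where X = U S V^T is the skinny SVD of X. For data
# from independent subspaces, Z* is block-diagonal up to permutation, so it
# directly reveals which columns share a subspace.

rng = np.random.default_rng(0)

def sample_subspace(dim_ambient, dim_sub, n_points):
    basis = rng.standard_normal((dim_ambient, dim_sub))
    coeffs = rng.standard_normal((dim_sub, n_points))
    return basis @ coeffs

# 20 points from each of two independent 3-dimensional subspaces in R^50.
X = np.hstack([sample_subspace(50, 3, 20), sample_subspace(50, 3, 20)])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-10))       # numerical rank of X
V = Vt[:r].T
Z = V @ V.T                      # minimal nuclear-norm solution of X = XZ

# The cross-subspace block vanishes up to machine precision.
print(np.abs(Z[:20, 20:]).max())
```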

Motivated by progress in sparse learning [5], [14], [15], [16], we propose a novel low-rank based method named Sparse Latent Low-rank representation (SLL) that explicitly imposes a sparsity constraint on the learnt low-rank matrix, yielding a data representation that is simultaneously low-rank and sparse. SLL is fundamentally based on LLRR and thus inherits its advantages: it can model hidden effects to compensate for insufficient sampling of the dictionary, and it can extract salient features from corrupted data. The sparsity constraint enters the objective function of SLL as a regularizer, inducing a more favorable structure on the data space and greater robustness to noise. To optimize the objective function, we adopt the variable splitting strategy [17] and the linearized Alternating Direction Method (ADM) [18], which shares properties with the Augmented Lagrange Multiplier (ALM) method [19]. Extensive experiments show promising results for the proposed method in comparison with others.
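The front matter does not spell out the SLL objective. Based on the standard LLRR program plus the sparsity regularizer described above, a plausible reconstruction (an assumption on our part, not quoted from the paper) is

$$\min_{Z,L,E}\; \|Z\|_* + \|L\|_* + \beta\|Z\|_1 + \lambda\|E\|_1, \quad \text{s.t.}\; X = XZ + LX + E,$$

where $Z$ is the low-rank representation on the observed data, $L$ captures the hidden effects (salient features), $E$ models gross corruptions, and $\beta$, $\lambda$ balance the sparsity and noise terms.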

The remainder of this paper is structured as follows. We give a brief review on the related work in Section 2. The proposed Sparse Latent Low-rank representation (SLL) method is introduced in Section 3. Section 4 reports the experimental results with some analysis. In the end, we draw a conclusion in Section 5.

Section snippets

Related work

This section briefly reviews works closely related to the proposed method. As a hot topic in the pattern recognition and computer vision communities, robust recovery of subspace structures from corrupted data has gained much attention over the last decade. The popularity of this topic originates from two aspects. On the one hand, collected data points are often grossly corrupted by noise and outliers. On the other hand, it is strongly desirable to recover the clean data in several

Our approach

In this section, we describe our Sparse Latent Low-rank representation (SLL) method for robust recovery of the subspace structures of corrupted data, and we adopt a linearized ADM method to optimize its objective function. We begin with the problem setting.
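The snippet above only names the solver, but its two core building blocks are standard. Linearized ADM for nuclear-norm plus $\ell_1$ objectives alternates two proximal maps, shown below in a minimal sketch (an illustration of the generic updates, not the paper's exact algorithm):

```python
import numpy as np

# Minimal sketch of the two proximal operators that a linearized ADM solver
# alternates for nuclear-norm + l1 objectives. The paper's exact update
# order, penalty schedule and stopping criterion are not reproduced here.

def svt(M, tau):
    """Singular value thresholding: the proximal map of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(M, tau):
    """Elementwise shrinkage: the proximal map of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

# A linearized-ADM iteration would, roughly:
#   1) take a gradient step on the smooth part of the augmented Lagrangian,
#   2) apply svt() to variables carrying nuclear-norm penalties,
#   3) apply soft_threshold() to variables carrying l1 penalties,
#   4) update the multipliers and (optionally) the penalty parameter.
```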

Experiments

In this section, we investigate the effectiveness of the proposed method through extensive experiments on corrupted data. In particular, we conduct subspace clustering, salient feature extraction and outlier detection on different databases. Promising results validate the effectiveness of our approach.
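The experimental protocol is not detailed in this snippet. For the subspace clustering experiments, the common LRR-style pipeline (assumed here, not confirmed by the snippet) converts the learned representation into a symmetric affinity matrix and applies spectral clustering:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_representation(Z, n_clusters):
    """LRR-style post-processing: build a symmetric affinity from the
    learned n x n representation Z, then run spectral clustering."""
    W = 0.5 * (np.abs(Z) + np.abs(Z).T)  # symmetrize |Z| as an affinity
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(W)
```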

Conclusion

This paper presents a novel low-rank based method named Sparse Latent Low-rank representation (SLL) for recovering the subspace structures of corrupted data. The motivation comes from the observation that each data point can be represented by only a small subset of the bases in a given dictionary, an observation that also inspires emerging methods based on sparse representation, e.g., sparse subspace clustering. Our SLL method is primarily based on the latent low-rank representation, which addresses the

Acknowledgments

This work was supported in part by Zhejiang Provincial Natural Science Foundation of China under Grant LQ15F020012 and the National Key Technology Support Program of China under Grant 2012BAI34B03.

References (50)

  • J. Wright et al., Robust face recognition via sparse representation, IEEE Trans. Pattern Anal. Mach. Intell. (2009)
  • G. Liu, Z. Lin, Y. Yu, Robust subspace segmentation by low-rank representation, in: Proceedings of the 27th...
  • S. Wei, Z. Lin, Analysis and improvement of low rank representation for subspace segmentation, ...
  • E. Candès et al., Matrix completion with noise, Proc. IEEE (2010)
  • J. Ye, Generalized low rank approximations of matrices, Mach. Learn. (2005)
  • Y. Deng et al., Low-rank structure learning via nonconvex heuristic recovery, IEEE Trans. Neural Netw. Learn. Syst. (2013)
  • J. Liu et al., Generalized low-rank approximations of matrices revisited, IEEE Trans. Neural Netw. (2010)
  • G. Liu, S. Yan, Latent low-rank representation for subspace segmentation and feature extraction, in: Proceedings of the...
  • S. Yan, H. Wang, Semi-supervised learning by sparse representation, in: Proceedings of the SIAM International Conference on...
  • J. Wright et al., Sparse representation for computer vision and pattern recognition, Proc. IEEE (2010)
  • H. Yang et al., Efficient sparse generalized multiple kernel learning, IEEE Trans. Neural Netw. (2011)
  • Z. Lin, R. Liu, Z. Su, Linearized alternating direction method with adaptive penalty for low-rank representation, in:...
  • Y. Shen, Z. Wen, Y. Zhang, Augmented Lagrangian alternating direction method for matrix separation based on low-rank...
  • P. Favaro, R. Vidal, A. Ravichandran, A closed form solution to robust subspace estimation and clustering, in:...
  • X. Shen, Y. Wu, A unified approach to salient object detection via low rank matrix recovery, in: Proceedings of the...