Signal Processing

Volume 110, May 2015, Pages 222-231

Sparse fixed-rank representation for robust visual analysis

https://doi.org/10.1016/j.sigpro.2014.08.026

Highlights

  • We present a sparse fixed-rank representation approach for robust visual analysis.

  • We impose the sparsity constraint on the learnt low-rank representation.

  • We model the corruptions by enforcing a sparse regularizer.

  • Its efficacy is validated by empirical studies on synthetic and real-world data.

Abstract

Robust visual analysis plays an important role in a great variety of computer vision tasks, such as motion segmentation, pose and face analysis. One promising real-world application is to recover a clean data representation from corrupted data points for subspace segmentation. Recently, low-rank based methods such as Low-Rank Representation (LRR) and Fixed-Rank Representation (FRR) have gained considerable popularity in solving this problem. Both learn a low-rank data matrix and a sparse error matrix, and each new data representation is learnt using the whole dictionary covering all data points. However, they neglect the fact, well established in sparse learning, that each point can be represented by a linear combination of only a few other points w.r.t. a given dictionary. Motivated by this, we explicitly impose a sparsity constraint on the learnt low-rank representation. For efficiency, we adopt a fixed-rank scheme that minimizes the Frobenius norm of the new representation. Hence, in this paper we propose a novel Sparse Fixed-Rank Representation (SFRR) approach for robust visual analysis. Specifically, we model the corruptions by enforcing a sparse regularizer. In this way, we robustly obtain a new data representation that is both low-rank and sparse. Furthermore, we present a generalized alternating direction method (ADM) to optimize the objective function. Extensive experiments on both synthetic and real-world databases demonstrate the effectiveness and robustness of the proposed method.

Introduction

In many real-world applications, the available data collections are often corrupted by noise and outliers, which are difficult to handle, and the problem becomes more challenging as the corruptions grow heavier. To address this issue, robust visual analysis has attracted much attention in the computer vision and signal processing communities [1], [2]. Its basic goal is to obtain satisfactory visual analysis results regardless of the corruptions present in the data points, and it has seen wide application in motion segmentation and face analysis [3], [4]. One of the popular tasks, and the focus of this work, is to recover the clean data representation from a corrupted data collection for subspace segmentation. In brief, we aim to suppress the noise and outliers as much as possible so as to better segment the data points into their respective subspaces. To this end, several competitive methods have been developed, e.g., Robust Principal Component Analysis (RPCA) [1], [4], Sparse Subspace Clustering (SSC) [5] and Low-Rank Representation (LRR) [6].

Previous studies have shown that a given data matrix can be well approximated by a low-rank representation [7], [8], [9], which has found successful applications in robust visual analysis [6], [10]. Thus, we concentrate on low-rank based methods. Given a corrupted data matrix $X \in \mathbb{R}^{d \times n}$ with each sample stacked in a column, low-rank methods often divide it into two parts, i.e., $X = AZ + E$, where $A \in \mathbb{R}^{d \times n}$ is a dictionary, $Z \in \mathbb{R}^{n \times n}$ is the new data representation and $E \in \mathbb{R}^{d \times n}$ is the corruption term. Specifically, LRR finds the lowest-rank representation $Z$ by minimizing its nuclear norm $\|Z\|_*$ [6]. Since the nuclear norm requires Singular Value Decompositions (SVDs), which usually incur a high computational cost, the Fixed-Rank Representation (FRR) method [10] was proposed, using the matrix factorization idea [11], [12] to avoid them. Nevertheless, these methods only consider the low-rankness of $Z$ and neglect the fact that each data point can be represented by a linear combination of only a few other points w.r.t. the whole dictionary $A$, as shown in many applications of sparse learning [5], [13]. More specifically, for a single data point $x$, it is inappropriate to use all samples to construct its new representation $z$, because samples from subspaces other than the correct one may introduce harmful redundancy. Therefore, it is more sensible to use only a subset of the atoms in $A$ to represent $x$.
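
To make this concrete, the following minimal sketch (our illustration, not code from the paper; all names are ours) builds data from a union of three independent subspaces and verifies that the block-diagonal self-expressive representation is simultaneously low-rank and sparse, which is exactly the structure the discussion above motivates:

```python
# Illustrative sketch: data from a union of subspaces admits a
# representation Z that is both low-rank and sparse (block-diagonal).
import numpy as np

rng = np.random.default_rng(0)
d, r, n_per, k = 30, 3, 20, 3   # ambient dim, subspace dim, points/subspace, #subspaces
bases = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(k)]
X = np.hstack([U @ rng.standard_normal((r, n_per)) for U in bases])

n = X.shape[1]
Z = np.zeros((n, n))
for j in range(k):
    blk = slice(j * n_per, (j + 1) * n_per)
    Xj = X[:, blk]
    # least-squares self-representation inside each subspace block
    Z[blk, blk] = np.linalg.pinv(Xj) @ Xj

print(np.linalg.matrix_rank(Z))         # <= k*r = 9, far below n = 60
print(np.count_nonzero(Z) / Z.size)     # ~1/k of entries nonzero: sparse
print(np.allclose(X @ Z, X))            # self-expressiveness: X = X Z
```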

Motivated by the above analysis, we explicitly impose a sparsity constraint on the low-rank representation to promote sparsity. Moreover, we adopt a fixed-rank strategy similar to [10], minimizing the Frobenius norm of the new data representation to avoid the high computational complexity of SVD computations. Therefore, in this paper, we propose a novel method called Sparse Fixed-Rank Representation (SFRR) for robust visual analysis. It learns a new data representation that is both sparse and low-rank, so that the local structure of each data subspace and the global structure of the whole data are jointly respected. Since negative entries of the representation lack a sensible physical interpretation in vision tasks, we add a nonnegativity constraint on the learnt data representation; such nonnegativity has been reported to be more consistent with visual data and to lead to better data representations [14], [15]. Besides, we model the corruptions by enforcing a sparse regularizer with the $\ell_{2,1}$- or $\ell_1$-norm. To optimize the objective function, we develop a generalized alternating direction method (ADM) [16], [17], [18]. Experiments on both synthetic and real-world databases for subspace segmentation and outlier detection justify the effectiveness and robustness of the proposed method.
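
Combining these ingredients, a formulation consistent with the description above can be written as follows (our reconstruction for illustration; the precise objective and the fixed-rank factorization are detailed in Section 3):

\[
\min_{Z,\,E}\; \|Z\|_F^2 + \lambda_1 \|Z\|_1 + \lambda_2 \|E\|_{2,1}
\quad \text{s.t.} \quad X = AZ + E,\;\; Z \ge 0,
\]

where the Frobenius term realizes the fixed-rank scheme in place of the nuclear norm, $\|Z\|_1$ and $Z \ge 0$ enforce sparsity and nonnegativity of the representation, and $\|E\|_{2,1}$ (or $\|E\|_1$) models sample-specific (or entry-wise) corruptions.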

The remainder of this paper is organized as follows. Section 2 gives a brief review of some related works. Section 3 introduces the proposed Sparse Fixed-Rank Representation approach. Experimental results are reported in Section 4 with rigorous analysis. Finally, the concluding remarks are provided in Section 5.

Section snippets

Related work

In this section, we briefly review some works closely related to our method. Robust visual analysis has gained considerable attention in recent years [1], [19], [20], [18], and many efficient methods have emerged, such as RPCA [1], SSC [5], LRR [6] and FRR [10]. Among them, many contributions on low-rank based methods have been devoted to this very topic [21], [22], [23], [24]. Generally, low-rank methods are closely related to low-dimensional representation methods, such as matrix …

The proposed method

In this section, we introduce the proposed Sparse Fixed-Rank Representation method. First, we describe SFRR in the noiseless case. Then we consider SFRR when the data points are corrupted by small noise and gross outliers. To optimize the objective function, we present a generalized alternating direction method.
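
For intuition only, the skeleton below sketches a plain ADM loop of the kind such formulations use (our simplification, not the paper's generalized ADM: we keep a single $\ell_1$ error term and fold the sparsity and nonnegativity constraints into proximal steps; all names are ours):

```python
# Generic ADM sketch for min ||Z||_F^2 + lam*||Z||_1 + lam*||E||_1
# s.t. X = A Z + E, Z >= 0 (illustrative; the paper's generalized ADM
# carries extra variables for the fixed-rank factorization).
import numpy as np

def soft_threshold(M, tau):
    """Entry-wise shrinkage: proximal operator of tau * ||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def adm_sketch(X, A, lam=0.1, mu=1.0, n_iter=200):
    n = A.shape[1]
    Z = np.zeros((n, X.shape[1]))
    E = np.zeros_like(X)
    Y = np.zeros_like(X)                      # Lagrange multiplier
    AtA = A.T @ A
    for _ in range(n_iter):
        # Z-step: ridge-type subproblem (Frobenius term + quadratic
        # penalty), then shrink and project onto the nonnegative orthant.
        rhs = A.T @ (X - E + Y / mu)
        Z = np.linalg.solve(2.0 / mu * np.eye(n) + AtA, rhs)
        Z = np.maximum(soft_threshold(Z, lam / mu), 0.0)
        # E-step: entry-wise l1 shrinkage on the residual.
        E = soft_threshold(X - A @ Z + Y / mu, lam / mu)
        # Multiplier update for the constraint X = A Z + E.
        Y = Y + mu * (X - A @ Z - E)
    return Z, E
```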

Experiments

In this section, extensive experiments on both synthetic and real-world data are conducted to evaluate the performance of SFRR against several state-of-the-art methods. Concretely, we demonstrate its advantages through subspace segmentation and outlier detection.
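
For context, segmentation results in this line of work are usually obtained by converting the learnt representation into an affinity matrix and running spectral clustering; a minimal sketch of that standard post-processing step (our illustration, using scikit-learn for the clustering stage) is:

```python
# Standard post-processing used by LRR-style methods (illustrative):
# symmetrize |Z| into an affinity matrix and apply spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_from_representation(Z, n_subspaces):
    affinity = np.abs(Z) + np.abs(Z).T        # symmetric, nonnegative affinity
    sc = SpectralClustering(n_clusters=n_subspaces, affinity="precomputed")
    return sc.fit_predict(affinity)           # subspace label for each point
```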

Conclusion

In this work, we present a novel method called Sparse Fixed-Rank Representation (SFRR) for robust visual analysis. The basic idea is to enforce both a sparsity constraint and a low-rank constraint on the new data representation. This is strongly inspired by the fact that each data point can be represented as a linear combination of only a few atoms in a given dictionary, and that a data matrix can be represented by only a few of its columns in matrix completion and low-rank approximation. Since …

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 91120302, 61222207, 61173185 and 61173186, the National Basic Research Program of China (973 Program) under Grant 2013CB336500, and the Fundamental Research Funds for the Central Universities under Grant 2012FZA5017.

References (55)

  • G. Liu, Z. Lin, Y. Yu, Robust subspace segmentation by low-rank representation, in: Proceedings of the 27th...
  • D. Achlioptas, F. McSherry, Fast computation of low rank matrix approximations, in: ACM Symposium on Theory of...
  • E. Candes et al., Matrix completion with noise, Proc. IEEE (2010).
  • Z. Zhang et al., Low-rank approximations with sparse factors I: basic algorithms and error analysis, SIAM J. Matrix Anal. Appl. (1999).
  • R. Liu, Z. Lin, F. Torre, Z. Su, Fixed-rank representation for unsupervised visual learning, in: Proceedings of the...
  • J. Haldar et al., Rank-constrained solutions to linear matrix equations using PowerFactorization, IEEE Signal Process. Lett. (2009).
  • R. Vidal et al., Multiframe motion segmentation with missing data using PowerFactorization and GPCA, Int. J. Comput. Vis. (2008).
  • J. Mairal et al., Online learning for matrix factorization and sparse coding, J. Mach. Learn. Res. (2010).
  • D. Lee et al., Learning the parts of objects by non-negative matrix factorization, Nature (1999).
  • L. Zhuang, H. Gao, Z. Lin, Y. Ma, X. Zhang, N. Yu, Non-negative low rank and sparse graph for semi-supervised learning,...
  • X. Ren et al., Linearized alternating direction method with adaptive penalty and warm starts for fast solving transform invariant low-rank textures, Int. J. Comput. Vis. (2013).
  • Y. Shen et al., Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization, Optim. Methods Softw. (2014).
  • M. Tao et al., Recovering low-rank and sparse components of matrices from incomplete and noisy observations, SIAM J. Optim. (2011).
  • P. Favaro, R. Vidal, A. Ravichandran, A closed form solution to robust subspace estimation and clustering, in:...
  • G. Liu et al., Exact subspace segmentation and outlier detection by low-rank representation, J. Mach. Learn. Res.—Proc. Track (2012).
  • L. Ma, C. Wang, B. Xiao, W. Zhou, Sparse representation for face recognition based on discriminative low-rank...
  • E. Richard, P. Savalle, N. Vayatis, Estimation of simultaneously sparse and low rank matrices, in: Proceedings of the...
Cited by (7)

    • Subspace segmentation via self-regularized latent K-means

      2019, Expert Systems with Applications
      Citation Excerpt:

      Du et al. combined the graph regularization skill and the matrix factorization technique to present a graph-regularized compact low rank representation (GCLRR) (Du & Ma, 2017). Through comparative studies on the above LRR-related algorithms, it can be found that matrix factorization-based methods usually achieve better results with high efficiency when data samples are insufficient or grossly corrupted (Du & Ma, 2017; Li et al., 2015; Liu et al., 2012; Wei et al., 2017). Hence, these methods have attracted much of our attention.

    • Self-regularized fixed-rank representation for subspace segmentation

      2017, Information Sciences
      Citation Excerpt:

      As reported in [17], FRR achieved much better results in subspace segmentation experiments than LRR. Inspired by FRR, Li et al. designed a sparse FRR method (SFRR) [14], which has also been shown to be superior to LRR-related algorithms. In this paper, we propose a new FRR-based subspace segmentation algorithm called self-regularized FRR (SRFRR).

    • Integrating feature and graph learning with low-rank representation

      2017, Neurocomputing
      Citation Excerpt:

      Both LRR and SSC seek a linear representation of the data; however, they require different structures of the low-dimensional representation: LRR requires it to be low-rank while SSC requires it to be sparse. Recently, some new methods [13,15] marry the advantages of LRR and SSC and impose a simultaneously low-rank and sparse structure on the representation coefficients. Learning low-rank and sparse models has been well studied [16–18].

    • Towards robust subspace recovery via sparsity-constrained latent low-rank representation

      2016, Journal of Visual Communication and Image Representation
      Citation Excerpt:

      To address this shortcoming, Latent LRR (LLRR) constructs the dictionary using both observed and hidden data, which are sampled from the same collection of low-rank subspaces [13]. Recent studies in sparse learning have shown that each data point can be represented by a linear combination of only a few bases in the dictionary, which is often observed in real-world applications [5,26,36,37]. However, the aforementioned low-rank based methods do not respect this key fact.
