
Robust subspace learning-based low-rank representation for manifold clustering

Original Article
Neural Computing and Applications

Abstract

Spectral clustering-based subspace clustering methods have attracted broad interest in recent years. These methods usually use self-representation in the original space to extract the affinity between data points. However, one can often find a subspace in which the affinity of the projected data points can be extracted by self-representation more effectively. Moreover, self-representation in the original space alone cannot handle nonlinear manifold clustering well. In this paper, we present robust subspace learning-based low-rank representation, which learns a subspace that favors affinity extraction for the low-rank representation. The subspace is learned and the representation is computed simultaneously, so the two steps can benefit from each other. By extending the linear projection to a nonlinear mapping, our method can handle the manifold clustering problem, which can be viewed as a general case of subspace clustering. In addition, the \(\ell _{2,1}\)-norm used in our model increases the robustness of our method. Extensive experimental results demonstrate the effectiveness of our method on manifold clustering.
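To make the overall pipeline concrete, the following is a minimal sketch of the generic self-representation-plus-spectral-clustering recipe the abstract refers to. It is not the authors' algorithm: the least-squares self-representation, the function name self_representation_affinity, the parameter lam, and the toy data are illustrative assumptions standing in for the paper's low-rank, \(\ell _{2,1}\)-regularized model.

import numpy as np
from sklearn.cluster import SpectralClustering

def self_representation_affinity(X, lam=1e-2):
    # Least-squares self-representation: min_Z ||X - X Z||_F^2 + lam ||Z||_F^2.
    # A simple surrogate (assumption), not the paper's low-rank model.
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)  # closed-form minimizer
    np.fill_diagonal(Z, 0.0)                     # drop trivial self-loops
    return 0.5 * (np.abs(Z) + np.abs(Z).T)       # symmetric affinity matrix

# Toy usage: points drawn from two one-dimensional subspaces of R^3.
rng = np.random.default_rng(0)
B1, B2 = rng.standard_normal((3, 1)), rng.standard_normal((3, 1))
X = np.hstack([B1 @ rng.standard_normal((1, 30)),
               B2 @ rng.standard_normal((1, 30))])
X += 0.01 * rng.standard_normal(X.shape)
W = self_representation_affinity(X)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)

The affinity matrix W plays the role of the representation-based affinity fed to spectral clustering; the paper replaces the simple regularizer above with a learned subspace, a nuclear-norm term, and an \(\ell _{2,1}\) fitting term.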


Notes

  1. http://sipi.usc.edu/database/database.cgi?volume=textures.


Acknowledgements

The work of K. Tang was supported by the National Natural Science Foundation of China (No. 61702243) and the Educational Commission of Liaoning Province, China (No. L201683662). The work of Z. Su was supported by the High-tech Ship Research Program Support Project and the National Natural Science Foundation of China (No. 61572099). The work of W. Jiang was supported by the National Natural Science Foundation of China (No. 61771229). The work of J. Zhang was supported by the National Natural Science Foundation of China (No. 61702245) and the Educational Commission of Liaoning Province, China (No. L201683663). The work of X. Sun was supported by the National Natural Science Foundation of China (No. 61561016). The work of X. Luo was supported by the National Natural Science Foundation of China (Nos. 61320106008 and 61772149).

Author information

Corresponding author

Correspondence to Jie Zhang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this work.

Appendix

Proof

(Proof of Theorem 1) Because \(\mathbf {U}_{k+1}\) in Eq. (21) is the optimal solution of problem (20), we have

$$\begin{aligned}&\frac{\alpha }{2} tr(\mathbf {U}_{k+1}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k})\mathbf {F}_{k} (\mathbf {K}^{T}-\mathbf {Z}_{k}^{T}\mathbf {K}^{T})\mathbf {U}^{T}_{k+1})+\beta tr(\mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}^{T}_{k+1}) \nonumber \\&\quad \le \frac{\alpha }{2} tr(\mathbf {U}_{k}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k})\mathbf {F}_{k} (\mathbf {K}^{T}-\mathbf {Z}_{k}^{T}\mathbf {K}^{T})\mathbf {U}^{T}_{k})+\beta tr(\mathbf {U}_{k}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}^{T}_{k})\nonumber \\&\quad =\frac{\alpha }{2} \Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+\beta tr(\mathbf {U}_{k}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}^{T}_{k}). \end{aligned}$$
(27)
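Here \(\mathbf {F}_{k}\) is the diagonal reweighting matrix defined in the main text; the form assumed in this proof is the standard \(\ell _{2,1}\) reweighting,

$$\begin{aligned} [\mathbf {F}_{k}]_{jj}=\frac{1}{\Vert [\mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}},\quad j=1,\ldots ,n, \end{aligned}$$

so that \(tr(\mathbf {U}_{k}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k})\mathbf {F}_{k}(\mathbf {K}^{T}-\mathbf {Z}_{k}^{T}\mathbf {K}^{T})\mathbf {U}_{k}^{T})=\sum _{j}\Vert [\mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}=\Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}\), which gives the last equality in (27).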

For any \(a>0\) and \(b>0\), we have

$$\begin{aligned} \frac{b}{2}\ge a-\frac{a^{2}}{2b}. \end{aligned}$$
(28)
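For completeness, inequality (28) follows from the elementary fact \((a-b)^{2}\ge 0\):

$$\begin{aligned} (a-b)^{2}\ge 0 \;\Rightarrow \; a^{2}+b^{2}\ge 2ab \;\Rightarrow \; \frac{b}{2}\ge a-\frac{a^{2}}{2b}\quad \text {(dividing by } 2b>0\text {)}. \end{aligned}$$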

Hence, applying (28) with \(a=\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}\) and \(b=\Vert [\mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}\), we have, for each \(j=1,\ldots ,n\),

$$\begin{aligned} \begin{array}{c} \frac{\Vert [\mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}}{2} \ge \Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}-\frac{\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}^{2}}{2\Vert [\mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}}. \end{array} \end{aligned}$$
(29)

Summing over all \(j\), we obtain

$$\begin{aligned} \begin{array}{c} \frac{1}{2}\Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}\ge \Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}\\ -\frac{1}{2}tr(\mathbf {U}_{k+1}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k})\mathbf {F}_{k} (\mathbf {K}^{T}-\mathbf {Z}_{k}^{T}\mathbf {K}^{T})\mathbf {U}^{T}_{k+1}). \end{array} \end{aligned}$$
(30)

Combining inequalities (30) and (27), we obtain

$$\begin{aligned} L_{k,1}\le & {} \Vert \mathbf {J}_{k}\Vert _{*} +\beta tr\left( \mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T}\right) +\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k}-\mathbf {J}_{k}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k} \rangle \nonumber \\&+\alpha \left( \frac{1}{2}\Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+\frac{1}{2}tr(\mathbf {U}_{k+1}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k})\mathbf {F}_{k} (\mathbf {K}^{T}-\mathbf {Z}_{k}^{T}\mathbf {K}^{T})\mathbf {U}^{T}_{k+1})\right) \nonumber \\\le & {} \Vert \mathbf {J}_{k}\Vert _{*}+\beta tr(\mathbf {U}_{k}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k}^{T})+\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k}-\mathbf {J}_{k}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k} \rangle \nonumber \\&+\frac{\alpha }{2} \Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+ \frac{\alpha }{2} \Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1} =L_{k,0}. \end{aligned}$$
(31)
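The quantities \(L_{k,0}\) and \(L_{k,1}\) denote the values of the augmented Lagrangian before and after the \(\mathbf {U}\)-update; their exact definitions accompany Theorem 1 in the main text, and the form consistent with (31) is

$$\begin{aligned} L_{k,0}&=\Vert \mathbf {J}_{k}\Vert _{*}+\alpha \Vert \mathbf {U}_{k}\mathbf {K}-\mathbf {U}_{k}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+\beta tr(\mathbf {U}_{k}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k}^{T})+\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k}-\mathbf {J}_{k}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k} \rangle ,\\ L_{k,1}&=\Vert \mathbf {J}_{k}\Vert _{*}+\alpha \Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+\beta tr(\mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T})+\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k}-\mathbf {J}_{k}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k} \rangle , \end{aligned}$$

so (31) states that \(L_{k,1}\le L_{k,0}\), i.e., the \(\mathbf {U}\)-update does not increase the augmented Lagrangian.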

\(\square \)

Proof

(Proof of Theorem 2) Because \(\mathbf {Z}_{k+1}\) in Eq. (23) is the optimal solution of problem (22), we have

$$\begin{aligned}&\frac{\alpha }{2} tr((\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k+1})\mathbf {G}_{k} (\mathbf {K}^{T}\mathbf {U}_{k+1}^{T}-\mathbf {Z}_{k+1}^{T}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T}))\nonumber \\&\qquad +\frac{\mu _{k}}{2}\Vert \mathbf {J}_{k+1}-\mathbf {Z}_{k+1}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k+1}-\mathbf {J}_{k+1} \rangle \nonumber \\&\quad \le \frac{\alpha }{2} tr((\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k})\mathbf {G}_{k} (\mathbf {K}^{T}\mathbf {U}_{k+1}^{T}-\mathbf {Z}_{k}^{T}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T}))\nonumber \\&\qquad +\frac{\mu _{k}}{2}\Vert \mathbf {J}_{k+1}-\mathbf {Z}_{k}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k+1} \rangle \nonumber \\&\quad =\frac{\alpha }{2}\Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1} +\frac{\mu _{k}}{2}\Vert \mathbf {J}_{k+1}-\mathbf {Z}_{k}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k+1} \rangle . \end{aligned}$$
(32)
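Analogously to \(\mathbf {F}_{k}\) in the proof of Theorem 1, \(\mathbf {G}_{k}\) is the diagonal reweighting matrix whose assumed form is

$$\begin{aligned} [\mathbf {G}_{k}]_{jj}=\frac{1}{\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}},\quad j=1,\ldots ,n, \end{aligned}$$

which yields the equality between the trace term and the \(\ell _{2,1}\)-norm in the last line of (32).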

For any \(a>0\) and \(b>0\), we have

$$\begin{aligned} \frac{b}{2}\ge a-\frac{a^{2}}{2b}. \end{aligned}$$
(33)

Hence, applying (33) with \(a=\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k+1}]_{:j}\Vert _{2}\) and \(b=\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}\), we have, for each \(j=1,\ldots ,n\),

$$\begin{aligned} \begin{array}{c} \frac{\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}}{2} \ge \Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k+1}]_{:j}\Vert _{2}-\frac{\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k+1}]_{:j}\Vert _{2}^{2}}{2\Vert [\mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}]_{:j}\Vert _{2}}. \end{array} \end{aligned}$$
(34)

Summing over all \(j\), we obtain

$$\begin{aligned} \begin{array}{c} \frac{1}{2}\Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}\ge \Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k+1}\Vert _{2,1} \\ -\frac{1}{2}tr(\mathbf {U}_{k+1}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k+1})\mathbf {G}_{k} (\mathbf {K}^{T}-\mathbf {Z}^{T}_{k+1}\mathbf {K}^{T})\mathbf {U}_{k+1}^{T}). \end{array} \end{aligned}$$
(35)

Combining inequalities (35) and (32), we obtain

$$\begin{aligned} L_{k,3}\le & {} \Vert \mathbf {J}_{k+1}\Vert _{*}+ \beta tr(\mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T}) +\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k+1}-\mathbf {J}_{k+1}\Vert _{F}^{2}\nonumber \\&+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k+1}-\mathbf {J}_{k+1} \rangle +\alpha \left(\frac{1}{2}\Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1} \right.\nonumber \\&\left.+\frac{1}{2}tr(\mathbf {U}_{k+1}(\mathbf {K}-\mathbf {K}\mathbf {Z}_{k+1})\mathbf {G}_{k} (\mathbf {K}^{T}-\mathbf {Z}^{T}_{k+1}\mathbf {K}^{T})\mathbf {U}_{k+1}^{T})\right) \le \Vert \mathbf {J}_{k+1}\Vert _{*}\nonumber \\&+\beta tr(\mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T})+\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k}-\mathbf {J}_{k+1}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k+1} \rangle \nonumber \\&+\frac{\alpha }{2}\Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+\frac{\alpha }{2}\Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1} = L_{k,2}. \end{aligned}$$
(36)

\(\square \)

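As with Theorem 1, \(L_{k,2}\) and \(L_{k,3}\) denote the augmented Lagrangian values before and after the \(\mathbf {Z}\)-update; the form consistent with (36) is

$$\begin{aligned} L_{k,2}&=\Vert \mathbf {J}_{k+1}\Vert _{*}+\alpha \Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k}\Vert _{2,1}+\beta tr(\mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T})+\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k}-\mathbf {J}_{k+1}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k}-\mathbf {J}_{k+1} \rangle ,\\ L_{k,3}&=\Vert \mathbf {J}_{k+1}\Vert _{*}+\alpha \Vert \mathbf {U}_{k+1}\mathbf {K}-\mathbf {U}_{k+1}\mathbf {K}\mathbf {Z}_{k+1}\Vert _{2,1}+\beta tr(\mathbf {U}_{k+1}\mathbf {K}\mathbf {S}\mathbf {K}^{T}\mathbf {U}_{k+1}^{T})+\frac{\mu _{k}}{2}\Vert \mathbf {Z}_{k+1}-\mathbf {J}_{k+1}\Vert _{F}^{2}+\langle \mathbf {\Lambda }_{k}, \mathbf {Z}_{k+1}-\mathbf {J}_{k+1} \rangle , \end{aligned}$$

so (36) states that \(L_{k,3}\le L_{k,2}\), i.e., the \(\mathbf {Z}\)-update does not increase the augmented Lagrangian.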

Cite this article

Tang, K., Su, Z., Jiang, W. et al. Robust subspace learning-based low-rank representation for manifold clustering. Neural Comput & Applic 31, 7921–7933 (2019). https://doi.org/10.1007/s00521-018-3617-8
