
Semi-supervised classification based on subspace sparse representation

  • Regular Paper
  • Published:
Knowledge and Information Systems

Abstract

The graph plays a central role in graph-based semi-supervised classification. However, because high-dimensional data contain noisy and redundant features, constructing a well-structured graph on high-dimensional samples is nontrivial. In this paper, we take advantage of sparse representation in random subspaces for graph construction and propose a method called Semi-Supervised Classification based on Subspace Sparse Representation (SSC-SSR for short). SSC-SSR first generates several random subspaces from the original feature space and then seeks sparse representation coefficients in each subspace. Next, it trains semi-supervised linear classifiers on graphs constructed from these coefficients. Finally, it combines these classifiers into an ensemble classifier by solving a linear regression problem. Unlike traditional graph-based semi-supervised classification methods, whose graphs are specified in advance, the graphs of SSC-SSR are data-driven. An empirical study on face image classification tasks demonstrates that SSC-SSR not only achieves superior recognition performance relative to competitive methods, but is also effective over wide ranges of input parameters.
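The pipeline described in the abstract (random subspaces, per-subspace sparse coefficients, graph-based semi-supervised classifiers, ensemble combination) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: ISTA stands in for the paper's l1 solver, closed-form label propagation (in the style of Zhou et al.) stands in for the semi-supervised linear classifiers, and plain averaging replaces the learned linear-regression combination; the names `sparse_codes`, `propagate`, and `ssc_ssr` are illustrative assumptions.

```python
import numpy as np

def sparse_codes(X, lam=0.1, n_iter=100):
    """Represent each sample as a sparse combination of the other samples via ISTA."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=0).T          # dictionary: the other samples as columns
        x = X[i]
        L = np.linalg.norm(D, 2) ** 2 + 1e-12  # Lipschitz constant of the smooth part
        s = np.zeros(n - 1)
        for _ in range(n_iter):
            s = s - D.T @ (D @ s - x) / L                        # gradient step
            s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0)  # soft threshold (l1 prox)
        S[i, np.arange(n) != i] = s
    return S

def propagate(W, Y, alpha=0.9):
    """Closed-form label propagation on a graph (local-and-global consistency style)."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    Dn = np.diag(1.0 / np.sqrt(d))
    Sn = Dn @ W @ Dn                            # symmetrically normalized adjacency
    return np.linalg.solve(np.eye(len(W)) - alpha * Sn, Y)

def ssc_ssr(X, Y, n_subspaces=5, sub_dim=None, seed=0):
    """Average label-propagation outputs over graphs built in random feature subspaces."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    sub_dim = sub_dim or max(2, d // 2)
    preds = []
    for _ in range(n_subspaces):
        idx = rng.choice(d, size=sub_dim, replace=False)  # random subspace of features
        S = sparse_codes(X[:, idx])
        W = (np.abs(S) + np.abs(S).T) / 2.0               # coefficients -> symmetric graph
        preds.append(propagate(W, Y))
    return np.mean(preds, axis=0)                          # averaging, not learned weights
```

Given a label matrix `Y` with one-hot rows for labeled samples and zero rows for unlabeled ones, `ssc_ssr(X, Y).argmax(axis=1)` yields class predictions for every sample.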




Acknowledgments

The authors are grateful for discussions with Dr. Jieping Ye and appreciate the valuable comments from the anonymous reviewers and editors. This work is supported by the Natural Science Foundation of China (Nos. 61003174, 61101234 and 61372138), the Natural Science Foundation of Guangdong Province (No. S2012010009961), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20110172120027), the Cooperation Project in Industry, Education and Academy of Guangdong Province and Ministry of Education of China (No. 2011B090400032), the Fundamental Research Funds for the Central Universities (Nos. 2012ZZ0064, XDJK2010B002, XDJK2013C123), the Doctoral Fund of Southwest University (Nos. SWU110063 and SWU113034), the Open Project from the Key Laboratory of Electronic Commerce Market Application Technology (No. 2011GDECOF01) and the China Scholarship Council (CSC).


Corresponding author

Correspondence to Guoxian Yu.


About this article

Cite this article

Yu, G., Zhang, G., Zhang, Z. et al. Semi-supervised classification based on subspace sparse representation. Knowl Inf Syst 43, 81–101 (2015). https://doi.org/10.1007/s10115-013-0702-2
