
Subspace embedding for classification

  • Original Article
  • Neural Computing and Applications

Abstract

Subspace embedding is a popular technique for discovering a mapping space in which samples can be represented more appropriately. In recent years, graphs have received increasing attention in subspace embedding, and most related graph-based algorithms construct the connecting graph directly in the original space. However, high-dimensional data often contain redundant information, which makes it hard to guarantee the quality of such a graph. In this paper, we propose a novel discriminative subspace embedding (DSE) algorithm for classification. DSE is a supervised subspace learning method in which an intra-class graph and an inter-class graph characterize the relationships among samples from the same class and from different classes, respectively. DSE assumes that the embeddings of samples from the same class should be similar, while samples from different classes should receive distinct embeddings; based on this assumption, the two graphs are constructed in the mapping space rather than the original one. To further improve the quality of the projections, DSE also takes the reconstruction of the original data into account. Finally, several datasets are used to evaluate DSE. Experimental results show that DSE learns effective representations and achieves more competitive classification performance than related algorithms.
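The abstract's core idea — pull same-class samples together via an intra-class graph while pushing different classes apart via an inter-class graph — can be sketched in a generic supervised graph-embedding form. The sketch below is an illustration under simplifying assumptions, not the paper's DSE objective: the graphs here are fully connected per class and built in the original space (DSE constructs them in the mapping space and additionally uses a reconstruction term), and the function name `graph_embedding` is our own.

```python
import numpy as np

def graph_embedding(X, y, dim=2):
    """Illustrative supervised graph-embedding sketch (a simplification,
    not the paper's exact DSE formulation).

    Builds an intra-class graph W_intra (edges between same-class pairs)
    and an inter-class graph W_inter (edges between different-class pairs),
    then finds projections that minimize intra-class scatter while
    maximizing inter-class scatter via a generalized eigenproblem.

    X: (n_samples, n_features) data matrix; y: (n_samples,) integer labels.
    Returns a (n_features, dim) projection matrix P.
    """
    same = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)            # no self-loops
    W_intra = same                         # 1 for same-class pairs
    W_inter = 1.0 - same
    np.fill_diagonal(W_inter, 0.0)

    # Graph Laplacians: L = D - W, with D the diagonal degree matrix
    L_intra = np.diag(W_intra.sum(axis=1)) - W_intra
    L_inter = np.diag(W_inter.sum(axis=1)) - W_inter

    # Scatter-like matrices in feature space
    S_intra = X.T @ L_intra @ X            # to be minimized
    S_inter = X.T @ L_inter @ X            # to be maximized

    # Maximize trace(P^T S_inter P) subject to small intra-class scatter:
    # solve S_inter p = lambda * (S_intra + eps*I) p, keep top eigenvectors
    eps = 1e-6 * np.trace(S_intra) / X.shape[1]
    M = np.linalg.solve(S_intra + eps * np.eye(X.shape[1]), S_inter)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)
    return vecs[:, order[:dim]].real
```

After learning `P`, samples are projected as `X @ P` and a simple classifier (e.g., 1-NN) can be applied in the embedded space, mirroring the evaluation setup common to such subspace methods.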


Notes

  1. https://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php.

  2. https://www.cs.columbia.edu/CAVE/software/softlib/coil-100.php.

  3. https://www.kaggle.com/farahaniali/facepix.

  4. http://www.anefian.com/research/face_reco.htm.

  5. http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html.

  6. https://cs.nyu.edu/~roweis/data.html.

  7. https://github.com/zalandoresearch/fashion-mnist.


Acknowledgements

The authors are grateful for the financial support from the National Program on Key Research Project of China (Grant number: 2019YFE0103900), European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No 861917-SAFFI and the National Natural Science Foundation of China (Grant number: 32071481).

Author information


Corresponding author

Correspondence to Wei Jin.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.



Cite this article

Liu, Z., Jin, W. & Mu, Y. Subspace embedding for classification. Neural Comput & Applic 34, 18407–18420 (2022). https://doi.org/10.1007/s00521-022-07409-9
