
Sparse Low-Rank and Graph Structure Learning for Supervised Feature Selection

Neural Processing Letters

Abstract

Spectral feature selection (SFS) improves on conventional feature selection methods in many respects by additionally introducing a graph matrix that preserves the subspace structure of the data. However, the graph matrix in classical SFS is usually constructed from the original data, whose redundancy often leads to suboptimal feature selection. To address this, this paper proposes a novel feature selection method that couples graph matrix learning and feature data learning in a unified framework, where the two steps are updated iteratively until a stable solution is reached. We also apply a low-rank constraint to capture the intrinsic structure of the data and improve the robustness of the learning model. In addition, we propose an optimization algorithm that solves the resulting problem with fast convergence. Compared with classical and state-of-the-art feature selection methods, the proposed method achieves competitive results on twelve real data sets.
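The coupled framework described above can be sketched as an alternating optimization: update the feature-weight matrix with the graph fixed, then rebuild the graph from the projected data, and repeat until convergence. The sketch below is an illustrative simplification, not the paper's exact objective: it uses a ridge-style regression with a common l2,1-norm reweighting surrogate for sparsity and a kNN similarity graph, omits the low-rank constraint, and all function and parameter names are hypothetical.

```python
import numpy as np

def alternating_feature_selection(X, Y, alpha=1.0, beta=1.0, k=5, n_iter=20):
    """Hypothetical sketch of coupled graph/feature learning.

    X: (n, d) data matrix; Y: (n, c) label indicator matrix.
    Alternates between (1) updating feature weights W via a
    graph-regularized least squares with an l2,1 reweighting,
    and (2) rebuilding a kNN similarity graph from the projection XW.
    Returns feature indices ranked by the row norms of W.
    """
    n, d = X.shape
    # warm start: plain least-squares solution
    W = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(n_iter):
        Z = X @ W  # current low-dimensional projection
        # rebuild a symmetric kNN graph from the projected data
        D = np.square(Z[:, None, :] - Z[None, :, :]).sum(-1)
        S = np.zeros((n, n))
        for i in range(n):
            idx = np.argsort(D[i])[1:k + 1]  # skip self at position 0
            S[i, idx] = np.exp(-D[i, idx])
        S = (S + S.T) / 2
        L = np.diag(S.sum(1)) - S  # graph Laplacian
        # diagonal reweighting matrix for the l2,1-norm surrogate
        Q = np.diag(1.0 / (2 * np.linalg.norm(W, axis=1) + 1e-8))
        # closed-form update of W with the graph fixed
        W = np.linalg.solve(X.T @ X + alpha * X.T @ L @ X + beta * Q, X.T @ Y)
    scores = np.linalg.norm(W, axis=1)
    return np.argsort(-scores)  # features, most important first
```

Because each subproblem has a closed-form or cheap update, this style of alternating scheme typically converges in a small number of iterations, which is consistent with the fast convergence claimed for the proposed algorithm.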


Notes

  1. http://archive.ics.uci.edu/ml/.

  2. http://featureselection.asu.edu/datasets.php.


Acknowledgements

This work is partially supported by the China Key Research Program (Grant No. 2016YFB1000905), the Key Program of the National Natural Science Foundation of China (Grant No. 61836016), the Natural Science Foundation of China (Grants Nos. 61876046 and 61573270), the Project of Guangxi Science and Technology (GuiKeAD17195062), the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing, the Guangxi High Institutions Program of Introducing 100 High-Level Overseas Talents, and the Research Fund of Guangxi Key Lab of Multisource Information Mining and Security (18-A-01-01).

Author information

Corresponding author

Correspondence to Guoqiu Wen.


About this article

Cite this article

Wen, G., Zhu, Y., Zhan, M. et al. Sparse Low-Rank and Graph Structure Learning for Supervised Feature Selection. Neural Process Lett 52, 1793–1809 (2020). https://doi.org/10.1007/s11063-020-10250-7
