Double feature selection algorithm based on low-rank sparse non-negative matrix factorization

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Recently, many feature selection algorithms based on non-negative matrix factorization (NMF) have been proposed. However, most of them consider only one side of the data's geometric structure, either global or local. To this end, this paper proposes a new feature selection algorithm, the double feature selection algorithm based on low-rank sparse non-negative matrix factorization (NMF-LRSR). First, NMF-LRSR adopts non-negative matrix factorization as its framework, treating feature selection as the dimension-reduction problem it inherently is, so that dimensionality is reduced effectively. Second, a low-rank sparse self-representation is used to construct the graph, so that both the global and the intrinsic (local) geometric structure of the data are taken into account during feature selection; this makes full use of the available information and yields a more accurate selection. In addition, double feature selection is applied, which further improves the accuracy of the result. NMF-LRSR is compared with the baseline and six other algorithms from the literature on 11 publicly available benchmark datasets. Experimental results show that NMF-LRSR is more effective than the six competing feature selection algorithms.
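For intuition about the two ingredients named above: the low-rank sparse self-representation commonly used for graph construction solves, in its standard form, min_Z ‖Z‖_* + λ‖Z‖_1 subject to X = XZ, where ‖·‖_* is the nuclear norm (encouraging low rank, i.e. global structure) and ‖·‖_1 the ℓ1 norm (encouraging sparsity, i.e. local structure); the learned coefficient matrix Z then serves as the affinity graph. The Python sketch below illustrates only the NMF side of such a pipeline, scoring features by the column norms of the learned basis. It is a minimal, hypothetical illustration built on scikit-learn's NMF, not the authors' NMF-LRSR: the low-rank sparse graph term and the double-selection step are omitted.

```python
# Minimal sketch of NMF-based feature scoring (NOT the authors' NMF-LRSR;
# the low-rank sparse graph regularizer and the double-selection step
# described in the paper are omitted here).
import numpy as np
from sklearn.decomposition import NMF

def nmf_feature_scores(X, n_components=10, random_state=0):
    """Rank features via a plain NMF, X ~= W @ H.

    H has shape (n_components, n_features); features whose columns of H
    carry large weight contribute most to the latent parts, so each
    feature is scored by the l2 norm of its column of H.
    """
    model = NMF(n_components=n_components, init="nndsvda",
                max_iter=500, random_state=random_state)
    model.fit(X)                       # requires X >= 0 elementwise
    H = model.components_              # shape (n_components, n_features)
    return np.linalg.norm(H, axis=0)   # one non-negative score per feature

# Usage: keep the k highest-scoring features of a toy non-negative matrix.
rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((100, 50)))
top_k = np.argsort(nmf_feature_scores(X, n_components=5))[::-1][:10]
X_selected = X[:, top_k]
```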




Acknowledgements

This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61773304, 61836009, 61871306, 61772399 and U1701267, the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) under Grant No. B07048, the Key Laboratory Fund under Grant No. 61421010402, and the Program for Cheung Kong Scholars and Innovative Research Team in University under Grant No. IRT1170.

Author information


Corresponding author

Correspondence to Ronghua Shang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Shang, R., Song, J., Jiao, L. et al. Double feature selection algorithm based on low-rank sparse non-negative matrix factorization. Int. J. Mach. Learn. & Cyber. 11, 1891–1908 (2020). https://doi.org/10.1007/s13042-020-01079-6

