
Adaptive distance penalty based nonnegative low-rank representation for semi-supervised learning

Abstract

Low-rank representation (LRR) aims to uncover the essential structural information of the original data; it captures global structure and is robust to noise. Its main drawback, however, is that it ignores the local similarity of the data. In addition, most semi-supervised learning (SSL) methods infer unknown labels through two-stage learning, in which the first stage constructs a graph and the second stage performs SSL for classification, so the two stages do not share information that could improve classification accuracy. This paper proposes a new semi-supervised classification algorithm termed adaptive distance penalty non-negative low-rank representation (ADP-NNLRR). The proposed method combines low-rank representation, local constraints, and an SSL strategy to make full use of the label information and the local manifold geometry of the data, so that it captures the global subspace structure while better preserving local relationships. Distance penalty terms and non-negative constraints are introduced, and the resulting low-rank coefficient matrix is used as the similarity matrix of the graph, which captures more discriminative information for graph construction. Comparative experiments on several classical datasets and noisy datasets verify the superior performance of the proposed method.
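
To make the pipeline concrete, the following Python sketch illustrates the idea described in the abstract: learn a non-negative, distance-penalized low-rank coefficient matrix Z, then use Z as the similarity matrix of a graph for label propagation. The paper's exact objective and solver are not reproduced on this page, so the specific objective form, the projected proximal-gradient updates, and the parameter names (lam, beta, alpha) below are assumptions introduced for illustration only.

```python
import numpy as np


def adp_nnlrr(X, lam=1.0, beta=0.1, n_iter=200):
    # Hypothetical sketch, NOT the authors' solver. Assumed objective:
    #   min_Z 0.5*||X - X Z||_F^2 + lam*||Z||_* + beta*<D, Z>,  s.t. Z >= 0
    # where D[i, j] = ||x_i - x_j||^2 penalises large coefficients between
    # far-apart samples (the distance-penalty idea from the abstract).
    n = X.shape[1]

    # Pairwise squared distances between columns (samples) of X.
    sq = np.sum(X ** 2, axis=0)
    D = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X.T @ X, 0.0)

    G = X.T @ X                                   # Gram matrix
    step = 1.0 / (np.linalg.norm(G, 2) + 1e-12)   # 1 / Lipschitz constant

    Z = np.zeros((n, n))
    for _ in range(n_iter):
        grad = G @ Z - G + beta * D               # gradient of the smooth part
        A = Z - step * grad
        # Singular value thresholding handles the nuclear-norm prox; clipping
        # to >= 0 afterwards enforces non-negativity (a simple heuristic
        # splitting of the combined prox, kept minimal for illustration).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt
        Z = np.maximum(Z, 0.0)
    return Z


def propagate_labels(Z, Y, alpha=0.9):
    # Use the learned coefficients as graph weights and run standard
    # graph-based label propagation (assumed closed form F = (I - a*S)^{-1} Y).
    W = 0.5 * (Z + Z.T)                           # symmetric similarity matrix
    d = W.sum(axis=1) + 1e-12
    S = W / np.sqrt(np.outer(d, d))               # symmetric normalisation
    F = np.linalg.solve(np.eye(W.shape[0]) - alpha * S, Y)
    return F.argmax(axis=1)


# Toy usage: X is d x n (samples as columns); Y is n x c one-hot labels
# with all-zero rows for unlabelled samples.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 60))
    Y = np.zeros((60, 3))
    Y[:5, 0] = Y[20:25, 1] = Y[40:45, 2] = 1      # a few labelled samples
    Z = adp_nnlrr(X)
    print(propagate_labels(Z, Y)[:10])
```

In this sketch the same matrix Z serves both as the learned low-rank representation and as the SSL graph, which mirrors the abstract's point that graph construction and classification should share information rather than being handled as two disconnected stages.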

Acknowledgements

This work was supported in part by the NSFC (U1504610) and the Natural Science Foundation of Henan Province (202300410148).

Author information

Corresponding author

Correspondence to Yixiu Zhang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Zhang, Y., Chen, J. & Liu, Z. Adaptive distance penalty based nonnegative low-rank representation for semi-supervised learning. Appl Intell 53, 1405–1416 (2023). https://doi.org/10.1007/s10489-022-03632-y
