
Soft adaptive loss based Laplacian eigenmaps


Abstract

Laplacian eigenmaps (LE) is one of the most commonly used nonlinear dimensionality reduction methods; it aims to find a low-dimensional representation that preserves the topological relationships between sample points in the original data. However, its ℓ2-norm-based loss function prevents LE from preserving these relationships in many cases. Moreover, the topological relationship alone does not capture the true intrinsic structure of the data: by overemphasizing it, LE can easily break the manifold structure into multiple local regions in the embedding space, which makes spectral clustering of multi-manifold data harder to carry out. To solve this problem, we propose soft adaptive loss based LE (SALE). With the soft adaptive loss, SALE can adaptively emphasize both the topological relationships between sample points and the clustering structure of the data. The model is tested and validated on UCI, face, and gene expression data sets, and compared with several state-of-the-art models. The experimental results show that the method is robust to noise.
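For context, here is a minimal sketch of classical Laplacian eigenmaps, the ℓ2-loss baseline that SALE modifies. It is not the SALE algorithm itself, whose soft adaptive loss is defined in the full paper; the function name laplacian_eigenmaps and the parameters n_neighbors and t (heat-kernel width) are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, t=1.0):
    """Classical Laplacian eigenmaps with an l2 objective.

    Illustrative baseline only -- NOT the SALE method; SALE replaces
    the l2 loss below with the paper's soft adaptive loss.
    """
    # k-nearest-neighbor graph with heat-kernel weights
    # W_ij = exp(-||x_i - x_j||^2 / t) on connected pairs.
    dist = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W = dist.copy()
    W.data = np.exp(-(W.data ** 2) / t)
    W = W.toarray()
    W = np.maximum(W, W.T)  # symmetrize the adjacency matrix

    # Graph Laplacian L = D - W, with D the diagonal degree matrix.
    D = np.diag(W.sum(axis=1))
    L = D - W

    # Minimizing sum_ij W_ij * ||y_i - y_j||^2 subject to Y^T D Y = I
    # reduces to the generalized eigenproblem L y = lambda D y; the
    # embedding uses the eigenvectors of the smallest nonzero
    # eigenvalues (the constant eigenvector at lambda = 0 is dropped).
    _, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]

# Example: embed 300 noisy 3-D points into 2-D.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))
    Y = laplacian_eigenmaps(X, n_components=2)
    print(Y.shape)  # (300, 2)
```

The ℓ2 penalty in this objective is exactly what the abstract identifies as the weak point: it weights all pairwise discrepancies quadratically, which can shatter a multi-manifold structure into separate local regions in the embedding.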





Author information


Correspondence to Yunlong Gao or Shunxiang Wu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chen, B., Gao, Y., Wu, S. et al. Soft adaptive loss based Laplacian eigenmaps. Appl Intell 52, 321–338 (2022). https://doi.org/10.1007/s10489-021-02300-x



