Robust sparsity-preserved learning with application to image visualization

  • Regular Paper
  • Knowledge and Information Systems

Abstract

Linear subspace learning is of great importance for visualizing high-dimensional observations. Sparsity-preserved learning (SPL) is a recently developed technique for linear subspace learning. Its objective function is formulated using the \(\ell _2\)-norm, which makes the obtained projection vectors prone to distortion by outliers. In this paper, we develop a new SPL algorithm, called SPL-L1, based on the \(\ell _1\)-norm instead of the \(\ell _2\)-norm. The proposed approach seeks projection vectors by minimizing a reconstruction error subject to a constraint on sample dispersion, both defined using the \(\ell _1\)-norm. As a robust alternative, SPL-L1 works well in the presence of atypical samples. We design an iterative algorithm, under the framework of bound optimization, to solve for the projection vectors of SPL-L1. Experiments on image visualization demonstrate the superiority of the proposed method.
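The abstract does not spell out the update rules, so the following is only a minimal sketch of how a bound-optimization (majorize-minimize) scheme can handle the two \(\ell _1\) terms; it is not the authors' exact algorithm. It assumes the usual SPL setup in which a sparse coefficient matrix S (the \(\ell _1\)-graph; see the note below) has been precomputed, and it uses the standard quadratic surrogate \(|u| \le u^2/(2|u_t|) + |u_t|/2\) at the current iterate \(u_t\), which turns each surrogate step into a generalized eigenproblem. The function name spl_l1_direction and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def spl_l1_direction(X, S, n_iter=50, eps=1e-8, seed=0):
    """Hypothetical IRLS-style bound-optimization loop for one projection
    vector: minimize an l1 reconstruction error subject to an l1
    dispersion constraint (an illustration, not the paper's exact rule).

    X : (d, n) data matrix, one sample per column.
    S : (n, n) sparse reconstruction coefficients (an l1-graph),
        assumed precomputed; column i reconstructs x_i from the rest.
    """
    E = X - X @ S                          # residuals e_i = x_i - X s_i
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        # Majorize each |u| by u**2 / (2|u_t|) + |u_t| / 2, turning both
        # l1 terms into weighted quadratic forms in w.
        d = 1.0 / (2.0 * np.maximum(np.abs(w @ E), eps))
        g = 1.0 / (2.0 * np.maximum(np.abs(w @ X), eps))
        A = (E * d) @ E.T                  # surrogate reconstruction error
        B = (X * g) @ X.T                  # surrogate dispersion constraint
        # Minimizing w^T A w subject to w^T B w = const is solved by the
        # generalized eigenvector of (A, B) with the smallest eigenvalue.
        _, vecs = eigh(A, B + eps * np.eye(B.shape[0]))
        w = vecs[:, 0]
    return w / np.linalg.norm(w)
```

Each iteration re-weights samples by the inverse of their current absolute projected residuals, so outlying samples with large residuals receive small weights; this down-weighting is what gives \(\ell _1\)-based criteria their robustness relative to \(\ell _2\)-based ones.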

Notes

  1. The \(\ell _1\)-norm has already been used once, in the construction of the \(\ell _1\)-graph; a sketch of that construction is given below.
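For concreteness, here is a hypothetical sketch of the \(\ell _1\)-graph construction: each sample is sparsely coded over the remaining samples. The original \(\ell _1\)-graph is defined via an equality-constrained \(\ell _1\) minimization; the Lasso form below is a common penalized relaxation, and the penalty weight alpha is an illustrative choice, not a value from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.05):
    """Hypothetical l1-graph construction: sparsely code each sample
    over all the others with an l1-penalized least-squares (Lasso) fit.
    Column i of S holds the coefficients that reconstruct x_i, with
    S[i, i] = 0 by construction.

    X : (d, n) data matrix, one sample per column.
    """
    d, n = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]   # exclude x_i itself
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(X[:, idx], X[:, i])           # x_i ~ X_{-i} s
        S[idx, i] = model.coef_
    return S
```

The resulting S is exactly the input assumed by the bound-optimization sketch above.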

Acknowledgments

The authors would like to thank the anonymous referees for their constructive comments, which helped improve the paper.

Author information

Corresponding author

Correspondence to Haixian Wang.

Additional information

This work was supported in part by the National Basic Research Program of China under Grant 2011CB302202, in part by the National Natural Science Foundation of China under Grant 61075009, in part by the Natural Science Foundation of Jiangsu Province under Grant BK2011595, in part by the Program for New Century Excellent Talents in University of China, and in part by the Qing Lan Project of Jiangsu Province.

About this article

Cite this article

Wang, H., Zheng, W. Robust sparsity-preserved learning with application to image visualization. Knowl Inf Syst 39, 287–304 (2014). https://doi.org/10.1007/s10115-012-0605-7
