Dimensionality reduction via kernel sparse representation

  • Research Article
  • Published in: Frontiers of Computer Science

Abstract

Dimensionality reduction (DR) methods based on sparse representation, one of the most active research topics of recent years, have achieved remarkable performance in many applications. However, existing sparse representation based methods struggle with nonlinear problems because they seek the sparse representation of data in the original input space. Motivated by the kernel trick, we propose a new framework, empirical kernel sparse representation (EKSR), to handle nonlinear problems. In this framework, nonlinearly separable data are mapped into an empirical kernel space in which nonlinear similarity can be captured; the data in the kernel space are then reconstructed by sparse representation, obtained by minimizing an $\ell_1$-regularization-related objective function, so that the sparse structure is preserved. EKSR provides new insight into dimensionality reduction and yields two models: 1) empirical kernel sparsity preserving projection (EKSPP), a feature extraction method based on sparsity preserving projection (SPP); and 2) empirical kernel sparsity score (EKSS), a feature selection method based on sparsity score (SS). Thanks to the natural discriminative power of sparse representation, both methods choose neighborhoods automatically. Compared with several existing approaches, the proposed framework reduces computational complexity and is more convenient in practice.
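
To make the construction concrete, below is a minimal sketch of the pipeline the abstract describes: an empirical kernel mapping followed by an ℓ1-regularized reconstruction of each sample from all the others. The RBF kernel, the Lasso relaxation of the ℓ1 problem, and all names and parameters (empirical_kernel_map, sparse_weight_matrix, gamma, alpha) are illustrative assumptions, not the paper's implementation.

```python
# A minimal, illustrative sketch of the EKSR pipeline, assuming an RBF kernel
# and a Lasso relaxation of the l1 minimization; it is not the authors' code.
import numpy as np
from sklearn.linear_model import Lasso

def empirical_kernel_map(X, gamma=1.0):
    """Map the rows of X (n x d) into the empirical kernel space of an RBF kernel."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))  # Gram matrix
    vals, vecs = np.linalg.eigh(K)       # K = P Lambda P^T
    keep = vals > 1e-10                  # drop numerically zero directions
    # Row i is (Lambda^{-1/2} P^T k_i)^T, the empirical kernel feature of x_i
    return (K @ vecs[:, keep]) / np.sqrt(vals[keep])

def sparse_weight_matrix(Y, alpha=0.01):
    """Reconstruct each mapped sample from all the others by l1-regularized regression."""
    n = Y.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)  # a sample may not reconstruct itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(Y[idx].T, Y[i])         # dictionary atoms are the other samples
        S[i, idx] = lasso.coef_           # sparse coefficients define the graph
    return S

# Toy usage: build the sparse graph that EKSPP/EKSS would operate on.
X = np.random.randn(50, 5)
Y = empirical_kernel_map(X, gamma=0.5)
S = sparse_weight_matrix(Y, alpha=0.05)
```

From S, EKSPP would seek a projection that preserves these reconstruction relations in the empirical kernel space, in the same spirit as SPP in the input space, while EKSS would score each feature by how well it respects the sparse graph.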



Author information

Corresponding author

Correspondence to Zhisong Pan.

Additional information

Zhisong Pan received his BS in computer science and his MS in computer science and application from PLA Information Engineering University, China, in 1991 and 1994, respectively, and his PhD from the Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, China, in 2003. Since July 2006 he has led several key projects on intelligent data processing for network management. Since July 2011 he has been a full professor at PLA University of Science and Technology, China. His current research interests include pattern recognition, machine learning, and neural networks.

Zhantao Deng received his BS in communication engineering from the University of Electronic Science and Technology of China in 2009 and his MS in computer science from PLA University of Science and Technology in 2012. He is currently an assistant engineer in the Department of Computer Technology at PLA University of Science and Technology, China. His research interests include sparse learning, dimensionality reduction, and time series prediction.

Yibing Wang received his BS and MS from the National University of Defense Technology, China, in 2006 and 2010, respectively. He is now a PhD student at PLA University of Science and Technology, China. His main research interests include feature selection, multiple kernel learning, social media networks, and network security.

Yanyan Zhang received her BS from Xi’an University of Finance and Economics, China, in 2007 and her MS in computer science and application from Nanjing University of Aeronautics and Astronautics, China, in 2010. She is now a teaching assistant at PLA University of Science and Technology, China. Her current research interests include face recognition, hyperspectral image classification, and sparse learning.

About this article

Cite this article

Pan, Z., Deng, Z., Wang, Y. et al. Dimensionality reduction via kernel sparse representation. Front. Comput. Sci. 8, 807–815 (2014). https://doi.org/10.1007/s11704-014-3317-1

