Maximum Variance Sparse Mapping

  • Conference paper
Advances in Neural Networks – ISNN 2011

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6676)

Abstract

This paper presents a classification-oriented multiple sub-manifold learning method based on sparse representation, named maximum variance sparse mapping. Under the assumption that data with the same label lie on one sub-manifold while each class resides on its own sub-manifold, the proposed algorithm constructs an objective function that projects the original data into a subspace with maximum between-sub-manifold distance and minimum manifold locality. Moreover, instead of setting the weights between pairs of points directly, or obtaining them by solving a least-squares optimization problem, the optimal weights in the new algorithm are obtained by L1 minimization. Experiments on several benchmark databases validate the efficiency of the proposed algorithm.
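
As a concrete illustration of the two steps the abstract describes, the following is a minimal Python sketch: sparse reconstruction weights are estimated per sample by an L1-regularized (lasso) fit, and a projection is then found by trading off between-sub-manifold scatter against the sparse locality term via a generalized eigenproblem. The regularization strength alpha, the use of class means for the between-sub-manifold distance, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sparse_weights(X, alpha=0.01):
    """Per-sample reconstruction weights via L1 minimization (lasso).
    X is d x n with one sample per column; column i of W holds the
    sparse coefficients that reconstruct x_i from the other samples."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # alpha is a hypothetical choice; the paper leaves the L1 solver open.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])   # x_i ~ sum_j w_j x_j, sparse w
        W[others, i] = lasso.coef_
    return W

def mvsm_project(X, y, W, dim):
    """Find a projection maximizing between-sub-manifold scatter while
    minimizing the sparse locality term, as a generalized eigenproblem."""
    d, n = X.shape
    mu = X.mean(axis=1, keepdims=True)
    # Between-sub-manifold scatter from class means (an assumption here).
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[:, y == c]
        diff = Xc.mean(axis=1, keepdims=True) - mu
        S_b += Xc.shape[1] * (diff @ diff.T)
    # Locality scatter: ||X - X W||_F^2 = tr(X (I-W)(I-W)^T X^T).
    I = np.eye(n)
    S_l = X @ (I - W) @ (I - W).T @ X.T
    # Largest generalized eigenvectors of S_b v = lambda S_l v.
    _, vecs = eigh(S_b, S_l + 1e-6 * np.eye(d))
    return vecs[:, ::-1][:, :dim]          # d x dim projection matrix

# Usage on synthetic data: project 20-dimensional samples to 2-D.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 60))
y = np.repeat([0, 1, 2], 20)
P = mvsm_project(X, y, sparse_weights(X), dim=2)
Z = P.T @ X                                # 2-D embedding of the samples
```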

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Li, B., Liu, J., Dong, W. (2011). Maximum Variance Sparse Mapping. In: Liu, D., Zhang, H., Polycarpou, M., Alippi, C., He, H. (eds) Advances in Neural Networks – ISNN 2011. ISNN 2011. Lecture Notes in Computer Science, vol 6676. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21090-7_1

  • DOI: https://doi.org/10.1007/978-3-642-21090-7_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21089-1

  • Online ISBN: 978-3-642-21090-7

  • eBook Packages: Computer Science, Computer Science (R0)
