
A manifold framework of multiple-kernel learning for hyperspectral image classification

  • Original Article
  • Neural Computing and Applications

Abstract

Manifold learning is a promising approach to intelligent data analysis that preserves the local embedding structure of the data in the manifold mapping space. In many applications, however, it is limited in its ability to extract nonlinear features; hyperspectral image classification, for example, requires capturing the nonlinear local relationships between spectral curves. Previous works therefore applied the kernel trick to manifold learning, but the resulting kernel-based manifold learning still suffers from the problem that an inappropriate kernel model degrades performance. To address this kernel-selection problem, we propose a manifold framework of multiple-kernel learning for hyperspectral image classification. In this framework, a quasiconformal mapping-based multiple-kernel model is optimized with an objective that maximizes the class discriminability of the data, so that the learned kernel yields a discriminative structure of the data distribution for classification.
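The abstract describes the model only at a high level. As a rough illustration (not the paper's implementation), the minimal Python sketch below shows one common way to construct a quasiconformal, data-dependent multiple kernel: a weighted sum of base kernels is rescaled by a data-dependent factor q(x), and in a full method the kernel weights and factor coefficients would then be tuned to maximize class discriminability. All function and parameter names (rbf_kernel, poly_kernel, q_factor, anchors, betas, alphas, tau) are illustrative assumptions, not symbols taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): one common way to build a
# quasiconformal, data-dependent multiple kernel of the form
#   K_new(x, y) = q(x) * q(y) * sum_m beta_m * K_m(x, y),
# with q(x) = alpha_0 + sum_i alpha_i * exp(-||x - a_i||^2 / (2 * tau^2)).
# All names below are illustrative assumptions.

def _sq_dists(X, Y):
    # Pairwise squared Euclidean distances between rows of X and rows of Y.
    return (np.sum(X**2, axis=1)[:, None]
            + np.sum(Y**2, axis=1)[None, :]
            - 2.0 * X @ Y.T)

def rbf_kernel(X, Y, gamma=1.0):
    return np.exp(-gamma * _sq_dists(X, Y))

def poly_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def q_factor(X, anchors, alphas, tau=1.0):
    # Quasiconformal factor q(x): a constant term plus a weighted sum of
    # Gaussian bumps centred at the "expansion vectors" (anchors).
    return alphas[0] + np.exp(-_sq_dists(X, anchors) / (2.0 * tau**2)) @ alphas[1:]

def multiple_kernel(X, Y, betas, anchors, alphas, tau=1.0):
    # Weighted combination of base kernels, rescaled by the quasiconformal factor.
    base = betas[0] * rbf_kernel(X, Y) + betas[1] * poly_kernel(X, Y)
    return np.outer(q_factor(X, anchors, alphas, tau),
                    q_factor(Y, anchors, alphas, tau)) * base

# Toy usage on synthetic "spectra": 20 pixels with 50 bands each.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
anchors = X[:5]                          # expansion vectors (e.g. chosen training pixels)
alphas = np.ones(anchors.shape[0] + 1)   # alpha_0 plus one coefficient per anchor
betas = np.array([0.7, 0.3])             # convex weights of the two base kernels
K = multiple_kernel(X, X, betas, anchors, alphas)
print(K.shape)                           # (20, 20)
```

In the paper's framework the kernel weights and quasiconformal coefficients are learned from an objective that maximizes class discriminability; here they are fixed by hand purely to show how the combined kernel is assembled.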



Author information

Corresponding author

Correspondence to Xiaodan Xie.


About this article


Cite this article

Xie, X., Li, B. & Chai, X. A manifold framework of multiple-kernel learning for hyperspectral image classification. Neural Comput & Applic 28, 3429–3439 (2017). https://doi.org/10.1007/s00521-016-2206-y
