
Improvement of the kernel minimum squared error model for fast feature extraction

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

The kernel minimum squared error (KMSE) model expresses the feature extractor as a linear combination of all the training samples in the high-dimensional kernel space. To extract a feature from a sample, KMSE must therefore evaluate as many kernel functions as there are training samples, so the cost of KMSE-based feature extraction grows linearly with the size of the training set. In this paper, we propose an efficient kernel minimum squared error (EKMSE) model for two-class classification. EKMSE expresses each feature extractor as a linear combination of nodes, which are a small portion of the training samples, so extracting a feature from a sample requires only as many kernel evaluations as there are nodes. Since the nodes are typically far fewer than the training samples, EKMSE is much faster than KMSE in feature extraction. EKMSE achieves the same training accuracy as the standard KMSE and also avoids overfitting. We implement the EKMSE model with two algorithms, and experimental results demonstrate its feasibility.
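To make the cost trade-off concrete, below is a minimal NumPy sketch of the two expansions. It assumes an RBF kernel and a ridge-regularized least-squares fit, and it picks nodes at random purely for illustration; the paper's two EKMSE algorithms select nodes deliberately and are not reproduced here. All names (rbf_kernel, fit_kmse, fit_ekmse, extract_feature) are hypothetical, not the authors' implementation.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
        sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dist)

    def fit_kmse(X, y, lam=1e-3, gamma=1.0):
        # Standard KMSE: one coefficient per training sample, so extracting a
        # feature later costs n kernel evaluations.
        K = rbf_kernel(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def fit_ekmse(X, y, node_idx, lam=1e-3, gamma=1.0):
        # Reduced model in the spirit of EKMSE: the expansion uses only the
        # m selected nodes, fitted by regularized least squares over all of X.
        Z = X[node_idx]                         # m nodes, m << n
        K_nm = rbf_kernel(X, Z, gamma)          # n x m design matrix
        beta = np.linalg.solve(K_nm.T @ K_nm + lam * np.eye(len(node_idx)),
                               K_nm.T @ y)
        return Z, beta

    def extract_feature(x, Z, beta, gamma=1.0):
        # Only m kernel evaluations per sample instead of n.
        return (rbf_kernel(x[None, :], Z, gamma) @ beta).item()

    # Toy two-class problem: labels +1 / -1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
                   rng.normal(1.0, 0.5, (50, 2))])
    y = np.concatenate([np.ones(50), -np.ones(50)])
    alpha = fit_kmse(X, y)  # full model: 100 coefficients, 100 kernel calls per feature
    Z, beta = fit_ekmse(X, y, node_idx=rng.choice(len(X), size=10, replace=False))
    print(extract_feature(np.array([0.8, 1.1]), Z, beta))

With m = 10 nodes instead of n = 100 training samples, each call to extract_feature costs 10 kernel evaluations rather than 100, which is the linear saving the abstract describes.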



Author information


Correspondence to Jinghua Wang.


About this article

Cite this article

Wang, J., Wang, P., Li, Q. et al. Improvement of the kernel minimum squared error model for fast feature extraction. Neural Comput & Applic 23, 53–59 (2013). https://doi.org/10.1007/s00521-012-0813-9

