A Fast Feature-based Dimension Reduction Algorithm for Kernel Classifiers


Abstract

This paper presents a novel dimension reduction algorithm for kernel-based classification. In the feature space, the proposed algorithm maximizes the ratio of the squared between-class distance to the sum of the within-class variances of the training samples, for a given reduced dimension. The algorithm has lower computational complexity than the recently reported kernel dimension reduction (KDR) method for supervised learning. Simulations on large training datasets demonstrate that the proposed algorithm performs comparably to, or marginally better than, KDR while being more computationally efficient. We further applied the algorithm to face recognition, where the number of training samples is very small. The resulting face recognition approach outperforms the eigenface approach based on principal component analysis (PCA) when the training data are complete, that is, representative of the whole dataset.
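
As a concrete reading of the criterion stated above (the notation here is our own, not taken from the paper): for a two-class problem with feature map $\varphi$ and a projection $W$ onto the reduced $d$-dimensional subspace, an objective of the described form is

$$J(W) = \frac{\lVert m_1 - m_2 \rVert^2}{s_1^2 + s_2^2}, \qquad m_c = \frac{1}{n_c} \sum_{i \in \mathcal{C}_c} W^{\top} \varphi(x_i), \qquad s_c^2 = \sum_{i \in \mathcal{C}_c} \lVert W^{\top} \varphi(x_i) - m_c \rVert^2,$$

where $\mathcal{C}_c$ indexes the $n_c$ training samples of class $c$; the paper's exact formulation and the constraints imposed on $W$ may differ. Both quantities are computable from the kernel matrix alone, without an explicit feature map. The following minimal sketch evaluates the unprojected two-class ratio from a kernel matrix; the function name and interface are ours, and it does not perform the paper's optimization over a reduced-dimension projection.

import numpy as np

def feature_space_ratio(K, y):
    """Squared between-class distance over summed within-class variances,
    measured in the kernel-induced feature space (two classes, no projection).

    K : (n, n) kernel matrix of the training samples.
    y : (n,) array of labels in {0, 1}.
    """
    idx0 = np.where(y == 0)[0]
    idx1 = np.where(y == 1)[0]
    K00 = K[np.ix_(idx0, idx0)]
    K11 = K[np.ix_(idx1, idx1)]
    K01 = K[np.ix_(idx0, idx1)]
    # ||mu0 - mu1||^2 = <mu0,mu0> - 2<mu0,mu1> + <mu1,mu1>, where
    # <mu_a, mu_b> is the mean of the corresponding kernel block.
    between = K00.mean() - 2.0 * K01.mean() + K11.mean()
    # Variance of class c: mean_i ||phi(x_i) - mu_c||^2
    #                    = mean(diag(Kcc)) - mean(Kcc).
    within = (np.diag(K00).mean() - K00.mean()) + (np.diag(K11).mean() - K11.mean())
    return between / within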

References

1. Blake, C. L. and Merz, C. J.: UCI Repository of Machine Learning Databases [http://www.ics.uci.edu/~mlearn/MLRepository.html]. University of California, Irvine, Dept. of Information and Computer Science (1998).

2. Boser, B., Guyon, I. and Vapnik, V.: A training algorithm for optimal margin classifiers. In: Proc. of the Fifth Annual Workshop on Computational Learning Theory (1992), pp. 144–152.

3. Fukumizu, K., Bach, F. R. and Jordan, M. I.: Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research 5 (2004), 73–99.

4. Manton, J. H.: Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing 50 (2002), 635–650.

5. McLachlan, G. J.: Discriminant Analysis and Statistical Pattern Recognition. John Wiley & Sons, New York (1992).

6. Mika, S., Rätsch, G., Weston, J., Schölkopf, B. and Müller, K.: Fisher discriminant analysis with kernels. In: Neural Networks for Signal Processing IX, IEEE (1999), pp. 41–48.

7. Pan, V.: How can we speed up matrix multiplication? SIAM Review 26 (1984), 393–415.

8. Suykens, J. A. K., Van Gestel, T., De Brabanter, J., De Moor, B. and Vandewalle, J.: Least Squares Support Vector Machines. World Scientific, Singapore (2002).

9. Suykens, J. A. K. and Vandewalle, J.: Least squares support vector machine classifiers. Neural Processing Letters 9 (1999), 293–300.

10. Tjahyadi, R.: Investigations into PCA and DCT Based Recognition Algorithms. Master Thesis, Curtin University of Technology (2004).

11. Turk, M. and Pentland, A.: Eigenfaces for recognition. Journal of Cognitive Neuroscience 3 (1991), 71–86.

12. Van Gestel, T., Suykens, J. A. K., Baesens, B., Viaene, S., Vanthienen, J., Dedene, G., De Moor, B. and Vandewalle, J.: Benchmarking least squares support vector machine classifiers. Machine Learning 54 (2004), 5–32.

13. Vapnik, V.: The Nature of Statistical Learning Theory. Springer-Verlag, New York (1995).

14. Yale University Face Database. [online] http://cvc.yale.edu/projects/yalefaces/yalefaces.html (2004).

15. Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T. and Vapnik, V.: Feature selection for SVMs. Advances in Neural Information Processing Systems 13 (2001), 668–674.

16. Xiong, H., Swamy, M. N. S. and Ahmad, M. O.: Optimizing the kernel in the empirical feature space. IEEE Transactions on Neural Networks 16 (2005), 460–474.

Author information

Correspondence to Wanquan Liu.

Cite this article

An, S., Liu, W., Venkatesh, S. et al. A Fast Feature-based Dimension Reduction Algorithm for Kernel Classifiers. Neural Process Lett 24, 137–151 (2006). https://doi.org/10.1007/s11063-006-9016-7
