
Linear dimension reduction in classification: adaptive procedure for optimum results

Published in: Advances in Data Analysis and Classification

Abstract

Linear dimension reduction plays an important role in classification problems, and a variety of techniques have been developed for linear dimension reduction prior to classification. However, no single method works best under all circumstances; rather, the best method depends on various characteristics of the data. We develop a two-step adaptive procedure in which the best dimension reduction method is first selected based on the data characteristics and then applied to the data at hand. Using both simulated and real-life data, we show that such a procedure can significantly reduce the misclassification rate.
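The two-step procedure described above can be sketched in code. The sketch below is purely illustrative and is not the authors' actual selection rule: the specific characteristics (`n_per_dim`, `skewness`) and the two candidate methods (a PCA projection versus a pooled-covariance discriminant direction for two classes) are assumptions chosen for brevity.

```python
import numpy as np

def data_characteristics(X, y):
    """Compute simple data characteristics used to choose a reduction method.
    (Illustrative only -- the paper uses its own set of characteristics.)"""
    n, p = X.shape
    # per-feature absolute skewness, averaged, as a crude non-normality measure
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    skew = np.abs((z ** 3).mean(axis=0)).mean()
    return {"n_per_dim": n / p, "n_classes": len(np.unique(y)), "skewness": skew}

def select_method(chars):
    """Toy selection rule mapping data characteristics to a method name."""
    if chars["skewness"] > 1.0:
        return "pca"          # markedly non-normal: use a variance-based projection
    if chars["n_per_dim"] < 5:
        return "pca"          # too few samples per dimension for stable class means
    return "lda_direction"    # otherwise use a discriminant-style projection

def reduce_dim(X, y, method, k=1):
    """Step 2: apply the selected linear reduction, returning projected data."""
    Xc = X - X.mean(axis=0)
    if method == "pca":
        # top-k principal components via SVD of the centered data
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        W = Vt[:k].T
    else:
        # simple discriminant direction for two classes labeled 0/1:
        # difference of class means, whitened by the (regularized) covariance
        m0 = X[y == 0].mean(axis=0)
        m1 = X[y == 1].mean(axis=0)
        Sw = np.cov(Xc.T) + 1e-6 * np.eye(X.shape[1])
        W = np.linalg.solve(Sw, m1 - m0)[:, None]
    return X @ W
```

A typical call would compute `chars = data_characteristics(X, y)`, pick `method = select_method(chars)`, and project with `reduce_dim(X, y, method)` before training any classifier on the reduced data.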



Author information


Corresponding author

Correspondence to Karsten Luebke.


About this article

Cite this article

Luebke, K., Weihs, C. Linear dimension reduction in classification: adaptive procedure for optimum results. Adv Data Anal Classif 5, 201–213 (2011). https://doi.org/10.1007/s11634-011-0091-x
