Abstract
Linear dimension reduction plays an important role in classification problems, and a variety of techniques have been developed for linear dimension reduction applied prior to classification. However, no single method works best under all circumstances; rather, the best method depends on characteristics of the data. We develop a two-step adaptive procedure in which the best dimension reduction method is first selected on the basis of these data characteristics and then applied to the data at hand. Using both simulated and real-life data, we show that such a procedure can significantly reduce the misclassification rate.
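To make the two-step idea concrete, the following is a minimal Python sketch, not the authors' implementation: step 1 computes a few example data characteristics, a hypothetical rule maps them to a linear reduction method (in the paper this selection is derived from the data characteristics; the simple threshold rule here is a placeholder), and step 2 applies the selected projection before a linear classifier. The specific characteristics, the threshold value, and the scikit-learn components are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline


def data_characteristics(X, y):
    """Step 1: summarize the data set (illustrative characteristics only)."""
    n, p = X.shape
    return {
        "n_per_dim": n / p,                                  # observations per dimension
        "mean_abs_skew": np.mean(np.abs(skew(X, axis=0))),   # rough non-normality proxy
        "n_classes": len(np.unique(y)),
    }


def select_reducer(chars, n_components=1):
    """Hypothetical selection rule mapping characteristics to a reduction method."""
    if chars["n_per_dim"] < 5:
        # Few observations per dimension: fall back to an unsupervised projection.
        return PCA(n_components=n_components)
    # Otherwise use a supervised projection (here LDA acting as a reducer).
    return LinearDiscriminantAnalysis(n_components=n_components)


# Step 2: apply the selected reduction, then classify on the projected data.
X, y = make_classification(n_samples=200, n_features=10, n_informative=4,
                           random_state=0)
chars = data_characteristics(X, y)
pipe = Pipeline([("reduce", select_reducer(chars)),
                 ("classify", LinearDiscriminantAnalysis())])
error = 1 - cross_val_score(pipe, X, y, cv=5).mean()  # estimated misclassification rate
print(chars, f"CV error: {error:.3f}")
```

The cross-validated error at the end stands in for the misclassification-rate comparison the abstract refers to; the adaptive idea is simply that `select_reducer` would choose differently on data sets with different characteristics.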
About this article
Cite this article
Luebke, K., Weihs, C. Linear dimension reduction in classification: adaptive procedure for optimum results. Adv Data Anal Classif 5, 201–213 (2011). https://doi.org/10.1007/s11634-011-0091-x