
Boosting the Fisher Linear Discriminant with Random Feature Subsets

  • Conference paper

Part of the book series: Advances in Soft Computing (AINSC, volume 30)

Abstract

Boosting increases the recognition accuracy of many types of classifiers. However, studies show that for the Fisher Linear Discriminant (FLD), a simple and widely used classifier, boosting does not lead to a significant increase in accuracy. In this paper, a new method for adapting the FLD to the boosting framework is proposed. This method, the AdaBoost-RandomFeatureSubset-FLD (AB-RFS-FLD), uses a different, randomly chosen subset of features for learning in each boosting round. The new method achieves significantly better accuracy than both a single FLD and FLD with boosting, with improvements reaching 6% in some cases. We show that the good performance can be attributed to the higher diversity of the individual FLDs, as well as to their better generalization ability.
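The full algorithm is given in the paper itself, but the scheme the abstract describes is compact: run standard AdaBoost, except that each round draws a random subset of the features and fits an FLD on that subset only. The following minimal Python sketch illustrates the idea; the subset fraction, number of rounds, and the weighting-by-resampling trick (scikit-learn's LinearDiscriminantAnalysis does not accept sample weights) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ab_rfs_fld(X, y, n_rounds=50, subset_frac=0.5, seed=None):
    """AdaBoost over FLDs, each trained on a random feature subset.

    Assumes binary labels y in {-1, +1}. Hyper-parameter names and
    defaults are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = max(1, int(subset_frac * d))      # features per round (assumption)
    w = np.full(n, 1.0 / n)               # AdaBoost example weights
    ensemble = []                          # (features, fld, alpha) triples
    for _ in range(n_rounds):
        feats = rng.choice(d, size=k, replace=False)
        # sklearn's LDA takes no sample_weight, so emulate the weighted
        # training set by resampling examples according to w.
        idx = rng.choice(n, size=n, replace=True, p=w)
        fld = LinearDiscriminantAnalysis().fit(X[idx][:, feats], y[idx])
        pred = fld.predict(X[:, feats])
        err = np.dot(w, pred != y)         # weighted training error
        if err >= 0.5:                     # no better than chance: skip round
            continue
        err = max(err, 1e-10)              # avoid division by zero below
        alpha = 0.5 * np.log((1.0 - err) / err)
        w *= np.exp(-alpha * y * pred)     # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((feats, fld, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted-majority vote of the per-round FLDs."""
    score = sum(a * fld.predict(X[:, f]) for f, fld, a in ensemble)
    return np.sign(score)
```

Resampling by the AdaBoost weights is one standard workaround for base learners that cannot be trained with example weights directly; the paper may incorporate the weights into the FLD in a different way.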




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Arodź, T. (2005). Boosting the Fisher Linear Discriminant with Random Feature Subsets. In: Kurzyński, M., Puchała, E., Woźniak, M., Żołnierek, A. (eds) Computer Recognition Systems. Advances in Soft Computing, vol 30. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32390-2_7


  • DOI: https://doi.org/10.1007/3-540-32390-2_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25054-8

  • Online ISBN: 978-3-540-32390-7

  • eBook Packages: Engineering (R0)
