An Empirical Study of a Linear Regression Combiner on Multi-class Data Sets

  • Conference paper
Multiple Classifier Systems (MCS 2009)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5519)

Abstract

The meta-learner MLR (Multi-response Linear Regression) has been proposed as a trainable combiner for fusing heterogeneous base-level classifiers. Although it has interesting properties, it has not yet been evaluated extensively. This paper employs learning curves to investigate the performance of MLR on multi-class classification problems relative to other trainable combiners. Several strategies (namely, Reusing, Validation and Stacking) are considered for using the available data to train both the base-level classifiers and the combiner. Experimental results show that, owing to its limited complexity, MLR can outperform the other combiners at small sample sizes when the Validation or Stacking strategy is adopted. MLR is therefore a preferred choice of trainable combiner for multi-class tasks with small sample sizes.
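To make the combiner concrete, here is a minimal sketch of MLR under the Validation strategy: the base-level classifiers are trained on one part of the data, and one least-squares regression per class is fitted on a disjoint part, mapping the concatenated base-classifier outputs to 0/1 class-indicator targets; the predicted class is the one with the largest regression response. The scikit-learn stack, the iris data, and the particular base classifiers and split sizes are illustrative assumptions rather than the paper's experimental setup, and the plain unconstrained least-squares fit stands in for MLR variants that constrain the regression weights (e.g., to be non-negative).

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    # Validation strategy: disjoint parts for the base classifiers and the combiner.
    X_base, X_comb, y_base, y_comb = train_test_split(
        X_train, y_train, test_size=0.5, random_state=0, stratify=y_train)

    # Heterogeneous base-level classifiers (an illustrative choice).
    bases = [GaussianNB(), KNeighborsClassifier(5),
             DecisionTreeClassifier(random_state=0)]
    for clf in bases:
        clf.fit(X_base, y_base)

    def meta_features(X):
        # Concatenate the class-probability outputs of all base classifiers.
        return np.hstack([clf.predict_proba(X) for clf in bases])

    # MLR: multi-response least squares from the base-classifier outputs to
    # one 0/1 class-indicator column per class.
    n_classes = len(np.unique(y))
    T = np.eye(n_classes)[y_comb]
    mlr = LinearRegression().fit(meta_features(X_comb), T)

    # The class whose regression response is largest wins.
    y_pred = mlr.predict(meta_features(X_test)).argmax(axis=1)
    print("MLR combiner test accuracy: %.3f" % (y_pred == y_test).mean())

Because the combiner only has to estimate one small weight vector per class, its low complexity is exactly what the abstract credits for the good small-sample behaviour.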





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhang, C.X., Duin, R.P.W. (2009). An Empirical Study of a Linear Regression Combiner on Multi-class Data Sets. In: Benediktsson, J.A., Kittler, J., Roli, F. (eds.) Multiple Classifier Systems. MCS 2009. Lecture Notes in Computer Science, vol. 5519. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02326-2_48

  • DOI: https://doi.org/10.1007/978-3-642-02326-2_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02325-5

  • Online ISBN: 978-3-642-02326-2

  • eBook Packages: Computer Science, Computer Science (R0)
