
Convex Hull Ensemble Machine for Regression and Classification

Knowledge and Information Systems

Abstract

We propose a new ensemble algorithm called the Convex Hull Ensemble Machine (CHEM). We first develop CHEM in Hilbert space and then modify it for regression and classification problems. We prove that, under regularity conditions, the ensemble model converges to the optimal model in Hilbert space. Empirical studies reveal that, for classification problems, CHEM attains prediction accuracy similar to that of boosting while being much more robust to output noise; it does not overfit the data even in cases where boosting does. For regression problems, CHEM is competitive with other ensemble methods such as gradient boosting and bagging.
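The defining idea named in the abstract is that the ensemble is built as a convex combination of base learners, so the combined model always stays inside the convex hull of the fitted hypotheses. The sketch below illustrates that update rule for regression. It is a minimal illustration under stated assumptions, not the paper's exact CHEM algorithm: the stretched-residual target, the fixed mixing weight alpha, and the choice of depth-3 regression trees are all assumptions made here for concreteness.

    # Illustrative convex-combination ensemble for regression.
    # NOT the paper's exact CHEM algorithm: the stretched-residual target,
    # the fixed mixing weight alpha, and the depth-3 trees are assumptions.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def convex_hull_ensemble(X, y, n_rounds=100, alpha=0.1, max_depth=3):
        F = np.full(len(y), y.mean())  # initial model: the constant mean
        models, train_mse = [], []
        for _ in range(n_rounds):
            # "Stretched" target: if the tree matched it exactly, the
            # convex update below would land exactly on y in one step.
            target = F + (y - F) / alpha
            h = DecisionTreeRegressor(max_depth=max_depth).fit(X, target)
            # Convex update: F remains a convex combination of the initial
            # constant model and the fitted trees, since the unrolled
            # weights (1-alpha)^T and alpha*(1-alpha)^(T-1-t) sum to 1.
            F = (1.0 - alpha) * F + alpha * h.predict(X)
            models.append(h)
            train_mse.append(np.mean((y - F) ** 2))
        return models, F, train_mse

    # Usage on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)
    _, F, mse = convex_hull_ensemble(X, y)
    print(f"train MSE after {len(mse)} rounds: {mse[-1]:.4f}")

Because every update is a convex combination, the ensemble's prediction is always a weighted average of base-learner outputs rather than an unbounded sum, which gives one intuition for the robustness to output noise reported in the abstract.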



Author information

Correspondence to Yongdai Kim.

Cite this article

Kim, Y., Kim, J. Convex Hull Ensemble Machine for Regression and Classification. Knowledge and Information Systems 6, 645–663 (2004). https://doi.org/10.1007/s10115-003-0116-7
