Tuning Cost-Sensitive Boosting and Its Application to Melanoma Diagnosis

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2096)

Included in the following conference series: Multiple Classifier Systems (MCS 2001)

Abstract

This paper investigates a methodology for effective model selection of cost-sensitive boosting algorithms. In many real situations, e.g. in automated medical diagnosis, it is crucial to tune classification performance towards the sensitivity and specificity required by the user. To this end, for binary classification problems, we have designed a cost-sensitive variant of AdaBoost in which (1) the model error function is weighted with separate costs for errors (false negatives and false positives) in the two classes, and (2) the weights are updated differently for negatives and positives at each boosting step. Finally, (3) a practical search procedure allows one to meet, or get as close as possible to, the sensitivity and specificity constraints without an extensive tabulation of the ROC curve. This off-the-shelf methodology was applied to the automatic diagnosis of melanoma on a set of 152 skin lesions described by geometric and colorimetric features, outperforming, on the same data set, skilled dermatologists and a specialized automatic system based on a multiple classifier combination.
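To make points (1) and (2) concrete, the following is a minimal sketch of one way a cost-sensitive AdaBoost weight update could look, assuming decision stumps as base learners and two illustrative cost parameters c_fn and c_fp. The function names and defaults are assumptions introduced for this example; this shows the general shape of such a variant, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact algorithm) of cost-sensitive
# AdaBoost for binary labels in {-1, +1}, where +1 is the positive
# (melanoma) class. c_fn and c_fp are hypothetical cost parameters.

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def cost_sensitive_adaboost(X, y, c_fn=2.0, c_fp=1.0, n_rounds=50):
    """Boosting in which misclassified positives (false negatives) are
    re-weighted by c_fn and misclassified negatives by c_fp."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # instance weights
    cost = np.where(y == 1, c_fn, c_fp)  # per-instance error cost
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        miss = stump.predict(X) != y
        # cost-weighted error of this round's weak hypothesis
        err = np.sum(w * cost * miss) / np.sum(w * cost)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        # asymmetric update: false negatives grow faster than false positives
        w *= np.exp(alpha * cost * miss)
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble


def predict(ensemble, X):
    """Sign of the weighted vote over all weak hypotheses."""
    return np.sign(sum(a * h.predict(X) for a, h in ensemble))
```

In the spirit of point (3), the ratio c_fn / c_fp would then be adjusted by a search over a small set of candidate values, retraining until the ensemble's sensitivity and specificity on held-out data satisfy the user's constraints, rather than tabulating the full ROC curve.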

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Merler, S., Furlanello, C., Larcher, B., Sboner, A. (2001). Tuning Cost-Sensitive Boosting and Its Application to Melanoma Diagnosis. In: Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2001. Lecture Notes in Computer Science, vol 2096. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48219-9_4

  • DOI: https://doi.org/10.1007/3-540-48219-9_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42284-6

  • Online ISBN: 978-3-540-48219-2

  • eBook Packages: Springer Book Archive
