
Confidence Sets for Classification

  • Conference paper
Statistical Learning and Data Sciences (SLDS 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9047)


Abstract

Conformal predictors, introduced by [13], serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. In the classification problem, conformal predictors can address the problem of classification with reject option. In the present paper, we propose a novel method for constructing confidence sets, inspired both by conformal prediction and by classification with reject option. An important feature of these confidence sets is that, when there are several observations to label, they control the proportion of the data we want to label. Moreover, we introduce a notion of risk adapted to classification with reject option. We show that, under this risk, the risk of our confidence sets converges to the risk of the confidence set based on the Bayes classifier.
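The abstract's key idea, controlling the proportion of data that gets labeled while rejecting low-confidence points, can be illustrated with a small plug-in sketch. The following is not the authors' construction, only a hedged toy version: it assumes we already have estimated class-1 probabilities (e.g. from any probabilistic classifier) and sets the rejection threshold as an empirical quantile of the confidence margin, so that a prescribed fraction of points is labeled.

```python
import numpy as np

def confidence_set_predict(scores, labelled_fraction):
    """Label only the most confident points; reject the rest.

    scores            : estimated class-1 probabilities eta_hat(x) in [0, 1]
    labelled_fraction : target proportion of points to label, e.g. 0.8
    Returns an int array of predictions: 1, 0, or -1 (reject).
    """
    margin = np.abs(scores - 0.5)  # distance from the decision boundary
    # Empirical quantile chosen so that roughly `labelled_fraction`
    # of the points have a margin above the threshold.
    threshold = np.quantile(margin, 1.0 - labelled_fraction)
    preds = np.where(scores >= 0.5, 1, 0)
    preds[margin < threshold] = -1  # reject low-confidence points
    return preds

# Example: with labelled_fraction = 0.5, the two points nearest 0.5
# are rejected and only the two confident points receive a label.
scores = np.array([0.9, 0.1, 0.55, 0.45])
print(confidence_set_predict(scores, 0.5))  # [ 1  0 -1 -1]
```

The quantile step is what gives direct control over the rejection rate, in contrast to fixing a confidence threshold a priori (as in Chow's classical rule [3]), where the fraction of rejected points depends on the unknown distribution.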


References

  1. Audibert, J.Y., Tsybakov, A.: Fast learning rates for plug-in classifiers. Ann. Statist. 35(2), 608–633 (2007)

  2. Bartlett, P., Wegkamp, M.: Classification with a reject option using a hinge loss. J. Mach. Learn. Res. 9, 1823–1840 (2008)

  3. Chow, C.K.: On optimum error and reject trade-off. IEEE Transactions on Information Theory 16, 41–46 (1970)

  4. Dvoretzky, A., Kiefer, J., Wolfowitz, J.: Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Math. Statist. 27, 642–669 (1956)

  5. Freund, Y., Mansour, Y., Schapire, R.: Generalization bounds for averaged classifiers. Ann. Statist. 32(4), 1698–1722 (2004)

  6. Györfi, L., Kohler, M., Krzyżak, A., Walk, H.: A Distribution-Free Theory of Nonparametric Regression. Springer Series in Statistics. Springer, New York (2002)

  7. Grandvalet, Y., Rakotomamonjy, A., Keshet, J., Canu, S.: Support vector machines with a reject option. In: Advances in Neural Information Processing Systems (NIPS 2008), vol. 21, pp. 537–544. MIT Press (2009)

  8. Herbei, R., Wegkamp, M.: Classification with reject option. Canad. J. Statist. 34(4), 709–721 (2006)

  9. Massart, P.: The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann. Probab. 18(3), 1269–1283 (1990)

  10. Nadeem, M., Zucker, J.D., Hanczar, B.: Accuracy-rejection curves (ARCs) for comparing classification methods with a reject option. In: MLSB, pp. 65–81 (2010)

  11. Vapnik, V.: Statistical Learning Theory. Adaptive and Learning Systems for Signal Processing, Communications, and Control. John Wiley & Sons Inc., New York (1998)

  12. Vovk, V., Gammerman, A., Saunders, C.: Machine-learning applications of algorithmic randomness. In: Proceedings of the 16th International Conference on Machine Learning, pp. 444–453 (1999)

  13. Vovk, V., Gammerman, A., Shafer, G.: Algorithmic Learning in a Random World. Springer, New York (2005)

  14. Wang, J., Shen, X., Pan, W.: On transductive support vector machines. In: Prediction and Discovery. Contemp. Math., vol. 443, pp. 7–19. Amer. Math. Soc., Providence, RI (2007)

  15. Wegkamp, M., Yuan, M.: Support vector machines with a reject option. Bernoulli 17(4), 1368–1385 (2011)


Author information

Correspondence to Christophe Denis or Mohamed Hebiri.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Denis, C., Hebiri, M. (2015). Confidence Sets for Classification. In: Gammerman, A., Vovk, V., Papadopoulos, H. (eds) Statistical Learning and Data Sciences. SLDS 2015. Lecture Notes in Computer Science, vol 9047. Springer, Cham. https://doi.org/10.1007/978-3-319-17091-6_25


  • DOI: https://doi.org/10.1007/978-3-319-17091-6_25

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-17090-9

  • Online ISBN: 978-3-319-17091-6

  • eBook Packages: Computer Science (R0)
