
Out-of-Sample Evaluation

Reference work entry in the Encyclopedia of Machine Learning and Data Mining

Definition

Out-of-sample evaluation refers to algorithm evaluation in which the learned model is evaluated on out-of-sample data, that is, data that were not used in the process of learning the model. Out-of-sample evaluation provides a less biased estimate of learning performance than in-sample evaluation. Cross-validation, holdout evaluation, and prospective evaluation are the three main approaches to out-of-sample evaluation. Cross-validation and holdout evaluation run the risk of overestimating performance relative to what should be expected on future data, especially if the data set used is not a true random sample of the distribution to which the learned models are to be applied in the future.
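The sketch below illustrates the holdout and cross-validation approaches mentioned above; the dataset, model, split sizes, and scikit-learn calls are illustrative assumptions rather than part of this entry.

```python
# A minimal sketch of out-of-sample evaluation via holdout and cross-validation.
# Dataset, model, and parameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Holdout evaluation: fit on a training sample, then score on held-out data
# that played no part in learning the model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Cross-validation: repeat the holdout idea over k folds so that every
# record is evaluated out-of-sample exactly once.
scores = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
print("10-fold CV accuracy:", scores.mean())
```

Both estimates are out-of-sample, yet both can still be optimistic if the available data are not a true random sample of the distribution on which the model will later be deployed; prospective evaluation on genuinely future data addresses that gap.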

Cross-References



Copyright information

© 2017 Springer Science+Business Media New York

About this entry

Cite this entry

(2017). Out-of-Sample Evaluation. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_621
