Definition
Out-of-sample evaluation refers to the evaluation of a learned model on out-of-sample data: data that were not used in the process of learning the model. It provides a less biased estimate of learning performance than in-sample evaluation. Cross-validation, holdout evaluation, and prospective evaluation are the three main approaches to out-of-sample evaluation. Cross-validation and holdout evaluation run the risk of overestimating performance relative to what should be expected on future data, especially if the data set used is not a true random sample of the distribution to which the learned model will be applied in the future.
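The holdout and cross-validation splits described above can be sketched in plain Python. This is a minimal illustration, not code from the entry; the function names and the 30% test fraction are illustrative choices.

```python
import random

def holdout_split(data, test_fraction=0.3, seed=0):
    """Shuffle the data, then hold out a fraction as out-of-sample test data.

    The model is learned on the returned train portion only; the test
    portion is never seen during learning.
    """
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.

    Each of the k folds serves as out-of-sample test data exactly once,
    while the remaining k-1 folds are used for learning.
    """
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test_idx = folds[i]
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, test_idx

data = list(range(10))
train, test = holdout_split(data)
assert set(train) | set(test) == set(data)      # every point is used once
for train_idx, test_idx in kfold_indices(len(data), k=5):
    assert not set(train_idx) & set(test_idx)   # test data never overlaps training data
```

In practice one would average a performance metric over the k test folds; the key property, checked by the assertions, is that test data are disjoint from the data used for learning.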
Copyright information
© 2017 Springer Science+Business Media New York
Cite this entry
(2017). Out-of-Sample Evaluation. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_621
Print ISBN: 978-1-4899-7685-7
Online ISBN: 978-1-4899-7687-1