
Convergence of the nearest neighbor rule


Abstract:

If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart [1] have shown that the average probability of error using $n$ known samples (denoted by $R_n$) converges to a number $R$ as $n$ tends to infinity, where $R^{\ast} \leq R \leq 2R^{\ast}(1 - R^{\ast})$ and $R^{\ast}$ is the Bayes probability of error. Here it is shown that when the samples lie in $n$-dimensional Euclidean space, the probability of error for the NNR conditioned on the $n$ known samples (denoted by $L_n$, so that $E L_n = R_n$) converges to $R$ with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of $R$ from the $n$ known samples are shown to be consistent. Rates of convergence of $L_n$ to $R$ are also given.
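
To make the setting concrete, here is a minimal Python sketch of the nearest neighbor rule: each query point is assigned the label of its closest known sample, and the empirical error rate on fresh test points gives a Monte Carlo estimate of the conditional error $L_n$. The two-class Gaussian data, the sample sizes, and the helper name nnr_classify are illustrative assumptions, not taken from the paper.

import numpy as np

def nnr_classify(train_x, train_y, query):
    """Assign `query` the label of its nearest known sample (1-NN rule)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    return train_y[np.argmin(dists)]

rng = np.random.default_rng(0)

# n known (labeled) samples; the two overlapping Gaussian classes are an
# illustrative assumption, not the class densities studied in the paper.
n = 500
y = rng.integers(0, 2, size=n)
x = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 2))

# Fresh test samples from the same distribution give a Monte Carlo estimate
# of L_n, the NNR error conditioned on the n known samples; the paper shows
# L_n -> R with probability 1 as n grows.
m = 2000
y_test = rng.integers(0, 2, size=m)
x_test = rng.normal(loc=y_test[:, None] * 1.5, scale=1.0, size=(m, 2))
errors = sum(nnr_classify(x, y, q) != t for q, t in zip(x_test, y_test))
print(f"empirical estimate of L_n: {errors / m:.3f}")

For large $n$ this estimate should fall between the Bayes error $R^{\ast}$ and the Cover-Hart bound $2R^{\ast}(1 - R^{\ast})$, which serves as a quick sanity check on the sketch.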
Published in: IEEE Transactions on Information Theory (Volume: 17, Issue: 5, September 1971)
Page(s): 566 - 571
Date of Publication: 06 January 2003

References

[1] T. M. Cover and P. E. Hart, "Nearest neighbor pattern classification," IEEE Transactions on Information Theory, vol. IT-13, no. 1, pp. 21-27, January 1967.