Abstract:
If the nearest neighbor rule (NNR) is used to classify unknown samples, then Cover and Hart [1] have shown that the average probability of error using $n$ known samples (denoted by $R_n$) converges to a number $R$ as $n$ tends to infinity, where $R^{\ast} \leq R \leq 2R^{\ast}(1 - R^{\ast})$ and $R^{\ast}$ is the Bayes probability of error. Here it is shown that when the samples lie in $n$-dimensional Euclidean space, the probability of error for the NNR conditioned on the $n$ known samples (denoted by $L_n$, so that $E L_n = R_n$) converges to $R$ with probability 1 under mild continuity and moment assumptions on the class densities. Two estimates of $R$ from the $n$ known samples are shown to be consistent. Rates of convergence of $L_n$ to $R$ are also given.
Published in: IEEE Transactions on Information Theory (Volume 17, Issue 5, September 1971)
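
The abstract does not spell out the two estimates of $R$. Purely as an illustration of the kind of sample-based estimate the paper studies, the sketch below computes the classical leave-one-out (deleted) nearest neighbor error rate, in which each of the $n$ known samples is classified by its nearest neighbor among the remaining $n - 1$. The function name `loo_nn_error` and the synthetic Gaussian data are assumptions made for this example, not constructs from the paper.

```python
import numpy as np

def loo_nn_error(X, y):
    """Leave-one-out 1-NN error rate: each sample is classified by the
    label of its nearest neighbor among the remaining n - 1 samples.
    (Illustrative estimate of R; not necessarily the paper's estimator.)"""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Pairwise squared Euclidean distances; the diagonal is set to
    # infinity so a sample is never its own nearest neighbor.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)
    nn = d2.argmin(axis=1)             # index of each sample's nearest neighbor
    return float((y[nn] != y).mean())  # fraction misclassified

# Example: two overlapping Gaussian classes in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
               rng.normal(1.0, 1.0, (500, 2))])
y = np.repeat([0, 1], 500)
print("leave-one-out NN error estimate:", loo_nn_error(X, y))
```

For two unit-variance Gaussian classes whose means differ by 1 in each coordinate, the estimate should settle near the asymptotic NN risk $R$, which by the Cover-Hart bound lies between $R^{\ast}$ and $2R^{\ast}(1 - R^{\ast})$.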