Although the nearest-neighbor classifier still performs well when the training set is sufficiently large, several improvements to and comparisons with it have been proposed. In this article, we discuss two kinds of weighted k-nearest neighbor classifier (wk-NNC), weighted by different methods, and adjust their weights with n-fold cross-validation to improve classification performance. Classification accuracy alone is not enough to characterize a classifier's performance, so we use both the variance and the accuracy for comparison. We run experiments on several datasets (HEART, WINE, and HILL-VALLEY, downloaded from the UCI repository) and compare the performance of the wk-NNC when the weights are adjusted with n = 3 or n = 10 folds of cross-validation. The results show that (1) the instance-weighted wk-NNC is more stable than the attribute-weighted wk-NNC when the dataset size changes, (2) in terms of variance, classifier performance improves as the number of folds increases, as on the HEART and WINE datasets, and (3) when the dimensionality exceeds 100, n-fold cross-validation fails, as on the HILL-VALLEY dataset. This paper also provides insight into model selection on a given dataset.
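To make the experimental setup concrete, the following is a minimal sketch (not the authors' implementation) of the kind of comparison the abstract describes: an instance-weighted k-NN, here approximated by scikit-learn's distance weighting, evaluated against a uniform k-NN under 3- and 10-fold cross-validation, reporting both mean accuracy and variance. The choice of k = 5 and the use of sklearn's built-in wine data as a stand-in for the UCI WINE set are assumptions for illustration only.

```python
# Sketch: compare unweighted vs. instance-weighted k-NN under
# 3- and 10-fold cross-validation, tracking accuracy and variance.
# Assumptions: k=5 is arbitrary; distance weighting stands in for
# the paper's instance-weighting scheme; sklearn's wine data
# substitutes for the UCI WINE dataset.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)

for weights in ("uniform", "distance"):   # unweighted vs. instance-weighted
    for n_folds in (3, 10):               # fold counts considered in the paper
        clf = make_pipeline(StandardScaler(),
                            KNeighborsClassifier(n_neighbors=5, weights=weights))
        scores = cross_val_score(clf, X, y, cv=n_folds)
        # The paper compares both accuracy and variance, not accuracy alone.
        print(f"{weights:8s} k-NN, {n_folds:2d}-fold: "
              f"mean acc = {scores.mean():.3f}, var = {scores.var():.4f}")
```

A full reproduction would additionally tune the attribute (feature) weights themselves inside each cross-validation fold; the sketch above only illustrates how fold count and weighting scheme enter the comparison.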