Impact Statement:
The adaptive k-nearest neighbor (AKNN) algorithm is an improvement over the traditional k-nearest neighbor (KNN) technique in machine learning. AKNN assigns a more appropriate k-value to each sample, enhancing the accuracy of KNN, and is applicable to a variety of machine learning tasks. However, AKNN is limited to single-view data and therefore cannot be applied directly to multiview tasks. In this article, we address this limitation by extending AKNN to multiview classification, employing evidential theory to fuse consistent and complementary information from different views. Our approach improves AKNN's classification accuracy on multiview data.
Abstract:
The k-nearest neighbor (KNN) classifier labels unlabeled samples according to the parameter k, a user-defined constant that usually depends on prior knowledge. The selection of k is crucial, as the size of the sample neighborhood affects the classification accuracy. To tackle this issue, we introduce the adaptive KNN (AKNN), which constructs a decision tree to assign a different k-value to each sample. In AKNN, we use the sample label information to calculate the weights between samples. Furthermore, to extend AKNN to a multiview scenario, we propose a method named multiview adaptive KNN (MVAKNN), which integrates information from every single view by using the Dempster–Shafer theory. We conduct experiments on three benchmark multiview image datasets, and the results show that MVAKNN exhibits desirable classification accuracy, outperforming some single-view and multiview methods. Experiments with Gaussian noise show the robustness of the proposed method.
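The abstract's view-fusion step rests on Dempster's rule of combination. The sketch below, a minimal Python illustration rather than the paper's actual MVAKNN implementation, combines two hypothetical per-view mass functions over a frame of class labels; the class names, mass values, and residual mass on the whole frame (modeling ignorance, as in evidential KNN variants) are illustrative assumptions.

```python
from itertools import product


def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    via Dempster's rule: multiply masses of intersecting focal sets
    and renormalize by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}


# Hypothetical evidence from two views for a two-class frame;
# mass left on the full frame encodes per-view uncertainty.
frame = frozenset({"cat", "dog"})
view1 = {frozenset({"cat"}): 0.6, frozenset({"dog"}): 0.1, frame: 0.3}
view2 = {frozenset({"cat"}): 0.5, frozenset({"dog"}): 0.2, frame: 0.3}

fused = dempster_combine(view1, view2)
prediction = max(fused, key=fused.get)  # frozenset({'cat'})
```

Because both views place most of their mass on "cat", fusion reinforces that hypothesis while the normalization discards the conflicting cross terms.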
Published in: IEEE Transactions on Artificial Intelligence ( Volume: 5, Issue: 3, March 2024)