Elsevier

Pattern Recognition

Volume 30, Issue 3, March 1997, Pages 459-465

Fast implementations of nearest neighbor classifiers

https://doi.org/10.1016/S0031-3203(96)00098-2

Abstract

Standard implementations of non-parametric classifiers have large computational requirements. Parzen classifiers use the distances of an unknown vector to all N prototype samples, and consequently exhibit O(N) behavior in both memory and time. We describe four techniques for expediting the nearest neighbor methods: replacing the linear search with a new kd tree method, exhibiting approximately O(N^1/2) behavior; employing an L∞ instead of an L2 distance metric; using variance-ordered features; and rejecting prototypes by evaluating distances in low dimensionality subspaces. We demonstrate that variance-ordered features yield significant efficiency gains over the same features linearly transformed to have uniform variance. We give results for a large OCR problem, but note that the techniques expedite recognition for arbitrary applications. Three of the four techniques preserve recognition accuracy.
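
Two of these techniques, variance-ordered features and subspace rejection, are straightforward to illustrate. The following is a minimal Python sketch, not the authors' implementation; the function names and the synthetic data are illustrative. It performs a linear nearest-neighbor scan that accumulates the squared L2 distance one feature at a time, visiting features in order of decreasing variance, and abandons a prototype as soon as its partial distance in the low-dimensional subspace already exceeds the best full distance found so far.

import numpy as np

def variance_order(prototypes):
    """Indices of features sorted by decreasing variance.

    Visiting high-variance features first makes the partial distance
    grow quickly, so most prototypes are rejected after only a few
    of the dimensions have been examined.
    """
    return np.argsort(prototypes.var(axis=0))[::-1]

def nn_partial_distance(query, prototypes, order):
    """Linear NN search with subspace (partial-distance) rejection."""
    best_idx, best_d2 = -1, np.inf
    for i, p in enumerate(prototypes):
        d2 = 0.0
        for j in order:                  # variance-ordered features
            diff = query[j] - p[j]
            d2 += diff * diff
            if d2 >= best_d2:            # rejected in a subspace
                break
        else:                            # survived all features: new best
            best_idx, best_d2 = i, d2
    return best_idx, best_d2

# Illustrative usage on synthetic data with unequal feature variances.
rng = np.random.default_rng(0)
scales = np.linspace(5.0, 0.1, 64)       # decreasing per-feature spread
prototypes = rng.normal(size=(1000, 64)) * scales
query = rng.normal(size=64) * scales
idx, d2 = nn_partial_distance(query, prototypes, variance_order(prototypes))
print(idx, d2)

The same loop accommodates the L∞ metric by replacing the accumulation with d = max(d, abs(diff)), and the linear scan itself can be replaced by a kd tree traversal for the sublinear behavior reported above.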


Cited by (58)

  • Improving scalability of ART neural networks

    2017, Neurocomputing
    Citation Excerpt :

The idea here is the hierarchical organization of data into a tree structure using the features, in order to accelerate access to it. Although k-d trees can be fast at finding neighbors, as in the case of the k-Nearest-Neighbors (kNN) classifier [17], their representation of a hyperbox by only one point is too inaccurate for classification. Approaches that use hyperboxes directly, such as the R-Tree (and even the X-tree, its extension for high dimensions), are slower at finding prototypes in the neighborhood of a test sample than activating all prototypes of ML-ARAM, especially in classification problems with thousands of features.

  • NNMap: A method to construct a good embedding for nearest neighbor classification

    2015, Neurocomputing
    Citation Excerpt :

To overcome the above limitations of NN classification, in this work we propose a new method, called NNMap, which speeds up the NN classifier through data embedding. The proposed method makes three key contributions to the current state of the art [9–14]. The first contribution is that the proposed method obtains an efficient distance metric to take the place of the original, expensive distance.

  • Fuzzy nearest neighbor algorithms: Taxonomy, experimental analysis and prospects

    2014, Information Sciences
    Citation Excerpt :

    The aforementioned drawbacks have been analyzed extensively by the research community. As a result, many approaches have been proposed regarding, for example, the computation of similarity measures [16], the optimum choice of the k parameter [60], the definition of weighting schemes for patterns and attributes [96,51], the adaptation of the algorithm to data [43], the development of fast and approximate versions of the NN rule, devised to quicken the computation of the nearest neighbors [37,74,5,70], and the reduction of the training data [31,91,22,26]. Fuzzy Sets Theory (FST) [107] has been the basis of a remarkable number of these approaches.

  • Neighbors' distribution property and sample reduction for support vector machines

    2014, Applied Soft Computing Journal
    Citation Excerpt :

The time complexity of the naïve algorithm for solving kNN is O(n^2). To speed up the search for k nearest neighbors, many researchers have studied the problem and proposed faster algorithms [21–25]. In this paper, kNN is solved by the naïve algorithm.

  • Boundary detection and sample reduction for one-class Support Vector Machines

    2014, Neurocomputing
    Citation Excerpt :

Without considering speedup strategies, the time complexity of solving kNN is O(n^2). To speed up the search for k nearest neighbors, a large number of faster algorithms have been proposed [18–22]. This paper uses only the naïve algorithm to solve kNN.
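
For context, the naïve O(n^2) computation mentioned in the last two excerpts amounts to evaluating the distance between every pair of the n training samples. The following is a minimal Python sketch under that reading; the function name and the synthetic data are illustrative.

import numpy as np

def knn_all_pairs(X, k):
    """Naive k-nearest-neighbor search among all n samples.

    Forms the full (n, n) matrix of squared L2 distances, hence the
    O(n^2) time (and memory) referred to in the excerpts above.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    return np.argsort(d2, axis=1)[:, :k]  # indices of the k nearest

# Illustrative usage: 5 nearest neighbors of each of 200 samples.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
print(knn_all_pairs(X, k=5).shape)        # (200, 5)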
