A concurrency control algorithm for nearest neighbor query
Cited by (10)
Sampling technique for noisy and borderline examples problem in imbalanced classification
2023, Applied Soft Computing

SMOTE-NaN-DE: Addressing the noisy and borderline examples problem in imbalanced classification by natural neighbors and differential evolution
2021, Knowledge-Based Systems
Citation Excerpt: The Natural Neighbor (NaN) [29] is the state-of-the-art technology for neighbors. Compared to k-nearest neighbors (KNN) [36], reverse nearest neighbors (RNN) [36], k nearest centroid neighbors (KNCN) [37], etc., the main advantages of the NaN are that (a) it is parameter-free; (b) the neighbor number of each sample is different; (c) it is more suitable for manifold distributions. Because of these advantages, the NaN has been applied to outlier detection [38], clustering analysis [39,40], instance reduction [41], and semi-supervised learning [42,43].
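The parameter-free, variable-size neighbor sets mentioned in the excerpt can be illustrated with a minimal sketch of the natural-neighbor idea: grow the neighborhood size r until every point appears in some other point's r-NN set, then take mutual neighbors. The function names and the termination rule here are illustrative assumptions, not the cited papers' exact algorithm.

```python
import math

def knn(points, i, r):
    """Indices of the r nearest neighbors of points[i], excluding i itself."""
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[i], points[j]))
    return [j for j in order if j != i][:r]

def natural_neighbors(points):
    """Sketch of a natural-neighbor search (assumed formulation):
    expand r until every point is in at least one other point's r-NN set,
    then report mutual neighbors at that r.  No parameter is supplied by
    the caller, and each point may end up with a different neighbor count."""
    n = len(points)
    r = 1
    while True:
        nbrs = [knn(points, i, r) for i in range(n)]
        in_degree = [0] * n
        for i in range(n):
            for j in nbrs[i]:
                in_degree[j] += 1
        if all(d > 0 for d in in_degree) or r >= n - 1:
            # natural neighbors of i: points that are neighbors of i
            # and simultaneously count i among their own neighbors
            return [[j for j in nbrs[i] if i in nbrs[j]] for i in range(n)]
        r += 1
```

On a toy set with two clusters, points in the tight cluster end up with more natural neighbors than the isolated pair, which is the "different neighbor number per sample" property the excerpt highlights.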
A novel oversampling technique for class-imbalanced learning based on SMOTE and natural neighbors
2021, Information Sciences
Citation Excerpt: Concretely, N refers to the number of generated samples for each base sample. NNr(xi) = {xj, xj+1, …, xj+r-1} is the set of r nearest neighbors [43] of sample xi. NNr(xi) has r elements and does not include sample xi.
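The definition in this excerpt — NNr(xi) has exactly r elements and excludes xi itself — is easy to mirror in code, together with the SMOTE-style interpolation that generates N synthetic samples per base sample. This is a hedged sketch under assumed names (NN_r, smote_like); it is not the cited paper's implementation.

```python
import math
import random

def NN_r(X, i, r):
    """NNr(xi): indices of the r nearest neighbors of X[i], excluding i."""
    order = sorted((j for j in range(len(X)) if j != i),
                   key=lambda j: math.dist(X[i], X[j]))
    return order[:r]

def smote_like(X, i, r, N):
    """Generate N synthetic samples for base sample X[i] by linear
    interpolation toward randomly chosen members of NNr(xi)."""
    nbrs = NN_r(X, i, r)
    out = []
    for _ in range(N):
        j = random.choice(nbrs)
        gap = random.random()  # interpolation factor in [0, 1)
        out.append(tuple(a + gap * (b - a) for a, b in zip(X[i], X[j])))
    return out
```

For X = [(0,0), (1,0), (2,0), (5,0)], NN_r(X, 0, 2) returns [1, 2]; the synthetic samples then lie on the segments from (0,0) toward those two neighbors.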
Reverse k-nearest neighbor search in the presence of obstacles
2016, Information Sciences
Citation Excerpt: Given a multi-dimensional data set P and a query point q, a reverse nearest neighbor (RNN) query retrieves all the points in P that have q as their nearest neighbor (NN). Due to its wide application base, such as decision support [15], profile-based marketing [15,26], and resource allocation [15,35], RNN is one of the most popular variants of NN queries [7,12,14,17,20]. Formally, RNN(q) = {p ∈ P | q ∈ NN(p)}, where RNN(q) represents the set of reverse nearest neighbors of q and NN(p) denotes the NN of a point p ∈ P. Consider an example in Fig. 1a, where the data set P consists of three data points (p1, p2, p3) in a two-dimensional (2D) space.
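The formal definition RNN(q) = {p ∈ P | q ∈ NN(p)} translates directly into a brute-force check: for each p in P, compute p's nearest neighbor over P ∪ {q} and keep p if that neighbor is q. The function names and the toy data below are assumptions for illustration, not the cited paper's example.

```python
import math

def nearest(points, p):
    """NN(p): the point closest to p among `points`, excluding p itself."""
    return min((x for x in points if x != p), key=lambda x: math.dist(p, x))

def rnn(P, q):
    """RNN(q) = {p in P | q is the nearest neighbor of p}, with each p's
    nearest neighbor taken over P together with the query point q."""
    return [p for p in P if nearest(P + [q], p) == q]
```

With P = [(0,0), (1,0), (5,5)] and q = (2,0), only (5,5) has q as its nearest neighbor — (0,0) and (1,0) are closer to each other than to q — so rnn(P, q) returns [(5, 5)].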
Network Voronoi Diagram on uncertain objects for nearest neighbor queries
2015, Information Sciences
Citation Excerpt: However, it is difficult to achieve high efficiency in multi-user environments due to the depth-first fashion. Chen and Chin [6] overcame this disadvantage by considering the concurrency feature. As an important variant of NN search, continuous nearest neighbor (CNN) queries in Euclidean space [47,45,13] have also been studied since the first solution was proposed by Song and Roussopoulos in [43].
A performance comparison of distance-based query algorithms using R-trees in spatial databases
2007, Information Sciences