Abstract
A popular method for discriminating between normal and abnormal data accepts a test object if its nearest-neighbor distances in a reference data set lie within a certain threshold. In this work we investigate using a subset of the original data set as the reference set. We discuss the relationship between reference set size and generalization, and show that finding a reference-consistent subset of minimum cardinality is intractable. We then describe an algorithm that computes a reference-consistent subset in only two passes over the reference set. Experimental results confirm the effectiveness of the approach.
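To make the accept/reject rule and the role of a condensed reference set concrete, the following Python sketch implements the basic nearest-neighbor domain description test and a simplified Hart-style condensation that grows a reference-consistent subset. It is illustrative only: the function and parameter names (accepts, consistent_subset, threshold, k) are assumptions, and the greedy loop below is not the paper's two-pass algorithm, whose details are given in the full text.

import numpy as np

def accepts(x, reference, threshold, k=1):
    """Accept x as normal if its k-th nearest-neighbor distance
    in the reference set is within the threshold."""
    if len(reference) < k:
        return False  # not enough reference objects to evaluate the rule
    dists = np.sort(np.linalg.norm(np.asarray(reference) - x, axis=1))
    return dists[k - 1] <= threshold

def consistent_subset(data, threshold, k=1):
    """Greedy Hart-style condensation (illustrative, not the paper's
    two-pass algorithm): grow a subset until every training object
    is accepted using the subset alone."""
    subset = [data[0]]
    changed = True
    while changed:
        changed = False
        for x in data:
            if not accepts(x, subset, threshold, k):
                subset.append(x)
                changed = True
    return np.array(subset)

# Example usage on synthetic 2-D data (all values here are illustrative).
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))
S = consistent_subset(train, threshold=0.5)
print("condensed reference set size:", len(S), "of", len(train))
print("test object accepted:", accepts(np.array([0.1, -0.2]), S, threshold=0.5))

The greedy loop above may rescan the training data several times before stabilizing; the paper's contribution is an algorithm that obtains a reference-consistent subset with only two passes over the reference set.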
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Angiulli, F. (2005). Condensed Nearest Neighbor Data Domain Description. In: Famili, A.F., Kok, J.N., Peña, J.M., Siebes, A., Feelders, A. (eds) Advances in Intelligent Data Analysis VI. IDA 2005. Lecture Notes in Computer Science, vol 3646. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11552253_2
DOI: https://doi.org/10.1007/11552253_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28795-7
Online ISBN: 978-3-540-31926-9
eBook Packages: Computer Science (R0)