Abstract
Prototype classifiers have been studied for many years, yet few of them support incremental learning. Moreover, most prototype classifiers require the user to predetermine the number of prototypes, and an improper choice can undermine classification performance. To address these issues, this paper proposes an online supervised algorithm named Incremental Learning Vector Quantization (ILVQ) for classification tasks. The proposed method makes three contributions. (1) Through an insertion policy, ILVQ incrementally learns new prototypes, covering both between-class and within-class incremental learning. (2) Through an adaptive threshold scheme, ILVQ dynamically determines the number of prototypes needed for each class according to the distribution of the training data; unlike most existing prototype classifiers, it therefore requires no prior knowledge of the number of prototypes or their initial values. (3) A prototype-removal technique eliminates useless prototypes caused by noise in the input data. Experimental results show that ILVQ accommodates incremental data environments and provides good recognition performance and storage efficiency.
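The abstract only outlines these mechanisms. As a rough illustration of how an incremental prototype learner of this general kind can be organized, the Python sketch below inserts a training sample as a new prototype whenever the nearest stored prototype carries a different label or lies beyond an adaptive threshold, and otherwise nudges the winning prototype toward the sample. The class name SimpleIncrementalLVQ, the winner-update rule, and the choice of threshold (distance from the winning prototype to its nearest neighbouring prototype) are illustrative assumptions; they do not reproduce the paper's actual insertion, threshold, or noise-removal rules.

import numpy as np

class SimpleIncrementalLVQ:
    # Minimal incremental prototype classifier (illustrative sketch only).
    def __init__(self, learning_rate=0.05):
        self.lr = learning_rate
        self.protos = []  # list of (vector, label) pairs

    def _nearest(self, x):
        # Index of and distance to the closest stored prototype.
        dists = [np.linalg.norm(x - p) for p, _ in self.protos]
        i = int(np.argmin(dists))
        return i, dists[i]

    def _threshold(self, i):
        # Adaptive threshold: distance from prototype i to its nearest
        # other prototype (an assumed, illustrative rule).
        pi, _ = self.protos[i]
        d = [np.linalg.norm(pi - p)
             for j, (p, _) in enumerate(self.protos) if j != i]
        return min(d) if d else np.inf

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if not self.protos:
            self.protos.append((x, y))  # first sample becomes a prototype
            return
        i, dist = self._nearest(x)
        p_i, y_i = self.protos[i]
        if y_i != y or dist > self._threshold(i):
            # New class (between-class) or poorly covered region
            # (within-class): insert the sample as a new prototype.
            self.protos.append((x, y))
        else:
            # Otherwise move the winning prototype toward the sample.
            self.protos[i] = (p_i + self.lr * (x - p_i), y_i)

    def predict(self, x):
        i, _ = self._nearest(np.asarray(x, dtype=float))
        return self.protos[i][1]

# Usage on a toy stream of labelled samples.
clf = SimpleIncrementalLVQ()
for x, y in [([0.0, 0.0], "a"), ([0.1, 0.2], "a"), ([5.0, 5.0], "b")]:
    clf.partial_fit(x, y)
print(clf.predict([0.2, 0.1]))  # expected: "a"
print(clf.predict([4.8, 5.3]))  # expected: "b"

Note that this toy version omits the noise-removal step entirely; a faithful implementation would follow the rules given in the paper itself.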



Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grants 60975047, 60723003, and 60721002), the 973 Program (Grant 2010CB327903), and the Jiangsu NSF (Grant BK2009080).
Cite this article
Xu, Y., Shen, F. & Zhao, J. An incremental learning vector quantization algorithm for pattern classification. Neural Comput & Applic 21, 1205–1215 (2012). https://doi.org/10.1007/s00521-010-0511-4