Abstract:
Learning Vector Quantization (LVQ) is a popular class of nearest-prototype classifiers for multiclass classification. Learning algorithms from this family are widely used because of their intuitively clear learning process and ease of implementation. They run efficiently and in many cases provide state-of-the-art performance. In this paper we propose a modification of the LVQ algorithm that addresses the problems of determining an appropriate number of prototypes, sensitivity to initialization, and sensitivity to noise in the data. The proposed algorithm allows adaptive addition of prototypes at potentially beneficial locations and removal of harmful or less useful prototypes. The prototype addition and removal steps can be easily implemented on top of many existing LVQ algorithms. Experimental results on synthetic and benchmark datasets show that the proposed modifications can significantly improve LVQ classification accuracy while at the same time determining the appropriate number of prototypes and avoiding the problems of initialization.
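For orientation, the sketch below (assuming NumPy) shows a plain LVQ1 update alternated with an illustrative prototype addition/pruning step. The function names (lvq1_epoch, adapt_prototypes), the pruning rule (drop prototypes that never win), and the addition rule (insert a prototype at the centroid of the worst-classified class) are hypothetical placeholders used only to illustrate the general idea; they are not the specific criteria proposed in the paper.

```python
# Minimal LVQ1 sketch with illustrative prototype add/remove steps.
# The add/remove heuristics here are assumptions, not the paper's method.
import numpy as np

def nearest(P, x):
    # Index of the prototype closest to sample x (Euclidean distance).
    return int(np.argmin(np.linalg.norm(P - x, axis=1)))

def lvq1_epoch(X, y, P, P_labels, lr=0.05):
    """One pass of standard LVQ1: attract the winning prototype on a
    correct classification, repel it on an incorrect one."""
    for xi, yi in zip(X, y):
        j = nearest(P, xi)
        sign = 1.0 if P_labels[j] == yi else -1.0
        P[j] += sign * lr * (xi - P[j])
    return P

def adapt_prototypes(X, y, P, P_labels, err_rate_to_add=0.15):
    """Hypothetical adaptation step: prune prototypes that never win
    (candidates for 'harmful or less useful' prototypes) and add one
    prototype near the most misclassified class when error is high."""
    wins = np.zeros(len(P))
    miss_x, miss_y = [], []
    for xi, yi in zip(X, y):
        j = nearest(P, xi)
        wins[j] += 1
        if P_labels[j] != yi:
            miss_x.append(xi)
            miss_y.append(yi)
    # Remove prototypes that were never the winner for any training sample.
    keep = wins > 0
    P, P_labels = P[keep], P_labels[keep]
    # Add a prototype at the centroid of the worst-classified class.
    if len(miss_x) > err_rate_to_add * len(X):
        miss_x, miss_y = np.array(miss_x), np.array(miss_y)
        worst = np.bincount(miss_y).argmax()
        P = np.vstack([P, miss_x[miss_y == worst].mean(axis=0)])
        P_labels = np.append(P_labels, worst)
    return P, P_labels

# Usage: initialize prototypes from class means on synthetic 2-class data,
# then alternate LVQ updates with the adaptation step.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
P = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
P_labels = np.array([0, 1])
for epoch in range(20):
    P = lvq1_epoch(X, y, P, P_labels)
    P, P_labels = adapt_prototypes(X, y, P, P_labels)
```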
Published in: 2009 International Joint Conference on Neural Networks
Date of Conference: 14-19 June 2009
Date Added to IEEE Xplore: 31 July 2009