Abstract
Large margin classifiers are computed to assign patterns to a class with high confidence. This strategy helps control the capacity of the learning device, so good generalization is presumably achieved. Two recent examples of large margin classifiers are support vector learning machines (SVM) [12] and boosting classifiers [10]. In this paper we show that it is possible to compute large margin classifiers using gradient-based learning on a cost function directly connected with their average margin. We also prove that applying this procedure to nearest-neighbor (NN) classifiers induces solutions closely related to support vectors.
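The idea sketched in the abstract — gradient-based learning on a cost tied to the average margin of a nearest-neighbor classifier — can be illustrated with a short prototype-based example. The sketch below is a hypothetical implementation, not the paper's exact algorithm: it uses a GLVQ-style relative-distance margin μ = (d⁻ − d⁺)/(d⁺ + d⁻), where d⁺ and d⁻ are squared distances to the nearest prototype of the correct and of a wrong class, and ascends the gradient of μ per pattern. All function and parameter names are illustrative.

```python
import numpy as np

def train_large_margin_nn(X, y, n_protos=2, lr=0.05, epochs=100, seed=0):
    """Sketch: per-pattern gradient ascent on a relative-distance margin
    for a prototype-based nearest-neighbor classifier (hypothetical
    implementation; the paper's exact cost function may differ)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, plabels = [], []
    for c in classes:
        # initialize prototypes from random training points of each class
        idx = rng.choice(np.flatnonzero(y == c), n_protos, replace=False)
        protos.append(X[idx])
        plabels.append(np.full(n_protos, c))
    W = np.vstack(protos).astype(float)
    wl = np.concatenate(plabels)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.sum((W - xi) ** 2, axis=1)
            same = wl == yi
            j = np.flatnonzero(same)[np.argmin(d[same])]    # nearest correct prototype
            k = np.flatnonzero(~same)[np.argmin(d[~same])]  # nearest wrong prototype
            denom = (d[j] + d[k] + 1e-12) ** 2
            # gradient of mu = (d_k - d_j)/(d_j + d_k) w.r.t. the two prototypes
            W[j] += lr * 4 * d[k] / denom * (xi - W[j])   # pull correct prototype closer
            W[k] -= lr * 4 * d[j] / denom * (xi - W[k])   # push wrong prototype away
    return W, wl

def predict(W, wl, X):
    """1-NN classification against the learned prototypes."""
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    return wl[np.argmin(d, axis=1)]
```

Because μ lies in (−1, 1) and is positive exactly when the pattern is correctly classified, maximizing its average both reduces errors and widens the margin of patterns near the decision boundary, which is where the connection to support vectors arises.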
References
[1] Bartlett, P. L. (1998). The Sample Complexity of Pattern Classification with Neural Networks: The Size of the Weights is More Important than the Size of the Network. IEEE Transactions on Information Theory, 44, 525–536.
[2] Bermejo, S., & Cabestany, J. (1999). Adaptive soft k-nearest neighbour classifiers (brief communication). Pattern Recognition, 32, 2077–2079.
[3] Bermejo, S., & Cabestany, J. (2000a). Adaptive soft k-nearest neighbour classifiers (full-length paper). Pattern Recognition, 33, 1999–2005.
[4] Bermejo, S., & Cabestany, J. (2000b). Learning with nearest neighbour classifiers. To appear in Neural Processing Letters, 13.
[5] Breiman, L. (1998). Half-&-Half Bagging and Hard Boundary Points. Technical Report No. 534, Berkeley: University of California, Department of Statistics.
[6] Dietterich, T. (1997). Machine Learning Research: Four Current Directions. AI Magazine, 18, 97–136.
[7] Kohonen, T. (1996). Self-Organizing Maps, 2nd Edition. Berlin: Springer-Verlag.
[8] Lawrence, S., Giles, C. L., & Tsoi, A. C. (1997). Lessons in neural network training: overfitting may be harder than expected. Proceedings of AAAI-97, 540–545. Menlo Park, CA: AAAI Press.
[9] Ripley, B. D. (1994). Neural Networks and Related Methods for Classification. Journal of the Royal Statistical Society, Series B, 56, 409–456.
[10] Schapire, R. E., Freund, Y., Bartlett, P., & Lee, W. S. (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26, 1651–1686.
[11] Smola, A., et al. (1999). Introduction to Large Margin Classifiers. In Smola, A., et al. (Eds.), Advances in Large Margin Classifiers. Cambridge, MA: MIT Press.
[12] Vapnik, V. (1998). Statistical Learning Theory. New York: Wiley-Interscience.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Bermejo, S., Cabestany, J. (2001). Large Margin Nearest Neighbor Classifiers. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_80
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42235-8
Online ISBN: 978-3-540-45720-6