Abstract
This paper investigates margin-maximization models for nearest prototype classifiers. These models are formulated as a minimization problem whose objective is a weighted sum of the inverted margin and a loss function. The problem is reduced to a difference-of-convex optimization problem and solved using the convex-concave procedure. In our latest study, to overcome limitations of the previous model, we revised both the optimization problem and the training algorithm. In this paper, we propose a further revised margin-maximization model by replacing the max-over-others loss function used in the latest study with the sum-over-others loss function. We derive the training algorithm for the proposed model. Moreover, we evaluate the classification performance of the revised margin-maximization models in a numerical experiment using benchmark data sets from the UCI Machine Learning Repository. We compare the performance of our models not only with the previous model but also with baseline methods: generalized learning vector quantization, class-wise k-means, and the support vector machine.
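The distinction between the max-over-others and sum-over-others losses can be illustrated for a nearest prototype classifier. The following sketch is illustrative only: scoring each class by the negative squared distance to its nearest prototype, the hinge-style margin term, and all function names are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def class_scores(x, prototypes, proto_labels, classes):
    # Score of class c = negative squared distance from x to the
    # nearest prototype labeled c (higher score = closer class).
    return np.array([
        -min(np.sum((x - p) ** 2)
             for p, l in zip(prototypes, proto_labels) if l == c)
        for c in classes
    ])

def max_over_others_loss(scores, y, margin=1.0):
    # Hinge penalty against only the strongest competing class.
    others = np.delete(scores, y)
    return max(0.0, margin - (scores[y] - others.max()))

def sum_over_others_loss(scores, y, margin=1.0):
    # Hinge penalty accumulated over every competing class.
    others = np.delete(scores, y)
    return float(np.sum(np.maximum(0.0, margin - (scores[y] - others))))
```

Since the sum accumulates one hinge term per competing class, it always upper-bounds the max-over-others loss on the same scores; the two coincide when at most one competitor violates the margin.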
Acknowledgements
This work was supported by JSPS KAKENHI Grant Number JP21K12062.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Kusunoki, Y. (2023). Maximum-Margin Nearest Prototype Classifiers with the Sum-over-Others Loss Function and a Performance Evaluation. In: Honda, K., Le, B., Huynh, VN., Inuiguchi, M., Kohda, Y. (eds) Integrated Uncertainty in Knowledge Modelling and Decision Making. IUKM 2023. Lecture Notes in Computer Science(), vol 14376. Springer, Cham. https://doi.org/10.1007/978-3-031-46781-3_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-46780-6
Online ISBN: 978-3-031-46781-3
eBook Packages: Computer Science (R0)