Abstract
We discuss a fast training method for support vector machines that uses Newton's method combined with fixed-size chunking. To speed up training, we limit the number of upper- or lower-bounded variables in the working set to two, so that the corrections to the variables do not violate the bounding conditions. If similar working sets occur alternately, we merge the two working sets into one, and if similar working sets occur consecutively, we use incremental Cholesky factorization when calculating corrections. Computer experiments show that the proposed method is comparable to, or faster than, SMO (sequential minimal optimization) using second-order information.
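The incremental Cholesky factorization mentioned above exploits the fact that when a working set grows by one variable, the factor of the enlarged kernel matrix can be obtained from the existing factor by one forward substitution and a square root, rather than refactorizing from scratch. The following is a minimal sketch of that rank-one extension (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def cholesky_append(L, k_col, k_diag):
    """Extend the lower-triangular Cholesky factor L of an n x n
    positive-definite matrix K to the factor of the (n+1) x (n+1)
    matrix obtained by appending column k_col and diagonal k_diag.
    Cost is O(n^2) instead of O(n^3) for a full refactorization.
    """
    n = L.shape[0]
    # Solve L @ l = k_col for the new off-diagonal row of the factor.
    l = np.linalg.solve(L, k_col) if n > 0 else np.empty(0)
    # New diagonal element; assumes the enlarged K is still
    # positive definite (guaranteed for a strictly PD kernel matrix).
    d = np.sqrt(k_diag - l @ l)
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = l
    L_new[n, n] = d
    return L_new
```

For consecutive similar working sets, the factor of the shared part is computed once and each added variable costs only one such update.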
© 2011 Springer-Verlag Berlin Heidelberg
Abe, S. (2011). Fast Support Vector Training by Newton’s Method. In: Honkela, T., Duch, W., Girolami, M., Kaski, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2011. ICANN 2011. Lecture Notes in Computer Science, vol 6792. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21738-8_19
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-21737-1
Online ISBN: 978-3-642-21738-8