Abstract
In previous work we discussed training a support vector regressor (SVR) by active-set training based on Newton's method. In this paper, we improve the convergence of that method in two ways. First, to stabilize convergence for a large epsilon tube, we compute the bias term from the signs of the previous variables rather than the updated variables. Second, to speed up computation of the inverse matrix by Cholesky factorization across iteration steps, we keep the factorized matrix from the first iteration and, at subsequent steps, restart the Cholesky factorization at the position where a variable in the working set is replaced. Computer experiments show that the proposed method stabilizes convergence for a large epsilon tube and that the incremental Cholesky factorization speeds up training.
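The incremental Cholesky idea can be illustrated with a minimal sketch (not the paper's exact implementation). The key observation is that the first k rows of the Cholesky factor depend only on the leading k-by-k block of the matrix, so when a working-set change alters only rows and columns from index k onward, the factorization can be restarted at row k instead of recomputed from scratch. The function name `cholesky_restart` and the dense NumPy representation are illustrative assumptions:

```python
import numpy as np

def cholesky_restart(A_new, L_prev, k):
    """Recompute the lower-triangular Cholesky factor of A_new, reusing
    rows 0..k-1 of the previous factor L_prev.  Valid when A_new differs
    from the previously factorized matrix only in rows/columns >= k."""
    n = A_new.shape[0]
    L = np.zeros((n, n))
    L[:k, :k] = L_prev[:k, :k]   # unchanged leading block is reused as-is
    for i in range(k, n):        # refactorize only from row k onward
        for j in range(i + 1):
            s = A_new[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                L[i, j] = np.sqrt(s)      # diagonal entry
            else:
                L[i, j] = s / L[j, j]     # off-diagonal entry
    return L
```

When the replaced working-set variable corresponds to row k, only rows k through n need the inner update, so the cost drops from O(n^3) to roughly O((n - k) n^2), which is where the speedup for later iterations comes from.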
© 2010 Springer-Verlag Berlin Heidelberg
Abe, S., Yabuwaki, R. (2010). Convergence Improvement of Active Set Training for Support Vector Regressors. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds) Artificial Neural Networks – ICANN 2010. ICANN 2010. Lecture Notes in Computer Science, vol 6353. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15822-3_1
Print ISBN: 978-3-642-15821-6
Online ISBN: 978-3-642-15822-3