
Convergence Improvement of Active Set Training for Support Vector Regressors

  • Conference paper
Artificial Neural Networks – ICANN 2010 (ICANN 2010)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6353)

Abstract

In our previous work, we discussed training a support vector regressor (SVR) by active set training based on Newton's method. In this paper, we discuss convergence improvement obtained by modifying that training method. To stabilize convergence for a large epsilon tube, we calculate the bias term according to the signs of the previous variables rather than the updated variables. To speed up computing the inverse matrix by Cholesky factorization during the iteration steps, we keep the factorized matrix at the first iteration step; at subsequent steps, we restart the Cholesky factorization at the point where the variable in the working set is replaced. Computer experiments show that the proposed method stabilizes convergence for a large epsilon tube and that the incremental Cholesky factorization speeds up training.
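The restart idea rests on a property of the Cholesky factorization: row i of the factor depends only on rows 0..i of the matrix, so if the kernel matrix changes only in rows/columns k onward (as when the working-set variable at position k is replaced), the leading k-by-k block of the factor can be reused and the factorization restarted at row k. The sketch below illustrates this property; `cholesky_restart` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def cholesky_restart(A, L_prev, k):
    """Recompute the lower Cholesky factor of A, reusing the previous
    factor L_prev up to row k. Valid when A differs from the previously
    factorized matrix only in rows/columns k, k+1, ..., n-1."""
    n = A.shape[0]
    L = np.zeros_like(A)
    L[:k, :k] = L_prev[:k, :k]          # leading block is unchanged
    for i in range(k, n):               # restart factorization at row k
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L
```

With k = 0 this is the plain row-wise Cholesky algorithm; for a working-set replacement near the end of the matrix, only the trailing rows are recomputed, which is the source of the speedup reported in the abstract.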





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Abe, S., Yabuwaki, R. (2010). Convergence Improvement of Active Set Training for Support Vector Regressors. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds) Artificial Neural Networks – ICANN 2010. ICANN 2010. Lecture Notes in Computer Science, vol 6353. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15822-3_1

  • DOI: https://doi.org/10.1007/978-3-642-15822-3_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15821-6

  • Online ISBN: 978-3-642-15822-3

  • eBook Packages: Computer Science (R0)
