A Novel Sequential Minimal Optimization Algorithm for Support Vector Regression

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4232)

Abstract

A novel sequential minimal optimization (SMO) algorithm for support vector regression is proposed. The algorithm builds on Flake and Lawrence's SMO, which solves convex optimization problems with l variables instead of the standard quadratic programming problems with 2l variables, where l is the number of training samples; however, its strategy for working set selection is quite different. Experimental results show that the proposed algorithm is much faster than Flake and Lawrence's SMO and comparable in speed to the fastest conventional SMO.
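
To make the reformulation concrete, below is a minimal sketch, not the paper's algorithm, of SMO-style two-variable updates on the l-variable SVR dual used by Flake and Lawrence, in which each coefficient beta_i = alpha_i - alpha_i* replaces the usual pair (alpha_i, alpha_i*). The maximal-violating-pair working-set rule, the function name smo_svr, and all parameter names here are illustrative assumptions; the paper's own (different) selection strategy is not reproduced.

```python
# Illustrative sketch only: SMO-style updates for the l-variable SVR dual
#
#   min_beta  0.5 * beta^T K beta - y^T beta + eps * ||beta||_1
#   s.t.      sum(beta) = 0,   -C <= beta_i <= C,
#
# where beta_i = alpha_i - alpha_i*.  The working-set rule is a generic
# maximal-violating-pair heuristic, NOT the paper's selection strategy.
import numpy as np

def smo_svr(K, y, C=1.0, eps=0.1, tol=1e-3, max_iter=20000):
    l = len(y)
    beta = np.zeros(l)
    grad = -y.astype(float)          # gradient of the smooth part, K @ beta - y

    for _ in range(max_iter):
        # Directional derivatives of the full objective (smooth + eps*|.|)
        # for increasing beta_i (d_up) and for decreasing beta_i (d_dn).
        d_up = np.where(beta < C,  grad + eps * np.where(beta >= 0, 1, -1), np.inf)
        d_dn = np.where(beta > -C, -grad + eps * np.where(beta <= 0, 1, -1), np.inf)
        i, j = int(np.argmin(d_up)), int(np.argmin(d_dn))
        if d_up[i] + d_dn[j] >= -tol:        # approximate KKT optimality
            break

        eta = max(K[i, i] + K[j, j] - 2.0 * K[i, j], 1e-12)
        t_max = min(C - beta[i], beta[j] + C)
        g = grad[i] - grad[j]

        # phi(t): exact change in the objective when beta_i += t, beta_j -= t
        # (the direction e_i - e_j preserves the equality constraint).
        def phi(t):
            return (g * t + 0.5 * eta * t * t
                    + eps * (abs(beta[i] + t) - abs(beta[i])
                             + abs(beta[j] - t) - abs(beta[j])))

        # phi is a convex piecewise quadratic on [0, t_max]; its minimizer is
        # the right endpoint, a kink of the |.| terms, or a stationary point
        # of one of the quadratic pieces.  Evaluate all candidates exactly.
        cands = [t_max, np.clip(-beta[i], 0, t_max), np.clip(beta[j], 0, t_max)]
        for s1 in (1.0, -1.0):
            for s2 in (1.0, -1.0):
                cands.append(np.clip(-(g + eps * (s1 - s2)) / eta, 0, t_max))
        t = min(cands, key=phi)
        if phi(t) >= 0:                      # safety guard: no strict descent
            break

        beta[i] += t
        beta[j] -= t
        grad += t * (K[:, i] - K[:, j])

    return beta
```

Under these assumptions, smo_svr(K, y) takes a precomputed positive-semidefinite kernel matrix K and targets y and returns the coefficients beta; training points with nonzero beta_i are the support vectors. Recovery of the bias term from the KKT conditions, and the paper's actual working-set strategy, are omitted for brevity.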


References

  1. Vapnik, V.N.: Statistical Learning Theory. Wiley, New York (1998)

  2. Platt, J.C.: Fast training of support vector machines using sequential minimal optimization. In: Schölkopf, B., Burges, C., Smola, A. (eds.) Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge (1998)

  3. Smola, A.J., Schölkopf, B.: A tutorial on support vector regression. Technical Report NC2-TR-1998-030, NeuroCOLT2 (1998)

  4. Shevade, S.K., Keerthi, S.S., Bhattacharyya, C., Murthy, K.R.K.: Improvements to the SMO algorithm for SVM regression. IEEE Trans. Neural Networks 11(5), 1188–1193 (2000)

  5. Flake, G.W., Lawrence, S.: Efficient SVM regression training with SMO. Machine Learning 46, 271–290 (2002)

  6. Guo, J., Takahashi, N., Nishi, T.: Convergence proof of a sequential minimal optimization algorithm for support vector regression. In: Proc. of IJCNN 2006 (2006)


Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Guo, J., Takahashi, N., Nishi, T. (2006). A Novel Sequential Minimal Optimization Algorithm for Support Vector Regression. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds.) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4232. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893028_92

  • DOI: https://doi.org/10.1007/11893028_92

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46479-2

  • Online ISBN: 978-3-540-46480-8

  • eBook Packages: Computer Science (R0)
