
A General Convergence Theorem for the Decomposition Method

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3120)

Abstract

The decomposition method is currently one of the major methods for solving the convex quadratic optimization problems associated with support vector machines. Although some versions of the method are known to converge to an optimal solution, its general convergence properties are not yet fully understood. In this paper, we present a variant of the decomposition method that converges for essentially any convex quadratic optimization problem, provided that the policy for working set selection satisfies three abstract conditions. We furthermore design a concrete policy that meets these requirements.
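The abstract refers to the generic decomposition loop: pick a small working set, solve the quadratic subproblem restricted to those variables, update the iterate, and repeat, with convergence hinging entirely on the working-set-selection policy. For orientation, below is a minimal Python sketch of the best-known size-two instance of this scheme (SMO with maximal-violating-pair selection) applied to the standard SVM dual. The selection rule, tolerance, and analytic subproblem solver shown are textbook choices, not the three-condition policy this paper proposes.

    import numpy as np

    def decomposition_smo(Q, y, C, tol=1e-3, max_iter=100_000):
        """Size-two decomposition (SMO) for the SVM dual
               min 0.5 a'Qa - sum(a)   s.t.   y'a = 0,  0 <= a <= C,
        with maximal-violating-pair working-set selection.
        Illustrative textbook variant, not the paper's policy."""
        n = Q.shape[0]
        a = np.zeros(n)         # a = 0 is feasible
        grad = -np.ones(n)      # gradient Q a - 1 evaluated at a = 0

        for _ in range(max_iter):
            # Working-set selection: the maximal violating pair (i, j).
            up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
            lo = ((y < 0) & (a < C)) | ((y > 0) & (a > 0))
            score = -y * grad
            i = int(np.where(up, score, -np.inf).argmax())
            j = int(np.where(lo, score, np.inf).argmin())
            if score[i] - score[j] < tol:   # KKT conditions hold up to tol
                break

            # Solve the two-variable subproblem on {i, j} analytically:
            # step of size d along the feasible direction (u_i, u_j) = (y_i, -y_j).
            eta = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]
            d = (score[i] - score[j]) / max(eta, 1e-12)
            # Clip so that both variables stay inside the box [0, C].
            d = min(d,
                    C - a[i] if y[i] > 0 else a[i],
                    a[j] if y[j] > 0 else C - a[j])
            a[i] += y[i] * d
            a[j] -= y[j] * d
            # Cheap gradient update: only two columns of Q are touched.
            grad += (y[i] * d) * Q[:, i] - (y[j] * d) * Q[:, j]
        return a

The gradient update in the last step is what makes decomposition attractive for large kernel matrices: each iteration reads only two columns of Q, so the full matrix never needs to be held in memory.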

This work has been supported by the Deutsche Forschungsgemeinschaft Grant SI 498/7-1.




Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

List, N., Simon, H.U. (2004). A General Convergence Theorem for the Decomposition Method. In: Shawe-Taylor, J., Singer, Y. (eds) Learning Theory. COLT 2004. Lecture Notes in Computer Science (LNAI), vol 3120. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27819-1_25

  • DOI: https://doi.org/10.1007/978-3-540-27819-1_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22282-8

  • Online ISBN: 978-3-540-27819-1
