Abstract
The decomposition method is currently one of the major methods for solving the convex quadratic optimization problems associated with support vector machines. Although some versions of the method are known to converge to an optimal solution, its general convergence properties are not yet fully understood. In this paper, we present a variant of the decomposition method that converges for any convex quadratic optimization problem, provided that the policy for working set selection satisfies three abstract conditions. We furthermore design a concrete policy that meets these requirements.
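For illustration, the decomposition idea works as follows: in each iteration a small working set of dual variables is selected, and the subproblem restricted to those variables is solved while all other variables are held fixed. The sketch below uses a working set of size two with the maximal-violating-pair heuristic, one common selection policy; it is an assumption for illustration only, not the concrete policy constructed in the paper.

```python
import numpy as np

def decomposition_svm(K, y, C=1.0, eps=1e-6, max_iter=10_000):
    """Size-two decomposition (SMO-style) for the SVM dual
        min 1/2 a^T Q a - e^T a   s.t.  y^T a = 0,  0 <= a <= C,
    with maximal-violating-pair working set selection.
    Illustrative sketch only, not the policy analyzed in the paper."""
    n = len(y)
    Q = (y[:, None] * y[None, :]) * K        # Q_ij = y_i y_j K(x_i, x_j)
    alpha = np.zeros(n)
    G = -np.ones(n)                          # gradient Q a - e at a = 0

    for _ in range(max_iter):
        # Coordinates that may still move along the feasible directions.
        up = ((y == 1) & (alpha < C)) | ((y == -1) & (alpha > 0))
        low = ((y == -1) & (alpha < C)) | ((y == 1) & (alpha > 0))
        if not up.any() or not low.any():
            break
        viol = -y * G                        # KKT "violation scores"
        i = np.flatnonzero(up)[np.argmax(viol[up])]
        j = np.flatnonzero(low)[np.argmin(viol[low])]
        if viol[i] - viol[j] < eps:          # KKT conditions hold up to eps
            break
        # Analytic solution of the two-variable subproblem along the
        # equality-feasible direction d with d_i = y_i, d_j = -y_j.
        quad = max(Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j], 1e-12)
        t = (viol[i] - viol[j]) / quad       # unclipped step length
        t = min(t,                            # clip to the box [0, C]^2
                C - alpha[i] if y[i] == 1 else alpha[i],
                alpha[j] if y[j] == 1 else C - alpha[j])
        alpha[i] += t * y[i]
        alpha[j] -= t * y[j]
        G += t * (y[i] * Q[:, i] - y[j] * Q[:, j])  # keep gradient current
    return alpha
```

On a small linearly separable data set with a linear kernel `K = X @ X.T`, the returned `alpha` satisfies the equality and box constraints, and `w = X.T @ (alpha * y)` recovers a separating hyperplane.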
This work has been supported by the Deutsche Forschungsgemeinschaft Grant SI 498/7-1.
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
List, N., Simon, H.U. (2004). A General Convergence Theorem for the Decomposition Method. In: Shawe-Taylor, J., Singer, Y. (eds) Learning Theory. COLT 2004. Lecture Notes in Computer Science, vol 3120. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-27819-1_25
DOI: https://doi.org/10.1007/978-3-540-27819-1_25
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-22282-8
Online ISBN: 978-3-540-27819-1
eBook Packages: Springer Book Archive