Abstract
This paper discusses quadratically convergent algorithms for training SVMs with smoothing methods. By smoothing the objective function of an SVM formulation, Lee and Mangasarian [Comput. Optim. Appl. 20(1):5-22, 2001] obtained one such algorithm, SSVM, and proved that the error bound between the smoothed problem and the original one is \(O(\frac{1}{p})\) for a large positive smoothing parameter \(p\). We derive a new method by smoothing the optimality conditions of the SVM formulation instead, and we prove that its error bound is \(O(\frac{1}{p^{2}})\), which improves on Lee and Mangasarian's result. Based on the Sherman-Morrison-Woodbury (SMW) identity and iterative updating of the Hessian, we also propose acceleration techniques that solve the Newton equation with lower computational complexity in reduced smooth SVM (RSVM) algorithms. Extensive experiments show that the proposed smoothing method attains the same accuracy as SSVM, whose error bound is likewise tightened to \(O(\frac{1}{p^{2}})\) in this paper, and that the proposed acceleration techniques make RSVM efficient on large-scale problems.
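As an illustration of the kind of smoothing the abstract refers to (a sketch, not code from the paper), the SSVM approach of Lee and Mangasarian replaces the nondifferentiable plus function \((x)_+ = \max(x, 0)\) appearing in the SVM objective with the smooth approximation \(x + \frac{1}{p}\log(1 + e^{-px})\). The maximum approximation error is \(\log(2)/p\), attained at \(x = 0\), so the pointwise error shrinks as the smoothing parameter \(p\) grows:

```python
import numpy as np

def plus(x):
    # The plus function (x)_+ = max(x, 0) from the SVM hinge-loss formulation.
    return np.maximum(x, 0.0)

def smooth_plus(x, p):
    # Smooth approximation of (x)_+ used in SSVM:
    #   s(x, p) = x + (1/p) * log(1 + exp(-p*x)).
    # Written with logaddexp for numerical stability when p*x is large.
    return x + np.logaddexp(0.0, -p * x) / p

# The maximum of |s(x, p) - (x)_+| over x equals log(2)/p, attained at x = 0.
x = np.linspace(-2.0, 2.0, 401)
for p in (1.0, 10.0, 100.0):
    err = np.max(np.abs(smooth_plus(x, p) - plus(x)))
    print(f"p = {p:6.1f}   max |s(x, p) - (x)_+| = {err:.6f}")
```

Note that this illustrates only the function-level smoothing; the \(O(\frac{1}{p})\) and \(O(\frac{1}{p^{2}})\) bounds in the abstract concern the gap between the smoothed and original optimization problems, which is a stronger statement than pointwise approximation.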
References
Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 1–27 (2011). http://www.csie.ntu.edu.tw/~cjlin/libsvm
Chapelle, O.: Support Vector Machines in the primal (2006). http://www.kyb.tuebingen.mpg.de/bs/people/chapelle/primal/
Chapelle, O.: Training a Support Vector Machine in the primal. Neural Comput. 19(5), 1155–1178 (2007)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
Fan, R.E., Lin, C.J.: LIBSVM Data (2010). http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
Ferris, M.C., Munson, T.S.: Semismooth Support Vector Machines. Math. Program., Ser. B 101, 185–204 (2004)
Fine, S., Scheinberg, K.: Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res. 2, 243–264 (2001)
Fletcher, R., Zanghirati, G.: Binary separation and training support vector machines. Acta Numer. 19, 121–158 (2010)
Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins University Press, Baltimore (1996)
Ho, T.K., Kleinberg, E.M.: Building projectable classifiers of arbitrary complexity. In: Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, pp. 880–885 (1996)
Joachims, T.: SVMlight: Support Vector Machine (1998). http://www.cs.cornell.edu/people/tj/svm_light/
Joachims, T.: Making large-scale SVM learning practical. In: Schölkopf, B., Burges, C., Smola, A. (eds.) Advances in Kernel Methods—Support Vector Learning, pp. 169–184. MIT Press, Cambridge (1999)
Keerthi, S.S., Chapelle, O., Decoste, D.: Building Support Vector Machines with reduced classifier complexity. J. Mach. Learn. Res. 7, 1493–1515 (2006)
Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K.: Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Comput. 13, 637–649 (2001)
Lee, Y.J., Huang, S.Y.: Reduced support vector machines: a statistical theory. IEEE Trans. Neural Netw. 18(1), 1–13 (2007)
Lee, Y.J., Mangasarian, O.L.: RSVM: reduced Support Vector Machines. In: CD Proceedings of the SIAM International Conference on Data Mining, pp. 1–17. SIAM, Chicago (2001)
Lee, Y.J., Mangasarian, O.L.: SSVM: a smooth Support Vector Machine for classification. Comput. Optim. Appl. 20(1), 5–22 (2001)
Lin, C.J.: On the convergence of the decomposition method for Support Vector Machines. IEEE Trans. Neural Netw. 12, 1288–1298 (2001)
Lin, C.J.: Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Trans. Neural Netw. 13, 248–250 (2002)
Lin, K.M., Lin, C.J.: A study on reduced Support Vector Machines. IEEE Trans. Neural Netw. 14(6), 1449–1459 (2003)
Mangasarian, O.L.: Generalized Support Vector Machine. In: Smola, A.J., Bartlett, P., Schölkopf, B., Schuurmans, D. (eds.) Advances in Large Margin Classifiers, pp. 135–146. MIT Press, Cambridge (2000)
Mangasarian, O.L., Musicant, D.R.: Successive overrelaxation for Support Vector Machines. IEEE Trans. Neural Netw. 10(5), 1032–1037 (1999)
Mangasarian, O.L., Musicant, D.R.: Lagrangian Support Vector Machines. J. Mach. Learn. Res. 1, 161–177 (2001)
Melacci, S., Belkin, M.: Laplacian support vector machines trained in the primal. J. Mach. Learn. Res. 12, 1149–1184 (2011)
Platt, J.C.: Fast training of Support Vector Machines using Sequential Minimal Optimization. In: Schölkopf, B., Burges, C.J., Smola, A.J. (eds.) Advances in Kernel Methods—Support Vector Learning, pp. 185–208. MIT Press, Cambridge (1999)
Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2002)
Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011)
Vapnik, V.N.: An overview of statistical learning theory. IEEE Trans. Neural Netw. 10(5), 988–999 (1999)
Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (2000)
Zhou, S., Liu, H., Ye, F., Zhou, L.: A new iterative algorithm training SVM. Optim. Methods Softw. 24(6), 913–932 (2009)
Zhou, S., Liu, H., Zhou, L., Ye, F.: Semismooth Newton Support Vector Machine. Pattern Recognit. Lett. 28, 2054–2062 (2007)
Acknowledgements
We gratefully acknowledge the support of the National Natural Science Foundation of China (NNSFC) under Grant Nos. 60603098, 61072144, 61179040, 61173089, 11101321 and 11101322. We also thank the anonymous reviewers and editors for their helpful comments, which improved the presentation.
Cite this article
Zhou, S., Cui, J., Ye, F. et al. New smoothing SVM algorithm with tight error bound and efficient reduced techniques. Comput Optim Appl 56, 599–617 (2013). https://doi.org/10.1007/s10589-013-9571-6