
New smoothing SVM algorithm with tight error bound and efficient reduced techniques

Published in: Computational Optimization and Applications

Abstract

This paper discusses quadratically convergent algorithms for training SVMs with smoothing methods. By smoothing the objective function of an SVM formulation, Lee and Mangasarian [Comput. Optim. Appl. 20(1):5–22, 2001] presented one such algorithm, called SSVM, and proved that the error bound between the smooth problem and the original one is \(O(\frac{1}{p})\) for a large positive smoothing parameter p. We derive a new method by smoothing the optimality conditions of the SVM formulation, and we prove that its error bound is \(O(\frac{1}{p^{2}})\), which improves on Lee and Mangasarian's result. Based on the Sherman–Morrison–Woodbury (SMW) identity and iterative updating of the Hessian, several acceleration techniques are proposed to solve the Newton equation with lower computational complexity in reduced smooth SVM algorithms. Experimental results show that the proposed smoothing method attains the same accuracy as SSVM, whose error bound is also tightened to \(O(\frac{1}{p^{2}})\) in this paper, and that the proposed techniques are efficient for solving large-scale problems with RSVM.
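For context, the smoothing at the heart of SSVM [17] replaces the nonsmooth plus function \((x)_+ = \max(x, 0)\) with the twice-differentiable approximation \(s_p(x) = x + \frac{1}{p}\log(1 + e^{-px})\), whose gap from \((x)_+\) is at most \(\frac{\log 2}{p}\), attained at \(x = 0\). A minimal numerical sketch of this \(O(\frac{1}{p})\) behavior (illustrative code, not the authors' implementation; function names are ours):

```python
import numpy as np

def plus(x):
    # The plus function (x)_+ = max(x, 0), nonsmooth at x = 0.
    return np.maximum(x, 0.0)

def smooth_plus(x, p):
    # SSVM smoothing: s_p(x) = x + (1/p) * log(1 + exp(-p*x)).
    # np.logaddexp(0, -p*x) evaluates log(1 + exp(-p*x)) without overflow.
    return x + np.logaddexp(0.0, -p * x) / p

xs = np.linspace(-2.0, 2.0, 401)
for p in (5, 50, 500):
    # Uniform error equals log(2)/p, so it decays like O(1/p).
    err = np.max(np.abs(smooth_plus(xs, p) - plus(xs)))
    print(f"p = {p:4d}   max error = {err:.2e}")
```

Running the loop shows the maximum gap shrinking by a factor of ten each time p grows tenfold, matching the \(O(\frac{1}{p})\) bound that the present paper tightens to \(O(\frac{1}{p^{2}})\) for its optimality-condition smoothing.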


References

  1. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 1–27 (2011). http://www.csie.ntu.edu.tw/~cjlin/libsvm

  2. Chapelle, O.: Support Vector Machines in the primal (2006). http://www.kyb.tuebingen.mpg.de/bs/people/chapelle/primal/

  3. Chapelle, O.: Training a Support Vector Machine in the primal. Neural Comput. 19(5), 1155–1178 (2007)

  4. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)

  5. Fan, R.E., Lin, C.J.: LIBSVM Data (2010). http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/

  6. Ferris, M.C., Munson, T.S.: Semismooth Support Vector Machines. Math. Program., Ser. B 101, 185–204 (2004)

  7. Fine, S., Scheinberg, K.: Efficient SVM training using low-rank kernel representations. J. Mach. Learn. Res. 2, 243–264 (2001)

  8. Fletcher, R., Zanghirati, G.: Binary separation and training support vector machines. Acta Numer. 19, 121–158 (2010)

  9. Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins University Press, Baltimore (1996)

  10. Ho, T.K., Kleinberg, E.M.: Building projectable classifiers of arbitrary complexity. In: Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, pp. 880–885 (1996)

  11. Joachims, T.: SVMlight: Support Vector Machine (1998). http://www.cs.cornell.edu/people/tj/svm_light/

  12. Joachims, T.: Making large-scale SVM learning practical. In: Schölkopf, B., Burges, C., Smola, A. (eds.) Advances in Kernel Methods—Support Vector Learning, pp. 169–184. MIT Press, Cambridge (1999)

  13. Keerthi, S.S., Chapelle, O., Decoste, D.: Building Support Vector Machines with reduced classifier complexity. J. Mach. Learn. Res. 7, 1493–1515 (2006)

  14. Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K.: Improvements to Platt's SMO algorithm for SVM classifier design. Neural Comput. 13, 637–649 (2001)

  15. Lee, Y.J., Huang, S.Y.: Reduced support vector machines: a statistical theory. IEEE Trans. Neural Netw. 18(1), 1–13 (2007)

  16. Lee, Y.J., Mangasarian, O.L.: RSVM: reduced Support Vector Machines. In: Proceedings of the SIAM International Conference on Data Mining, pp. 1–17. SIAM, Chicago (2001)

  17. Lee, Y.J., Mangasarian, O.L.: SSVM: a smooth Support Vector Machine for classification. Comput. Optim. Appl. 20(1), 5–22 (2001)

  18. Lin, C.J.: On the convergence of the decomposition method for Support Vector Machines. IEEE Trans. Neural Netw. 12, 1288–1298 (2001)

  19. Lin, C.J.: Asymptotic convergence of an SMO algorithm without any assumptions. IEEE Trans. Neural Netw. 13, 248–250 (2002)

  20. Lin, K.M., Lin, C.J.: A study on reduced Support Vector Machines. IEEE Trans. Neural Netw. 14(6), 1449–1459 (2003)

  21. Mangasarian, O.L.: Generalized Support Vector Machines. In: Smola, A.J., Bartlett, P., Schölkopf, B., Schuurmans, D. (eds.) Advances in Large Margin Classifiers, pp. 135–146. MIT Press, Cambridge (2000)

  22. Mangasarian, O.L., Musicant, D.R.: Successive overrelaxation for Support Vector Machines. IEEE Trans. Neural Netw. 10(5), 1032–1037 (1999)

  23. Mangasarian, O.L., Musicant, D.R.: Lagrangian Support Vector Machines. J. Mach. Learn. Res. 1, 161–177 (2001)

  24. Melacci, S., Belkin, M.: Laplacian support vector machines trained in the primal. J. Mach. Learn. Res. 12, 1149–1184 (2011)

  25. Platt, J.C.: Fast training of Support Vector Machines using Sequential Minimal Optimization. In: Schölkopf, B., Burges, C.J., Smola, A.J. (eds.) Advances in Kernel Methods—Support Vector Learning, pp. 185–208. MIT Press, Cambridge (1999)

  26. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2002)

  27. Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011)

  28. Vapnik, V.N.: An overview of statistical learning theory. IEEE Trans. Neural Netw. 10(5), 988–999 (1999)

  29. Vapnik, V.N.: The Nature of Statistical Learning Theory. Springer, New York (2000)

  30. Zhou, S., Liu, H., Ye, F., Zhou, L.: A new iterative algorithm training SVM. Optim. Methods Softw. 24(6), 913–932 (2009)

  31. Zhou, S., Liu, H., Zhou, L., Ye, F.: Semismooth Newton Support Vector Machine. Pattern Recognit. Lett. 28, 2054–2062 (2007)


Acknowledgements

We gratefully acknowledge the support of the National Natural Science Foundation of China (NNSFC) under Grant Nos. 60603098, 61072144, 61179040, 61173089, 11101321 and 11101322. We also thank the anonymous reviewers and the editors for their helpful comments, which improved the presentation.

Author information


Corresponding author

Correspondence to Shuisheng Zhou.



Cite this article

Zhou, S., Cui, J., Ye, F. et al. New smoothing SVM algorithm with tight error bound and efficient reduced techniques. Comput Optim Appl 56, 599–617 (2013). https://doi.org/10.1007/s10589-013-9571-6

