
The coordinate descent method with stochastic optimization for linear support vector machines

  • ICONIP 2011
  • Published in: Neural Computing and Applications

Abstract

Optimizing the training speed of support vector machines (SVMs) is one of the most important topics in SVM research. In this paper, we propose an algorithm that reduces the size of the working set to one in order to obtain faster training. Instead of complex heuristic selection criteria, elements are drawn into the working set in random order. The proposed algorithm shows better performance in linear SVM training, especially in large-scale scenarios.
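
The abstract describes a coordinate descent scheme whose working set holds a single dual variable, visited in random order rather than chosen by a heuristic selection rule. Below is a minimal sketch of that idea, assuming the standard L2-regularized, L1-loss linear SVM dual solved by dual coordinate descent (the formulation behind LIBLINEAR's dual solver); the function name and parameters are illustrative, not taken from the paper.

import numpy as np

def dual_cd_linear_svm(X, y, C=1.0, epochs=10, seed=0):
    # Dual coordinate descent for an L1-loss linear SVM:
    #   min_a  0.5 * a^T Q a - e^T a,  subject to 0 <= a_i <= C,
    # where Q_ij = y_i y_j x_i^T x_j and y_i is in {-1, +1}.
    # The working set is a single coordinate i, and coordinates are
    # visited in random order instead of by a selection heuristic.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)                    # dual variables
    w = np.zeros(d)                        # maintained as sum_i alpha_i y_i x_i
    Qii = np.einsum('ij,ij->i', X, X)      # diagonal of Q: ||x_i||^2

    for _ in range(epochs):
        for i in rng.permutation(n):       # random visiting order
            if Qii[i] == 0.0:
                continue
            G = y[i] * (w @ X[i]) - 1.0    # partial derivative along coordinate i
            a_new = min(max(alpha[i] - G / Qii[i], 0.0), C)  # projected 1-D minimum
            if a_new != alpha[i]:
                w += (a_new - alpha[i]) * y[i] * X[i]  # O(d) update keeps w in sync
                alpha[i] = a_new
    return w, alpha

Because only one coordinate changes per step, each update costs O(d) for a dense x_i (or O(nnz(x_i)) for sparse data), which is what makes the size-one working set attractive at large scale; on the same data, the returned w should behave similarly to LIBLINEAR's dual L1-loss solver (-s 3).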



Acknowledgments

The work was supported by the Fundamental Research Funds for the Central Universities, Research Funds of Renmin University of China (10XNI029, Research on Financial Web Data Mining and Knowledge Management), and the Natural Science Foundation of China under Grant 70871001.

Author information


Corresponding author

Correspondence to Xun Liang.


About this article

Cite this article

Zheng, T., Liang, X. & Cao, R. The coordinate descent method with stochastic optimization for linear support vector machines. Neural Comput & Applic 22, 1261–1266 (2013). https://doi.org/10.1007/s00521-012-1139-3

