
A fast conjugate functional gain sequential minimal optimization training algorithm for LS-SVM model

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

The least squares support vector machine (LS-SVM) is an effective method for classification and regression problems and has been widely studied and applied in machine learning and pattern recognition. LS-SVM models are usually trained with conjugate gradient (CG) or sequential minimal optimization (SMO) algorithms. Building on these, we propose a conjugate functional gain SMO algorithm and theoretically prove its asymptotic convergence. The algorithm combines the conjugate direction method with the functional gain SMO algorithm using second-order information, which increases the functional gain over the plain SMO algorithm. In addition, we provide a generalized SMO-type algorithmic framework with a simple iterative form that is easy to implement for other LS-SVM training algorithms. Numerical results show that the execution time of the proposed algorithm is significantly shorter than that of other plain SMO-type and CG-type algorithms.
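The full text is behind the access wall, so as background for the "functional gain" idea the abstract builds on, the following is a minimal sketch of a plain functional-gain SMO-type solver for an unconstrained LS-SVM dual (minimize 0.5·αᵀQα − yᵀα with Q = K + I/C). All function names, the formulation, and the stopping rule here are illustrative assumptions, not the authors' code; the paper's contribution is to accelerate this kind of update with conjugate directions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def smo_lssvm(K, y, C=10.0, tol=1e-6, max_iter=10000):
    """Plain SMO-type coordinate descent for the unconstrained LS-SVM
    dual: minimize 0.5*a^T Q a - y^T a with Q = K + I/C (an assumed,
    simplified formulation). Each step picks the coordinate with the
    largest functional gain g_i^2 / (2*Q_ii) and solves it exactly."""
    n = len(y)
    Q = K + np.eye(n) / C
    a = np.zeros(n)
    g = -y.astype(float)              # gradient Q a - y at a = 0
    diag = np.diag(Q)
    for _ in range(max_iter):
        gain = g ** 2 / (2.0 * diag)  # objective decrease of each 1-D step
        i = int(np.argmax(gain))
        if gain[i] < tol:
            break
        delta = -g[i] / diag[i]       # exact minimizer along coordinate i
        a[i] += delta
        g += delta * Q[:, i]          # rank-one gradient update
    return a
```

At the solution Qα ≈ y, so the sketch can be checked by verifying that residual; the conjugate variant proposed in the paper replaces the single-coordinate step with a direction conjugated against previous ones to raise the per-step gain.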


Figures 1–4 appear in the full article.


Data availability

The authors declare that data supporting the findings of this study are available within the article.

Notes

  1. https://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html.

  2. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.


Acknowledgements

This research was supported by the Graduate Research and Innovation Foundation of Chongqing, China (CYS22074), the National Natural Science Foundation of China (71901184), and the Humanities and Social Science Fund of the Ministry of Education of China (19YJCZH119).

Author information


Corresponding author

Correspondence to Shengjie Li.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yu, L., Ma, X. & Li, S. A fast conjugate functional gain sequential minimal optimization training algorithm for LS-SVM model. Neural Comput & Applic 35, 6095–6113 (2023). https://doi.org/10.1007/s00521-022-07875-1

