Abstract
The least squares support vector machine (LS-SVM) is an effective method for classification and regression problems and has been widely studied and applied in machine learning and pattern recognition. LS-SVM models are typically trained with conjugate gradient (CG) or sequential minimal optimization (SMO) algorithms. Building on these, we propose a conjugate functional gain SMO algorithm and theoretically prove its asymptotic convergence. The algorithm combines the conjugate direction method with the functional gain SMO algorithm using second-order information, which increases the functional gain per iteration over the plain SMO algorithm. In addition, we provide a generalized SMO-type algorithmic framework with a simple iterative form that is easy to implement and accommodates other LS-SVM training algorithms. Numerical results show that the execution time of the proposed algorithm is significantly shorter than that of plain SMO-type and CG-type algorithms.
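To fix notation for the kind of iteration the abstract describes, the following is a minimal Python sketch of the plain functional gain SMO baseline that the paper accelerates: at each step it picks the coordinate whose exact minimization yields the largest second-order functional gain g_i^2 / (2 H_ii). Everything here is an illustrative assumption rather than the authors' method: the unconstrained dual form min_a 0.5 a^T (K + I/gamma) a - y^T a (bias term omitted for simplicity), the function names, and the stopping rule. The paper's conjugate direction acceleration is not reproduced, since the abstract gives no update formula for it.

```python
import numpy as np

def rbf_kernel(X, gamma_k=0.5):
    """Gram matrix of the Gaussian (RBF) kernel (illustrative choice)."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma_k * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def smo_lssvm(K, y, gamma=10.0, tol=1e-6, max_iter=10000):
    """Plain SMO-type coordinate descent on an (assumed) unconstrained
    LS-SVM dual: min_a 0.5 a^T H a - y^T a with H = K + I/gamma.
    Each step exactly minimizes along the coordinate with the largest
    second-order functional gain g_i^2 / (2 H_ii)."""
    H = K + np.eye(len(y)) / gamma     # regularized kernel matrix
    alpha = np.zeros(len(y))
    grad = -y.astype(float)            # dual gradient H a - y at a = 0
    diag = np.diag(H)
    for _ in range(max_iter):
        gain = grad**2 / (2.0 * diag)  # functional gain of each coordinate
        i = int(np.argmax(gain))
        if gain[i] < tol:              # no coordinate improves the objective
            break
        step = -grad[i] / diag[i]      # exact line minimization along e_i
        alpha[i] += step
        grad += step * H[:, i]         # rank-one gradient update
    return alpha
```

On top of this baseline, a conjugate SMO-type method would replace the single-coordinate direction with one made conjugate to previous updates, so each iteration achieves a larger functional gain; the sketch is only meant to show the shape of the iteration being accelerated.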




Data availability
The authors declare that data supporting the findings of this study are available within the article.
Acknowledgements
This research was supported by the Graduate Research and Innovation Foundation of Chongqing, China (CYS22074), the National Natural Science Foundation of China (71901184), and the Humanities and Social Science Fund of the Ministry of Education of China (19YJCZH119).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
About this article
Cite this article
Yu, L., Ma, X. & Li, S. A fast conjugate functional gain sequential minimal optimization training algorithm for LS-SVM model. Neural Comput & Applic 35, 6095–6113 (2023). https://doi.org/10.1007/s00521-022-07875-1