Abstract
As a variant of the support vector machine (SVM), the least squares SVM (LS-SVM) uses equality instead of inequality constraints and works with a least squares cost function. A well-known drawback of LS-SVM is that sparseness is lost. In this paper, we develop an adaptive pruning algorithm based on a bottom-to-top strategy that addresses this drawback. The proposed algorithm alternates incremental and decremental learning procedures to adaptively form a small support vector set that covers most of the information in the training set; the final classifier is constructed from this set. Since the number of elements in the support vector set is generally much smaller than the number in the training set, a sparse solution is obtained. To test the efficiency of the proposed algorithm, we apply it to eight UCI datasets and one benchmark dataset. The experimental results show that the presented algorithm adaptively obtains sparse solutions with only a small loss of generalization performance on both noise-free and noisy classification problems, and that its training speed is much faster than that of the sequential minimal optimization (SMO) algorithm on large-scale noise-free classification problems.
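The paper's own bottom-to-top incremental/decremental algorithm is not reproduced in this abstract, but the setting it starts from can be sketched. The snippet below is a minimal, illustrative implementation of standard LS-SVM training (solving the linear system of Suykens and Vandewalle 1999) together with the simplest top-down pruning baseline (repeatedly dropping the samples with the smallest |alpha| and retraining, as in Suykens et al. 2000). All function names, the RBF kernel choice, and the hyperparameters (`gamma`, `sigma`, the 5%-per-round pruning schedule) are assumptions made for the sketch, not details taken from this paper.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian RBF kernel matrix between two sample sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM dual: solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    # where Omega_ij = y_i * y_j * K(x_i, x_j)
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, multipliers alpha

def lssvm_predict(X_sv, y_sv, alpha, b, X_new, sigma=1.0):
    K = rbf_kernel(X_new, X_sv, sigma)
    return np.sign(K @ (alpha * y_sv) + b)

def prune(X, y, keep=0.3, gamma=10.0, sigma=1.0):
    # Top-down pruning baseline: drop the 5% of samples with the
    # smallest |alpha| each round and retrain, until only a fraction
    # `keep` of the training set remains as support vectors.
    idx = np.arange(len(y))
    target = max(2, int(keep * len(y)))
    while len(idx) > target:
        b, alpha = lssvm_train(X[idx], y[idx], gamma, sigma)
        drop = max(1, int(0.05 * len(idx)))
        idx = idx[np.argsort(np.abs(alpha))[drop:]]
    b, alpha = lssvm_train(X[idx], y[idx], gamma, sigma)
    return idx, alpha, b
```

Note that this baseline retrains on the full remaining set at every round, which is exactly the cost the paper's bottom-to-top strategy is designed to avoid: building the support vector set upward keeps each linear system small.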
References
Cauwenberghs G, Poggio T (2000) Incremental and decremental support vector machine learning. In: Proceedings of advances in neural information processing systems, vol 13, pp 409–415
Chapelle O, Vapnik V, Bousquet O, Mukherjee S (2002) Choosing kernel parameters for support vector machines. Mach Learn 46(1–3):131–159
Chu W, Ong C, Keerthi S (2005) An improved conjugate gradient scheme to the solution of least squares SVM. IEEE Trans Neural Netw 16(2):498–501
Chua K (2003) Efficient computations for large least square support vector machine classifiers. Pattern Recognit Lett 24:75–80
Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297
de Kruif B, de Vries T (2003) Pruning error minimization in least squares support vector machines. IEEE Trans Neural Netw 14(3):696–702
Golub G, Van Loan C (1996) Matrix computations, 3rd edn. The Johns Hopkins University Press, London
Hamers B, Suykens J, De Moor B (2001) A comparison of iterative methods for least squares support vector machine classifiers. ESAT-SISTA, K. U. Leuven, Leuven, Belgium, Internal Rep. 01-110
Hoegaerts L, Suykens J, Vandewalle J, De Moor B (2004) A comparison of pruning algorithms for sparse least squares support vector machines. In: Proceedings of ICONIP 2004. Lecture Notes in Computer Science, vol 3316. Springer, Berlin, pp 1247–1253
Joachims T (1998) Making large-scale support vector machine learning practical. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, pp 169–184
Keerthi S, Shevade S (2003) SMO algorithm for least squares SVM formulations. Neural Comput 15:487–507
Keerthi S, Shevade S, Bhattacharyya C, Murthy K (2001) Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Comput 13(3):637–649
Mangasarian O, Musicant D (1999) Successive overrelaxation for support vector machines. IEEE Trans Neural Netw 10(5):1032–1037
Mangasarian O, Musicant D (2001) Lagrangian support vector machines. J Mach Learn Res 1:161–177
Murphy P, Aha D (1992) UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html
Osuna E, Freund R, Girosi F (1997) An improved training algorithm for support vector machines. IEEE Workshop on Neural Networks and Signal Processing, Amelia Island, pp 276–285
Platt J (1998) Sequential minimal optimization: a fast algorithm for training support vector machines. In: Advances in kernel methods: support vector learning. MIT Press, Cambridge, pp 185–208
Ripley B (1996) Pattern recognition and neural networks. Cambridge University Press, Cambridge
Schölkopf B, Smola A, Williamson R, Bartlett P (2000) New support vector algorithms. Neural Comput 12:1207–1245
Suykens J, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3):293–300
Suykens J, Vandewalle J (2000) Recurrent least squares support vector machines. IEEE Trans Circuits Syst I 47(7):1109–1114
Suykens J, Lukas L, Van Dooren P, De Moor B, Vandewalle J (1999) Least squares support vector machine classifiers: a large scale algorithm. In: Proceedings of Europe conference on circuit theory and design (ECCTD’99), Stresa, Italy, pp 839–842
Suykens J, Lukas L, Vandewalle J (2000) Sparse approximation using least squares support vector machines. IEEE International Symposium on Circuits and Systems, Geneva, Switzerland, pp 757–760
Suykens J, Vandewalle J, De Moor B (2001) Optimal control by least squares support vector machines. Neural Netw 14(1):23–35
Suykens J, De Brabanter J, Lukas L, Vandewalle J (2002a) Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing 48(1–4):85–105
Suykens J, Van Gestel T, De Brabanter J, De Moor B, Vandewalle J (2002b) Least squares support vector machines. World Scientific, Singapore
Van Gestel T et al (2001) Financial time series prediction using least squares support vector machines within the evidence framework. IEEE Trans Neural Netw 12(4):809–821
Van Gestel T et al (2004) Benchmarking least squares support vector machine classifiers. Mach Learn 54(1):5–32
Vapnik V (1995) The nature of statistical learning theory. Springer, New York
Vapnik V (1998) Statistical learning theory. Wiley, New York
Vapnik V, Chapelle O (2000) Bounds on error expectation for support vector machines. Neural Comput 12(9):2013–2036
Zeng X, Chen X (2005) SMO-based pruning methods for sparse least squares support vector machines. IEEE Trans Neural Netw 16(6):1541–1546
Acknowledgments
The authors would like to thank the anonymous reviewers for their useful comments and suggestions. The work presented in this paper is supported by the Australian Research Council (ARC) under Discovery Grant DP0559213, the National Natural Science Foundation of China (10471045, 60433020), the Natural Science Foundation of Guangdong Province (031360, 04020079), the Key Technology Research and Development Program of Guangdong Province (2005B10101010, 2005B70101118), the Key Technology Research and Development Program of Tianhe District (051G041), the Open Research Fund of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education (93K-17-2006-03), and the Natural Science Foundation of South China University of Technology (B13-E5050190).
Additional information
This research work was carried out at Faculty of Information Technology, University of Technology Sydney.
Cite this article
Yang, X., Lu, J. & Zhang, G. Adaptive pruning algorithm for least squares support vector machine classifier. Soft Comput 14, 667–680 (2010). https://doi.org/10.1007/s00500-009-0434-0