
Adaptive pruning algorithm for least squares support vector machine classifier

  • Original Paper
  • Published in: Soft Computing

Abstract

As a variant of the support vector machine (SVM), the least squares SVM (LS-SVM) works with equality instead of inequality constraints and a least squares cost function. A well-known drawback of LS-SVM in applications is that sparseness is lost. In this paper, we develop an adaptive pruning algorithm based on a bottom-to-top strategy that addresses this drawback. The proposed algorithm alternates incremental and decremental learning procedures to adaptively form a small support vector set that covers most of the information in the training set; the final classifier is then constructed from this set. In general, the support vector set contains far fewer elements than the training set, so a sparse solution is obtained. To test the efficiency of the proposed algorithm, we apply it to eight UCI datasets and one benchmark dataset. The experimental results show that the algorithm adaptively obtains sparse solutions with little loss of generalization performance on both noise-free and noisy classification problems, and that its training is much faster than the sequential minimal optimization (SMO) algorithm on large-scale noise-free classification problems.
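To make the setting concrete, the sketch below shows the standard LS-SVM dual: training reduces to solving one linear system, so every training point receives a nonzero coefficient α_i and sparseness is lost. The pruning loop shown is a deliberately simplified top-down illustration (repeatedly dropping the points with the smallest |α_i| and retraining, in the spirit of Suykens' sparse approximation), not the paper's adaptive bottom-to-top algorithm; all function names, the RBF kernel choice, and the drop schedule are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian RBF kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM training is a single linear system (equality constraints,
    # least squares cost):
    #   [ 0   y^T              ] [b]     [0]
    #   [ y   Omega + I/gamma  ] [alpha] = [1]
    # with Omega[i, j] = y_i * y_j * K(x_i, x_j).
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_new, X_sv, y_sv, alpha, b, sigma=1.0):
    # Decision function: sign( sum_i alpha_i y_i K(x, x_i) + b )
    K = rbf_kernel(X_new, X_sv, sigma)
    return np.sign(K @ (alpha * y_sv) + b)

def prune(X, y, keep_frac=0.5, gamma=10.0, sigma=1.0):
    # Simplified top-down pruning (illustrative, NOT the paper's
    # bottom-to-top algorithm): retrain, drop the ~10% of remaining
    # points with smallest |alpha_i|, and repeat until only keep_frac
    # of the training set survives as the "support vector" set.
    idx = np.arange(len(y))
    while len(idx) > keep_frac * len(y):
        alpha, _ = lssvm_train(X[idx], y[idx], gamma, sigma)
        n_drop = max(1, int(0.1 * len(idx)))
        keep = np.argsort(np.abs(alpha))[n_drop:]
        idx = idx[np.sort(keep)]
    alpha, b = lssvm_train(X[idx], y[idx], gamma, sigma)
    return idx, alpha, b
```

A usage example on two separable Gaussian blobs: `idx, alpha, b = prune(X, y, keep_frac=0.3)` followed by `lssvm_predict(X, X[idx], y[idx], alpha, b)` classifies with a support vector set about a third the size of the training set, which is the kind of sparse solution the paper's algorithm obtains adaptively rather than via a fixed pruning fraction.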



Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments and suggestions. The work presented in this paper is supported by the Australian Research Council (ARC) under discovery grant DP0559213, the National Natural Science Foundation of China (10471045, 60433020), the Natural Science Foundation of Guangdong Province (031360, 04020079), the Key Technology Research and Development Program of Guangdong Province (2005B10101010, 2005B70101118), the Key Technology Research and Development Program of Tianhe District (051G041), the Open Research Fund of the Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education (93K-17-2006-03), and the Natural Science Foundation of South China University of Technology (B13-E5050190).

Author information


Corresponding author

Correspondence to Xiaowei Yang.

Additional information

This research work was carried out at the Faculty of Information Technology, University of Technology Sydney.


About this article

Cite this article

Yang, X., Lu, J. & Zhang, G. Adaptive pruning algorithm for least squares support vector machine classifier. Soft Comput 14, 667–680 (2010). https://doi.org/10.1007/s00500-009-0434-0
