
A method to sparsify the solution of support vector regression

Original Article · Neural Computing and Applications

Abstract

Although the solution of the support vector machine is relatively sparse, it makes unnecessarily liberal use of basis functions, since the number of support vectors required typically grows linearly with the size of the training set. In this paper, we present a simple post-processing method to sparsify the solution of support vector regression (SVR). The main idea is as follows: first, we train an SVR machine on the full training set; then another SVR machine is trained only on a subset of the full training set, with modified target values. This process is repeated iteratively several times. Experiments indicate that the proposed method can greatly reduce the number of support vectors while maintaining the good generalization capacity of SVR.
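As a rough illustration of the procedure described above, the sketch below implements such an iterative post-processing loop using scikit-learn's SVR; it is not the paper's exact algorithm. Because the abstract does not say how the subset is chosen or how the target values are modified, the sketch assumes the subset is the current model's support vectors and the modified targets are that model's predictions on them; the function name sparsify_svr and all hyperparameter values are hypothetical.

    import numpy as np
    from sklearn.svm import SVR

    def sparsify_svr(X, y, n_iters=3, C=10.0, epsilon=0.1):
        # Step 1 (from the abstract): train an SVR machine on the full training set.
        model = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X, y)
        for _ in range(n_iters):
            # Assumed subset rule: keep only the current support vectors.
            idx = model.support_
            X_sub = X[idx]
            # Assumed target-modification rule: replace the targets by the
            # current model's predictions, then retrain on the subset only.
            y_sub = model.predict(X_sub)
            model = SVR(kernel="rbf", C=C, epsilon=epsilon).fit(X_sub, y_sub)
            X, y = X_sub, y_sub  # the next iteration works on the reduced set
        return model

    # Usage: fit a noisy sine curve and compare support-vector counts.
    rng = np.random.RandomState(0)
    X = np.sort(rng.uniform(0, 2 * np.pi, 200)).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.1 * rng.randn(200)
    full = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
    sparse = sparsify_svr(X, y)
    print("support vectors: full =", len(full.support_),
          " sparsified =", len(sparse.support_))

Whether the retained points are the support vectors themselves or some other subset, and how their targets are adjusted, is specified only in the full paper.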



Acknowledgement

This research was supported by the Natural Science Foundation of China (Grants 60373106 and 40225004).

Author information

Corresponding author

Correspondence to Gao Guo.


About this article

Cite this article

Guo, G., Zhang, JS. & Zhang, GY. A method to sparsify the solution of support vector regression. Neural Comput & Applic 19, 115–122 (2010). https://doi.org/10.1007/s00521-009-0258-y

