
A new approach for training Lagrangian support vector regression

  • Regular Paper
  • Published in Knowledge and Information Systems

Abstract

In this paper, a novel root-finding problem is formulated for the Lagrangian support vector regression in 2-norm (LSVR), in which the number of unknowns equals the number of training examples. We propose to solve it by functional iterative and Newton methods, and prove their linear rate of convergence under sufficient conditions. Experiments are performed on a number of synthetic and real-world benchmark datasets, and the results are compared with support vector regression (SVR) and its variants, such as least squares SVR and LSVR. The proposed formulation, solved by the iterative methods, attains generalization performance similar to SVR and its variants with improved or comparable learning speed, which demonstrates its usefulness.
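Although the exact system is given only in the full text, the abstract fixes the computational pattern: a piecewise-smooth system f(u) = 0 with one unknown per training example, solved either by a functional (fixed-point) iteration or by a generalized Newton method. The sketch below, in Python with NumPy, is a minimal illustration of these two solver templates under that assumption; the function names and the toy one-dimensional problem are hypothetical, not the paper's actual LSVR formulation.

```python
import numpy as np

# Hedged sketch: the paper's exact LSVR root-finding function is not given in
# the abstract, so the toy problem below is an illustrative assumption. The
# two templates mirror the solvers the abstract names.


def functional_iteration(g, u0, tol=1e-6, max_iter=1000):
    """Fixed-point iteration u_{k+1} = g(u_k); g must satisfy g(u*) = u* at
    the root u* and be a local contraction for linear convergence."""
    u = u0
    for _ in range(max_iter):
        u_next = g(u)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u


def newton_iteration(f, jac, u0, tol=1e-6, max_iter=100):
    """Generalized Newton iteration u_{k+1} = u_k - J(u_k)^{-1} f(u_k), where
    jac(u) returns an element of the generalized Jacobian (f may be only
    piecewise differentiable because of plus-function terms)."""
    u = u0
    for _ in range(max_iter):
        fu = f(u)
        if np.linalg.norm(fu) < tol:
            return u
        u = u - np.linalg.solve(jac(u), fu)
    return u


# Toy 1-D demo (hypothetical): solve f(u) = u + (u - 1)_+ - 2 = 0, root 1.5.
f = lambda u: u + np.maximum(u - 1.0, 0.0) - 2.0
jac = lambda u: np.atleast_2d(1.0 + (u > 1.0).astype(float))
g = lambda u: u - 0.5 * f(u)  # damped fixed-point map, a contraction here

print(newton_iteration(f, jac, np.array([0.0])))  # -> [1.5]
print(functional_iteration(g, np.array([0.0])))   # -> [1.5]
```

In this toy setting the damped fixed-point map converges linearly while the generalized Newton step handles the kink of the plus-function; the paper's contribution is establishing analogous convergence guarantees for the LSVR system itself.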




Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive comments. Mr. Yogendra Meena acknowledges the financial assistance awarded by the Rajiv Gandhi National Fellowship, Government of India.

Author information

Correspondence to S. Balasundaram.


Cite this article

Balasundaram, S., Meena, Y. A new approach for training Lagrangian support vector regression. Knowl Inf Syst 49, 1097–1129 (2016). https://doi.org/10.1007/s10115-016-0928-x
