On a new approach for Lagrangian support vector regression

  • Original Article
  • Neural Computing and Applications

Abstract

In this paper, a simplification of the necessary and sufficient Karush–Kuhn–Tucker (KKT) optimality conditions for Lagrangian support vector regression in 2-norm is proposed, resulting in a naïve root finding problem for a system of equations in m variables, where m is the number of training examples. However, since the resulting system of equations contains terms involving the nonsmooth "plus" function, two approaches are considered: (i) reformulate the problem as an equivalent absolute value equation problem and solve it by functional iterative and generalized Newton methods, and (ii) solve the original root finding problem directly by the generalized Newton method. Proofs of convergence and pseudo-codes of the iterative methods are given. Numerical experiments performed on a number of synthetic and real-world benchmark datasets show generalization performance similar to that of support vector regression (SVR) with much faster learning, and performance as good as that of least squares SVR, the unconstrained Lagrangian SVR proposed in Balasundaram et al. (Neural Netw 51:67–79, 2014), twin SVR, and the extreme learning machine. These results clearly indicate the effectiveness and suitability of the proposed problem formulation solved by functional iterative and generalized Newton methods.
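
As a concrete illustration of approach (i), the sketch below (Python with NumPy; not code from the paper) implements the two generic solvers named in the abstract for an absolute value equation (AVE) of the form Ax − |x| = b, the form studied by Mangasarian [18]. The matrix A and vector b here are random stand-ins, not the specific quantities the paper derives from the 2-norm Lagrangian SVR KKT conditions, and the convergence conditions noted in the comments are the generic ones for this AVE form.

```python
import numpy as np


def functional_iteration(A, b, tol=1e-6, max_iter=1000):
    """Fixed-point iteration x_{k+1} = A^{-1}(b + |x_k|) for Ax - |x| = b.

    A generic sufficient condition for convergence is that the smallest
    singular value of A exceeds 1 (so ||A^{-1}|| < 1); the paper proves
    analogous conditions for its SVR-specific system.
    """
    A_inv = np.linalg.inv(A)
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_next = A_inv @ (b + np.abs(x))
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x


def generalized_newton(A, b, tol=1e-6, max_iter=50):
    """Generalized Newton method of Mangasarian [18] for Ax - |x| = b.

    The generalized Jacobian of x -> Ax - |x| is A - D(x) with
    D(x) = diag(sign(x)); each iteration solves one linear system.
    A - D(x) is assumed nonsingular at every iterate, which holds under
    conditions such as those given in [18].
    """
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        D = np.diag(np.sign(x))
        x_next = np.linalg.solve(A - D, b)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m = 200  # stands in for the number of training examples
    # Keep the smallest singular value of A well above 1 so both methods converge.
    A = 4.0 * np.eye(m) + 0.05 * rng.standard_normal((m, m))
    x_true = rng.standard_normal(m)
    b = A @ x_true - np.abs(x_true)
    for solver in (functional_iteration, generalized_newton):
        x = solver(A, b)
        print(solver.__name__, "residual:", np.linalg.norm(A @ x - np.abs(x) - b))
```

Approach (ii) is analogous: a generalized Newton step is applied directly to the original system containing the plus function (x)₊ = max(x, 0), using the step function as its generalized derivative.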

References

  1. Balasundaram S, Gupta D, Kapil (2014) Lagrangian support vector regression via unconstrained convex minimization. Neural Netw 51:67–79

  2. Balasundaram S, Gupta D (2014) Training Lagrangian twin support vector regression via unconstrained convex minimization. Knowl Based Syst 59:85–96

  3. Balasundaram S, Kapil (2011) Finite Newton method for implicit Lagrangian support vector regression. Int J Knowl Based Intell Eng Syst 15:203–214

  4. Balasundaram S, Kapil (2010) On Lagrangian support vector regression. Expert Syst Appl 37:8784–8792

  5. Chen S, Wang M (2005) Seeking multi-threshold directly from support vectors for image segmentation. Neurocomputing 67:335–344

  6. Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel based learning methods. Cambridge University Press, Cambridge

  7. Demsar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30

  8. Ding S, Yu J, Qi B (2014) An overview on twin support vector machines. Artif Intell Rev 42(2):245–252

  9. Fung G, Mangasarian OL (2004) A feature selection Newton method for support vector machine classification. Comput Optim Appl 28:185–202

  10. Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. The Johns Hopkins University Press, Baltimore

  11. Gretton A, Doucet A, Herbrich R, Rayner PJW, Scholkopf B (2001) Support vector regression for black-box system identification. In: Proceedings of the 11th IEEE workshop on statistical signal processing, pp 341–344

  12. Guyon I, Weston J, Barnhill S, Vapnik V (2002) Gene selection for cancer classification using support vector machines. Mach Learn 46:389–422

  13. Huang H, Ding S, Shi Z (2013) Primal least squares twin support vector regression. J Zhejiang Univ Sci C 14(9):722–732

  14. Huang G-B, Zhou H, Ding X, Zhang R (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern Part B Cybern 42:513–528

  15. Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489–501

  16. Jayadeva, Khemchandani R, Chandra S (2007) Twin support vector machines for pattern classification. IEEE Trans Pattern Anal Mach Intell 29(5):905–910

  17. Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: Nédellec C, Rouveirol C (eds) Proceedings of the 10th European conference on machine learning (ECML-98), Chemnitz, Germany, pp 137–142

  18. Mangasarian OL (2009) A generalized Newton method for absolute value equations. Optim Lett 3:101–108

  19. Mangasarian OL, Musicant DR (2001) Lagrangian support vector machines. J Mach Learn Res 1:161–177

  20. Mangasarian OL, Musicant DR (2001) Active set support vector machine classification. In: Leen TK, Dietterich TG, Tresp V (eds) Advances in neural information processing systems, vol 13. MIT Press, Cambridge, pp 577–586

  21. Murphy PM, Aha DW (1992) UCI repository of machine learning databases. University of California, Irvine. http://www.ics.uci.edu/~mlearn

  22. Musicant DR, Feinberg A (2004) Active set support vector regression. IEEE Trans Neural Netw 15(2):268–275

  23. Osuna E, Freund R, Girosi F (1997) Training support vector machines: an application to face detection. In: IEEE conference on computer vision and pattern recognition, pp 130–136

  24. Peng X (2010) TSVR: an efficient twin support vector machine for regression. Neural Netw 23(3):365–372

  25. Sjöberg J, Zhang Q, Ljung L, Benveniste A, Delyon B, Glorennec P-Y, Hjalmarsson H, Juditsky A (1995) Nonlinear black-box modeling in system identification: a unified overview. Automatica 31:1691–1724

  26. Souza LGM, Barreto GA (2006) Nonlinear system identification using local ARX models based on the self-organizing map. Learn Nonlinear Models Rev Soc Brasil Redes Neurais (SBRN) 4(2):112–123

  27. Suykens JAK, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3):293–300

  28. Vapnik VN (2000) The nature of statistical learning theory, 2nd edn. Springer, New York

Acknowledgments

The authors are very thankful to the referees for their useful comments. Mr. Gagandeep Benipal acknowledges the financial assistance provided by the Maulana Azad National Fellowship, Government of India.

Author information

Corresponding author

Correspondence to S. Balasundaram.

About this article

Cite this article

Balasundaram, S., Benipal, G. On a new approach for Lagrangian support vector regression. Neural Comput & Applic 29, 533–551 (2018). https://doi.org/10.1007/s00521-016-2521-3
