Abstract
In this paper, a simplification of the necessary and sufficient Karush–Kuhn–Tucker (KKT) optimality conditions for the Lagrangian support vector regression in 2-norm is proposed, resulting in a naïve root-finding problem for a system of equations in m variables, where m is the number of training examples. Since the resulting system contains the nonsmooth "plus" function, two solution approaches are considered: (i) reformulating the problem as an equivalent absolute value equation problem and solving it by functional iterative and generalized Newton methods, and (ii) solving the original root-finding problem directly by the generalized Newton method. Convergence proofs and pseudo-codes of the iterative methods are given. Numerical experiments on a number of synthetic and real-world benchmark datasets show generalization performance similar to support vector regression (SVR) with much faster learning, and performance as good as that of least squares SVR, the unconstrained Lagrangian SVR proposed in Balasundaram et al. (Neural Netw 51:67–79, 2014), twin SVR, and the extreme learning machine, clearly indicating the effectiveness and suitability of the proposed problem formulation solved by functional iterative and generalized Newton methods.
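The two solution strategies named in the abstract can be sketched in outline. A system built from the plus function can be rewritten in absolute-value form via the identity (x)₊ = (x + |x|)/2, and for a generic absolute value equation Ax − |x| = b (the form studied in Mangasarian, Optim Lett 3:101–108, 2009, cited below), a functional (fixed-point) iteration and a generalized Newton iteration look roughly as follows. This is a minimal illustrative sketch in NumPy, not the paper's actual system: in the paper, the matrix A and right-hand side b would be derived from the KKT conditions of the 2-norm Lagrangian SVR, and the function names here are hypothetical.

```python
import numpy as np

def functional_iteration_ave(A, b, tol=1e-6, max_iter=500):
    """Fixed-point iteration x_{k+1} = A^{-1}(|x_k| + b) for the
    absolute value equation A x - |x| = b; it converges, for example,
    when the smallest singular value of A exceeds 1."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

def generalized_newton_ave(A, b, tol=1e-6, max_iter=100):
    """Generalized Newton iteration x_{k+1} = (A - D(x_k))^{-1} b,
    where D(x) = diag(sign(x)) is a generalized Jacobian of |x|
    (Mangasarian, 2009). Since |x_k| = D(x_k) x_k, the Newton step
    for f(x) = A x - |x| - b reduces to solving (A - D(x_k)) x = b."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        D = np.diag(np.sign(x))
        x_new = np.linalg.solve(A - D, b)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```

The zero starting point is only a convenient default; in an SVR setting A would typically be assembled from the kernel matrix plus a regularization term, which also supplies the well-conditioning that the convergence arguments for both iterations rely on.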


References
Balasundaram S, Gupta D, Kapil (2014) Lagrangian support vector regression via unconstrained convex minimization. Neural Netw 51:67–79
Balasundaram S, Gupta D (2014) Training Lagrangian twin support vector regression via unconstrained convex minimization. Knowl Based Syst 59:85–96
Balasundaram S, Kapil (2011) Finite Newton method for implicit Lagrangian support vector regression. Int J Knowl Based Intell Eng Syst 15:203–214
Balasundaram S, Kapil (2010) On Lagrangian support vector regression. Expert Syst Appl 37:8784–8792
Chen S, Wang M (2005) Seeking multi-threshold directly from support vectors for image segmentation. Neurocomputing 67:335–344
Cristianini N, Shawe-Taylor J (2000) An introduction to support vector machines and other kernel-based learning methods. Cambridge University Press, Cambridge
Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
Ding S, Yu J, Qi B (2014) An overview on twin support vector machines. Artif Intell Rev 42(2):245–252
Fung G, Mangasarian OL (2004) A feature selection Newton method for support vector machine classification. Comput Optim Appl 28:185–202
Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. The Johns Hopkins University Press, Baltimore
Gretton A, Doucet A, Herbrich R, Rayner PJW, Scholkopf B (2001) Support vector regression for black-box system identification. In: Proceedings of the 11th IEEE workshop on statistical signal processing, pp 341–344
Guyon I, Weston J, Barnhill S, Vapnik V (2002) Gene selection for cancer classification using support vector machines. Mach Learn 46:389–422
Huang H, Ding S, Shi Z (2013) Primal least squares twin support vector regression. J Zhejiang Univ Sci C 14(9):722–732
Huang G-B, Zhou H, Ding X, Zhang R (2012) Extreme learning machine for regression and multiclass classification. IEEE Trans Syst Man Cybern Part B Cybern 42:513–528
Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70:489–501
Jayadeva, Khemchandani R, Chandra S (2007) Twin support vector machines for pattern classification. IEEE Trans Pattern Anal Mach Intell 29(5):905–910
Joachims T (1998) Text categorization with support vector machines: learning with many relevant features. In: Nédellec C, Rouveirol C (eds) Proceedings of the 10th European conference on machine learning (ECML-98), Chemnitz, Germany, pp 137–142
Mangasarian OL (2009) A generalized Newton method for absolute value equations. Optim Lett 3:101–108
Mangasarian OL, Musicant DR (2001) Lagrangian support vector machines. J Mach Learn Res 1:161–177
Mangasarian OL, Musicant DR (2001) Active set support vector machine classification. In: Leen TK, Dietterich TG, Tresp V (eds) Advances in neural information processing systems, vol 13. MIT Press, Cambridge, pp 577–586
Murphy PM, Aha DW (1992) UCI repository of machine learning databases. University of California, Irvine. http://www.ics.uci.edu/~mlearn
Musicant DR, Feinberg A (2004) Active set support vector regression. IEEE Trans Neural Netw 15(2):268–275
Osuna E, Freund R, Girosi F (1997) Training support vector machines: an application to face detection. In: IEEE conference on computer vision and pattern recognition, pp 130–136
Peng X (2010) TSVR: An efficient twin support vector machine for regression. Neural Netw 23(3):365–372
Sjöberg J, Zhang Q, Ljung L, Benveniste A, Delyon B, Glorennec P-Y, Hjalmarsson H, Juditsky A (1995) Nonlinear black-box modeling in system identification: a unified overview. Automatica 31:1691–1724
Souza LGM, Barreto GA (2006) Nonlinear system identification using local ARX models based on the self-organizing map. Learn Nonlinear Models Rev Soc Brasil Redes Neurais (SBRN) 4(2):112–123
Suykens JAK, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9(3):293–300
Vapnik VN (2000) The nature of statistical learning theory, 2nd edn. Springer, New York
Acknowledgments
The authors are very thankful to the referees for their useful comments. Mr. Gagandeep Benipal acknowledges financial assistance from the Maulana Azad National Fellowship, Government of India.
Cite this article
Balasundaram, S., Benipal, G. On a new approach for Lagrangian support vector regression. Neural Comput & Applic 29, 533–551 (2018). https://doi.org/10.1007/s00521-016-2521-3