
An improvement on parametric \(\nu \)-support vector algorithm for classification

S.I.: Computational Biomedicine
Annals of Operations Research

Abstract

Parametric \(\nu \)-support vector regression is an effective technique that has recently been applied to classification problems. The method provides a concurrent learning framework for both margin determination and function approximation, and it leads to a convex quadratic programming problem. In this paper we introduce a reformulation that converts this constrained problem into an unconstrained convex problem, and we propose an extension of Newton's method for solving the unconstrained problem. We compare the accuracy and efficiency of our method with those of support vector machines and the parametric \(\nu \)-support vector regression method. Experimental results on several UCI benchmark data sets indicate the high efficiency and accuracy of the proposed method.
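The abstract describes recasting the constrained convex quadratic program as an unconstrained convex problem and minimizing it with an extension of Newton's method. The following is a minimal sketch of that general idea only, not the authors' exact parametric \(\nu \) formulation: it minimizes a squared plus-function (squared-hinge) SVM objective, whose loss term is once differentiable, using a generalized Hessian in the Newton step. It is written in Python for self-containment (the paper's own appendix uses Matlab), and all names in it are illustrative.

```python
import numpy as np

def generalized_newton_svm(X, y, C=1.0, tol=1e-6, max_iter=50):
    """Minimize f(w) = 0.5*||w||^2 + (C/2)*||max(0, 1 - y*(X @ w))||^2.

    The plus-function term is C^{1,1} (once differentiable), so a
    generalized Hessian replaces the classical one in the Newton step.
    X: (m, n) data with a bias column appended by the caller.
    y: labels in {-1, +1}.
    """
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(max_iter):
        margin = 1.0 - y * (X @ w)
        p = np.maximum(margin, 0.0)            # plus function (x)_+
        grad = w - C * X.T @ (y * p)           # gradient of f
        if np.linalg.norm(grad) < tol:
            break
        active = (margin > 0).astype(float)    # generalized-Hessian indicator
        H = np.eye(n) + C * X.T @ (X * active[:, None])
        w -= np.linalg.solve(H, grad)          # Newton step, unit step length
    return w

# Toy usage: two well-separated Gaussian clouds.
rng = np.random.default_rng(0)
A = rng.normal(loc=2.0, size=(50, 2))
B = rng.normal(loc=-2.0, size=(50, 2))
X = np.hstack([np.vstack([A, B]), np.ones((100, 1))])  # append bias column
y = np.concatenate([np.ones(50), -np.ones(50)])
w = generalized_newton_svm(X, y, C=10.0)
acc = np.mean(np.sign(X @ w) == y)
```

Because the objective is piecewise quadratic, each Newton step solves a small linear system and the iteration typically terminates in a handful of steps; a line search can be added for robustness on harder problems.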

[Figures 1–4 appear in the full article.]



Author information

Corresponding author: Saeed Ketabchi.

Matlab code

% Generate random test matrices M (class +1) and N (class -1).
% Input: m1, m2, n;  Output: M, N
pl = inline('(abs(x)+x)/2');                 % plus function (x)_+
M = rand(m1,n);  M = 100*(M - 0.5*spones(M));
M(:,2) = M(:,1) + ones(m1,1) + 100*rand(m1,1) + 100*rand(m1,1);
N = rand(m2,n);  N = 100*(N - 0.5*spones(N));
N(:,2) = N(:,1) - ones(m2,1) - 100*rand(m2,1) - 100*rand(m2,1);
% Append three near-margin points and one outlier to each class.
uu = 5*rand(3,n);  uu1 = uu;  uu1(:,2) = uu1(:,1) + ones(3,1);
uu2 = uu;  uu2(:,2) = uu2(:,1) - ones(3,1);
M = [M; uu1; 10 0];  N = [N; uu2; 30 -20];
m1 = m1 + 4;  m2 = m2 + 4;  m = m1 + m2;
xM = -50:40*rand:50;  yM = xM + 1;  xN = -50:20*rand:50;  yN = xN - 1;
plot(M(:,1), M(:,2), 'ok', N(:,1), N(:,2), '*b');   % black circles, blue stars
axis square
format short;  [m1 m2 n], [max(M(:,1)) min(N(:,1))]


Cite this article

Ketabchi, S., Moosaei, H., Razzaghi, M. et al. An improvement on parametric \(\nu \)-support vector algorithm for classification. Ann Oper Res 276, 155–168 (2019). https://doi.org/10.1007/s10479-017-2724-8
