Linear programming ν-nonparallel support vector machine and its application in vehicle recognition
Introduction
Support vector machines (SVMs) [1], [2], [3], [4], [5] have been successfully applied in many fields [6], [7], [8], [9], [10], [11], [12]. For both classification and regression, SVMs represent the decision boundary in terms of a typically small subset of all training points, called the support vectors (SVs). For standard binary support vector classification (SVC), the basic idea is to find the optimal separating hyperplane between the positive and negative data points. The optimal hyperplane is obtained by maximizing the margin between two parallel hyperplanes, which is implemented by solving a corresponding quadratic programming problem (QPP). By introducing the kernel trick, SVC can also solve nonlinear classification problems successfully.
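The margin-maximization idea above can be written, in the standard soft-margin form (a textbook formulation, not reproduced from this paper's own equations), as:

```latex
\begin{aligned}
\min_{w,\,b,\,\xi}\quad & \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i \\
\text{s.t.}\quad & y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \\
& \xi_i \ge 0,\quad i = 1,\dots,n,
\end{aligned}
```

where maximizing the margin $2/\|w\|$ between the hyperplanes $w^\top x + b = \pm 1$ is equivalent to minimizing $\|w\|^2$, and the slacks $\xi_i$ penalize margin violations.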
Recently, nonparallel hyperplane SVMs, including the generalized eigenvalue proximal support vector machine (GEPSVM) [13] and the twin support vector machine (TWSVM) [14], have been developed and have attracted much interest. TWSVM seeks two nonparallel proximal hyperplanes such that each hyperplane is closer to one of the two classes and at a distance of at least one from the other. It is implemented by solving two smaller QPPs instead of a single larger one, which makes TWSVM training approximately four times faster than standard SVC. TWSVMs have been studied extensively [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28]. The nonparallel support vector machine (NPSVM) [26], [27], an improved TWSVM, is theoretically superior and overcomes several drawbacks of the existing TWSVMs. The ν version of NPSVM, ν-NPSVM [29], is parameterized by the quantity ν, which lets one effectively control the number of support vectors. By combining ν-support vector classification (ν-SVC) and ν-support vector regression (ν-SVR) to construct the primal problems, ν-NPSVM inherits the advantages of ν-SVM and thus enables us to eliminate one of the other free parameters of NPSVM: the accuracy parameter ε or the regularization constant C.
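For reference, the pair of smaller QPPs solved by TWSVM [14] has the following standard form (with the usual notation: the rows of A and B are the positive and negative training points, e₁ and e₂ are vectors of ones):

```latex
\begin{aligned}
(\text{TWSVM1})\quad \min_{w_1,\,b_1,\,\xi}\quad & \frac{1}{2}\|A w_1 + e_1 b_1\|^2 + c_1\, e_2^\top \xi \\
\text{s.t.}\quad & -(B w_1 + e_2 b_1) + \xi \ge e_2,\quad \xi \ge 0, \\[4pt]
(\text{TWSVM2})\quad \min_{w_2,\,b_2,\,\eta}\quad & \frac{1}{2}\|B w_2 + e_2 b_2\|^2 + c_2\, e_1^\top \eta \\
\text{s.t.}\quad & (A w_2 + e_1 b_2) + \eta \ge e_1,\quad \eta \ge 0.
\end{aligned}
```

Each problem keeps one class close to its hyperplane while pushing the other class at least unit distance away, which is the sense in which the two hyperplanes are "nonparallel."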
In the above TWSVMs and NPSVMs, quadratic programming problems need to be solved. However, it is also possible to formulate classification problems as linear programs, replacing the quadratic objective function by a linear one. In this paper, we propose a linear programming ν-NPSVM. This algorithm can be considered an improved version of ν-NPSVM; the main difference is that our algorithm relies on solving linear programs instead of quadratic programs. Experimental results show that our algorithm is several times faster than ν-NPSVM while maintaining the same classification accuracy.
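As a concrete illustration of the linear programming approach, the sketch below trains a generic 1-norm soft-margin SVM (not the ν-LPNPSVM formulation itself) on a hypothetical toy data set with `scipy.optimize.linprog`. The 1-norm of w is linearized by splitting w = p − q with p, q ≥ 0, so the whole problem becomes a single LP:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy data: two linearly separable classes in 2-D.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n, d = X.shape
C = 10.0  # slack penalty (illustrative value)

# Variables z = [p (d), q (d), b_pos, b_neg, xi (n)],
# with w = p - q and b = b_pos - b_neg, all components >= 0.
c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])

# Margin constraints y_i (w.x_i + b) + xi_i >= 1, as A_ub @ z <= b_ub.
A_ub = np.zeros((n, 2 * d + 2 + n))
for i in range(n):
    A_ub[i, :d] = -y[i] * X[i]          # -y_i p.x_i
    A_ub[i, d:2 * d] = y[i] * X[i]      # +y_i q.x_i
    A_ub[i, 2 * d] = -y[i]              # -y_i b_pos
    A_ub[i, 2 * d + 1] = y[i]           # +y_i b_neg
    A_ub[i, 2 * d + 2 + i] = -1.0       # -xi_i
b_ub = -np.ones(n)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (2 * d + 2 + n))
w = res.x[:d] - res.x[d:2 * d]
b = res.x[2 * d] - res.x[2 * d + 1]
pred = np.sign(X @ w + b)
```

On this separable toy set the LP finds a hyperplane classifying all four points correctly; the same split-variable trick is what turns a 1-norm regularizer into a linear objective in any LP-based SVM.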
The paper is organized as follows. Section 2 briefly reviews the original TWSVM, NPSVM, and its ν version, ν-NPSVM. Section 3 proposes the ν-NPSVM in linear programming. Section 4 presents experimental results and a real application. Section 5 contains concluding remarks.
Section snippets
Background
In this section, we briefly introduce the basic TWSVM, the standard NPSVM and the ν-NPSVM.
Linear case
From ν-NPSVM, we know that the solutions of the primal problems can be expressed as linear combinations of the training points, where α and β are the solutions of the dual problems. Applying Eq. (9), we replace the quadratic regularization terms with the corresponding 1-norm terms of the dual variables; furthermore, by introducing the associated linear constraints on α and β, we finally obtain two linear programming problems (LPPs).
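The general substitution behind such LP formulations can be sketched as follows (generic notation, not the paper's exact Eq. (9)): if the dual expansion gives

```latex
w = \sum_{i=1}^{n} \lambda_i\, \Phi(x_i),
```

then the quadratic regularizer is replaced by the 1-norm of the coefficient vector, which is linearized with split variables:

```latex
\frac{1}{2}\|w\|^2 \;\longrightarrow\; \|\lambda\|_1 = \sum_{i=1}^{n} |\lambda_i|
= \sum_{i=1}^{n} (\lambda_i^{+} + \lambda_i^{-}),
\qquad \lambda_i = \lambda_i^{+} - \lambda_i^{-},\;\; \lambda_i^{+},\lambda_i^{-} \ge 0,
```

so both the objective and the constraints become linear in $\lambda^{+}, \lambda^{-}$, yielding an LPP in place of each QPP.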
Numerical experiments and application
In this section, we perform the numerical experiments on the UCI datasets, and also apply this new method to the vehicle recognition problem.
Conclusion
In this paper, we proposed a linear programming formulation of ν-NPSVM, termed ν-LPNPSVM. Experiments and a real application to vehicle recognition have shown that our algorithm is several times faster than ν-NPSVM while maintaining the same level of accuracy. It is therefore suitable for solving large-scale data sets. Future research includes extensions to multi-class classification, regression and other
Acknowledgments
This work was joint-supported by the Research Project for Outstanding Scholars of Beijing Municipal Commission of Transport (No. kj2013-2-34), and the Special Fund Basic Scientific Research of Central Universities and Colleges (No. 2013JBM055).
Guangyu Zhu, born in 1972, an Associate Professor of Beijing Jiaotong University. His research interests focus on intelligent transportation analysis.
References (29)
- et al., Texture classification using the support vector machines, Pattern Recognit. (2003)
- et al., Robust and efficient multiclass SVM models for phrase pattern recognition, Pattern Recognit. (2008)
- et al., Color image segmentation using pixel wise support vector machine classification, Pattern Recognit. (2011)
- et al., A novel SVM+NDA model for classification with an application to face recognition, Pattern Recognit. (2012)
- et al., Application of smoothing technique on twin support vector machines, Pattern Recognit. Lett. (2008)
- et al., A coordinate descent margin based-twin support vector machine for classification, Neural Netw. (2012)
- TSVR: an efficient twin support vector machine for regression, Neural Netw. (2010)
- et al., Laplacian twin support vector machine for semi-supervised classification, Neural Netw. (2012)
- et al., Twin support vector machine with universum data, Neural Netw. (2012)
- et al., Structural twin support vector machine for classification, Knowl. Based Syst. (2013)
- Support-vector networks, Mach. Learn.
- The Nature of Statistical Learning Theory
- Statistical Learning Theory
- A tutorial on support vector machines for pattern recognition, Data Min. Knowl. Discov.
Cited by (4)
- Wavelet kernel twin support vector machine, 2021, Journal of Information Hiding and Multimedia Signal Processing
- Vehicle Identification Method based on Vehicle Combination Characteristics, 2019, Proceedings of 2019 IEEE 3rd Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2019