Performance evaluation of support vector machine classification approaches in data mining

Published in: Cluster Computing

Abstract

At present, knowledge extraction from a given data set plays a significant role in many fields of our society. The feature selection process is used to choose a small number of relevant features in order to achieve better classification performance, and existing feature selection algorithms treat the task as a single-objective problem. Attribute selection is carried out by combining an attribute evaluator with a search method in the WEKA machine learning tool. The proposed method is performed in three phases. In the first phase, support vector classifiers are implemented with four kernel functions, namely the linear, polynomial, radial basis and sigmoid kernels, to classify the data items. In the second phase, classifier subset evaluation is applied for feature selection along with SVM classification to optimize the feature vectors and obtain the maximum accuracy. In the third phase, a new kernel approach is introduced that yields higher classification accuracy than the other four kernel methods. The experimental analysis shows that the SVM with the proposed kernel produces the highest accuracy among the kernel methods considered.
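The pipeline described in the abstract was built with the WEKA tool, but the three phases can be sketched in a few lines of Python. The sketch below uses scikit-learn as a stand-in for WEKA: the dataset (`load_breast_cancer`), the number of selected features, and the `mixed_kernel` function are placeholder assumptions chosen for illustration, and the custom kernel shown is not the kernel proposed in the paper.

```python
# Illustrative sketch only: scikit-learn stands in for WEKA's attribute
# evaluation and classifier-subset search. Dataset, feature count and the
# custom kernel below are placeholder assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Phase 1: compare the four standard SVM kernels by cross-validated accuracy.
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:8s} accuracy: {acc:.3f}")

# Phase 2: wrapper-style feature selection (analogous to WEKA's classifier
# subset evaluation) keeps the feature subset that maximizes SVM accuracy.
X_scaled = StandardScaler().fit_transform(X)
selector = SequentialFeatureSelector(
    SVC(kernel="rbf"), n_features_to_select=10, cv=5
)
X_reduced = selector.fit_transform(X_scaled, y)

# Phase 3: a user-defined kernel passed as a callable; this mixture of an
# RBF term and a linear term is only a hypothetical example.
def mixed_kernel(A, B, gamma=0.05, alpha=0.5):
    sq_dists = (
        np.sum(A ** 2, axis=1)[:, None]
        + np.sum(B ** 2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return alpha * np.exp(-gamma * sq_dists) + (1 - alpha) * (A @ B.T)

custom_clf = SVC(kernel=mixed_kernel)
print("custom kernel accuracy:",
      cross_val_score(custom_clf, X_reduced, y, cv=5).mean())
```

Passing a callable as the `kernel` argument of `SVC` is the generic way to plug any positive semi-definite kernel into the same comparison loop, so a proposed kernel can be evaluated against the four standard ones in exactly the same manner.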



Author information

Corresponding author

Correspondence to S. Chidambaram.


About this article

Cite this article

Chidambaram, S., Srinivasagan, K.G. Performance evaluation of support vector machine classification approaches in data mining. Cluster Comput 22 (Suppl 1), 189–196 (2019). https://doi.org/10.1007/s10586-018-2036-z

