An accelerator for support vector machines based on the local geometrical information and data partition

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

The support vector machine (SVM) handles large datasets poorly because of its low training efficiency. One important line of solutions divides the whole dataset into smaller subsets via data partition and combines the results of the classifiers trained over those subsets. However, traditional data partition approaches can hardly preserve the class boundary of the dataset or control the size of the divided subsets, which greatly degrades their performance. To overcome this difficulty, we propose an accelerator for the SVM algorithm based on local geometrical information. In this algorithm, the feature space is divided by linear projection into several regions containing approximately equal numbers of training instances, and each SVM classifier trained over an extended region predicts only the unlabeled instances within the corresponding original region. The proposed algorithm not only preserves the decision boundary of the raw data but also saves substantial execution time when implemented in a parallel environment. Furthermore, the number of instances within each divided region can be effectively controlled, which makes it easier to balance the computational load across processors. Experiments show that the classification performance of the proposed algorithm compares favorably with four state-of-the-art algorithms while requiring the least training time.
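To make the partition-train-predict scheme above concrete, here is a minimal Python sketch using scikit-learn. It is an illustration under stated assumptions, not the paper's implementation: the projection onto the leading principal direction, the quantile cut points, the fixed-width extension margin, and the helper names (leading_direction, partitioned_svm) are all hypothetical choices standing in for the paper's actual construction of the extended regions.

```python
# Illustrative sketch of partition-by-projection SVM training (not the paper's exact method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def leading_direction(X):
    # One possible linear projection: the leading principal direction of the training data.
    return np.linalg.svd(X - X.mean(axis=0), full_matrices=False)[2][0]

def partitioned_svm(X_train, y_train, X_test, n_regions=4, margin=0.1):
    d = leading_direction(X_train)
    p_train, p_test = X_train @ d, X_test @ d
    # Quantile cut points yield regions with approximately equal instance counts.
    edges = np.quantile(p_train, np.linspace(0.0, 1.0, n_regions + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    y_pred = np.empty(len(X_test), dtype=y_train.dtype)
    for i in range(n_regions):
        lo, hi = edges[i], edges[i + 1]
        # Train on an extended region so instances near the cut point are kept...
        extended = (p_train >= lo - margin) & (p_train < hi + margin)
        # ...but predict only the test instances inside the original region.
        own = (p_test >= lo) & (p_test < hi)
        if own.any():
            clf = SVC(kernel="rbf", gamma="scale").fit(X_train[extended], y_train[extended])
            y_pred[own] = clf.predict(X_test[own])
    return y_pred

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
pred = partitioned_svm(X[:1500], y[:1500], X[1500:])
print("accuracy:", (pred == y[1500:]).mean())
```

Because each region's classifier is trained and queried independently, the per-region loop can be dispatched to separate processors, which is where the parallel speed-up described in the abstract comes from.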



Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61432011, No. U1435212, and No. 61876103), the Project of Key Research and Development Plan of Shanxi Province (201603D111014), and the 1331 Engineering Project of Shanxi Province, China.

Author information

Corresponding author

Correspondence to Jiye Liang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Song, Y., Liang, J. & Wang, F. An accelerator for support vector machines based on the local geometrical information and data partition. Int. J. Mach. Learn. & Cyber. 10, 2389–2400 (2019). https://doi.org/10.1007/s13042-018-0877-7
