Abstract
This article addresses the problem of shortening the time required for training and decision making by classifiers based on Support Vector Machine techniques. We propose a hybrid implementation of this algorithm that combines a GPU-based parallel implementation of SVM with a distributed computing system using the MPI protocol. To estimate the computational efficiency of the proposed model, a number of experiments were carried out on UCI benchmark datasets. Their results show that using the parallel model in a distributed computing environment can reduce computation time compared with both the classical single-processor SVM and a GPU-based SVM implementation.
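The hybrid scheme described above can be sketched as follows: the training set is partitioned across MPI ranks, each rank trains an SVM on its shard (on the real system this local step runs on a GPU), and the per-rank models are combined. The sketch below simulates the MPI ranks and the GPU step in-process with plain Python; the function names (`train_local_svm`, `hybrid_train`) and the model-averaging combination step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hybrid MPI + GPU idea, assuming a simple
# model-averaging combination step. MPI ranks and the GPU kernel are
# simulated in-process; names are hypothetical, not the paper's API.
import random

def train_local_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Linear SVM trained with sub-gradient descent on the hinge loss.
    Stands in for the per-node GPU training step."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (w[0] * xi[0] + w[1] * xi[1] + b)
            if margin < 1:
                # Hinge loss is active: step toward the misclassified point.
                w = [w[j] + lr * (yi * xi[j] - lam * w[j]) for j in range(2)]
                b += lr * yi
            else:
                # Only the regularization term contributes.
                w = [w[j] - lr * lam * w[j] for j in range(2)]
    return w, b

def hybrid_train(X, y, n_ranks=4):
    """Scatter the data over simulated MPI ranks, train locally on each
    shard, then average the local models (a simple combination scheme)."""
    shards = [(X[r::n_ranks], y[r::n_ranks]) for r in range(n_ranks)]
    models = [train_local_svm(Xs, ys) for Xs, ys in shards]
    w = [sum(m[0][j] for m in models) / n_ranks for j in range(2)]
    b = sum(m[1] for m in models) / n_ranks
    return w, b

def predict(model, x):
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

random.seed(0)
# Linearly separable toy data: class +1 around (2, 2), class -1 around (-2, -2).
X = [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(40)] \
  + [(random.gauss(-2, 0.5), random.gauss(-2, 0.5)) for _ in range(40)]
y = [1] * 40 + [-1] * 40

model = hybrid_train(X, y, n_ranks=4)
accuracy = sum(predict(model, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

In a real deployment, `hybrid_train` would use MPI scatter/gather calls to distribute shards across nodes, and `train_local_svm` would be replaced by a GPU solver; the in-process version only illustrates the data flow.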
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Gajewski, K., Wozniak, M. (2012). Performance Evaluation of Hybrid Implementation of Support Vector Machine. In: Yin, H., Costa, J.A.F., Barreto, G. (eds) Intelligent Data Engineering and Automated Learning - IDEAL 2012. IDEAL 2012. Lecture Notes in Computer Science, vol 7435. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32639-4_92
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-32638-7
Online ISBN: 978-3-642-32639-4