
A Hierarchical and Parallel Method for Training Support Vector Machines

  • Conference paper
Advances in Neural Networks – ISNN 2005 (ISNN 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3496)


Abstract

In order to handle large-scale pattern classification problems, various sequential and parallel classification methods have been developed according to the divide-and-conquer principle. However, existing sequential methods require long training times, and some parallel methods suffer from decreased generalization accuracy and an increased number of support vectors. In this paper, we propose a novel hierarchical and parallel method for training support vector machines. The simulation results indicate that our method can not only speed up training but also reduce the number of support vectors while maintaining generalization accuracy.

This work was supported by the National Natural Science Foundation of China under the grants NSFC 60375022 and NSFC 60473040. This work was also supported in part by Open Fund of Grid Computing Center, Shanghai Jiao Tong University.
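The full paper is behind the access wall below, so the abstract's divide-and-conquer idea can only be illustrated generically here. The sketch that follows is an assumption-laden illustration of a cascade-style scheme, in the spirit of the hierarchical methods the abstract describes, and not the authors' actual algorithm: the function name `cascade_svm`, the `n_parts` parameter, the pairwise binary-tree merging, and the use of scikit-learn's `SVC` as the base trainer are all illustrative choices. The general pattern is: partition the data, train an SVM on each partition (these fits are independent and parallelizable), keep only each model's support vectors, merge the support-vector sets pairwise up a tree, and retrain at each merge.

```python
import numpy as np
from sklearn.svm import SVC

def _fit(X, y):
    """Base SVM trainer (illustrative hyperparameters)."""
    clf = SVC(kernel="rbf", gamma="scale", C=1.0)
    clf.fit(X, y)
    return clf

def _support_set(clf, X, y):
    """Keep only the training points that became support vectors."""
    return X[clf.support_], y[clf.support_]

def cascade_svm(X, y, n_parts=4, seed=0):
    """Hedged sketch of a cascade/hierarchical SVM scheme.

    Assumes each random partition contains examples of every class
    (true with overwhelming probability for balanced data).
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    # Leaf level: one SVM per partition; these fits are independent
    # and could be dispatched to parallel workers.
    level = []
    for idx in np.array_split(order, n_parts):
        clf = _fit(X[idx], y[idx])
        level.append(_support_set(clf, X[idx], y[idx]))
    # Internal levels: merge neighbouring support-vector sets,
    # retrain, and again keep only the surviving support vectors.
    while len(level) > 1:
        merged = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                Xm = np.vstack([level[i][0], level[i + 1][0]])
                ym = np.concatenate([level[i][1], level[i + 1][1]])
            else:  # odd set passes through to the next level unchanged
                Xm, ym = level[i]
            clf = _fit(Xm, ym)
            merged.append(_support_set(clf, Xm, ym))
        level = merged
    # Root: final model trained on the last surviving support vectors.
    return _fit(*level[0])
```

Because every level discards non-support vectors before retraining, each retraining problem is much smaller than the original, which is the source of both the speedup and the reduced support-vector count that the abstract claims for the authors' (different, unseen) method.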




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wen, Y., Lu, B. (2005). A Hierarchical and Parallel Method for Training Support Vector Machines. In: Wang, J., Liao, X., Yi, Z. (eds) Advances in Neural Networks – ISNN 2005. ISNN 2005. Lecture Notes in Computer Science, vol 3496. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11427391_141


  • DOI: https://doi.org/10.1007/11427391_141

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25912-1

  • Online ISBN: 978-3-540-32065-4

  • eBook Packages: Computer Science (R0)
