
Fast Support Vector Regression Based on Cut

  • Conference paper
Advances in Swarm Intelligence (ICSI 2011)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6729)


Abstract

In general, similar input data have similar output target values. A novel Fast Support Vector Regression (FSVR) trained on a reduced training set is proposed. First, the training data are partitioned into blocks using traditional clustering methods such as K-means and FCM. Second, a membership function on each block is defined from the target values of the training data, so that every training sample receives a membership degree in the interval [0, 1]; multiplying the penalty coefficient C by this degree varies the penalty per sample. Third, FSVR is trained on the reduced training set, which consists of the samples whose membership degrees are greater than or equal to a suitably selected parameter λ. Experimental results on standard machine learning data sets show that FSVR not only achieves better or acceptable performance but also shrinks the training set and speeds up training.
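The three steps above might be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes K-means clustering, a simple distance-to-block-mean membership function, and scikit-learn's `SVR` with per-sample weights standing in for the varied penalty C; the paper's exact membership definition and choice of λ are not given in the abstract, so the `lam=0.6` threshold here is a placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def reduce_training_set(X, y, n_clusters=5, lam=0.6, seed=0):
    """Cluster inputs into blocks; within each block, score every sample
    by how close its target is to the block's mean target (a membership
    degree in [0, 1]) and keep only samples with membership >= lam."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    membership = np.ones(len(y))
    for k in range(n_clusters):
        idx = np.flatnonzero(labels == k)
        t = y[idx]
        spread = t.max() - t.min()
        if spread > 0:
            # 1 at the block-mean target, falling toward 0 at the extremes.
            membership[idx] = 1.0 - np.abs(t - t.mean()) / spread
    keep = membership >= lam
    return X[keep], y[keep], membership[keep]

# Toy regression problem; the per-sample penalty C * membership is
# emulated by passing the membership degrees as sample weights.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]
Xr, yr, m = reduce_training_set(X, y)
model = SVR(C=1.0).fit(Xr, yr, sample_weight=m)
print(len(y), "->", len(yr))
```

Samples whose targets sit far from their block's mean target are the ones discarded, which matches the premise that similar inputs should have similar outputs: a block member with an outlying target contributes little to a smooth regressor.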






Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhou, W., Xiong, Y., Wu, Ca., Liu, H. (2011). Fast Support Vector Regression Based on Cut. In: Tan, Y., Shi, Y., Chai, Y., Wang, G. (eds) Advances in Swarm Intelligence. ICSI 2011. Lecture Notes in Computer Science, vol 6729. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21524-7_44


  • DOI: https://doi.org/10.1007/978-3-642-21524-7_44

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21523-0

  • Online ISBN: 978-3-642-21524-7

  • eBook Packages: Computer Science, Computer Science (R0)
