
Autocovariance Based Weighting Strategy for Time Series Prediction with Weighted LS-SVM

Conference paper

Part of the book series: Advances in Soft Computing ((AINSC,volume 31))

Abstract

Classic kernel methods (SVM, LS-SVM) use arbitrarily chosen loss functions that penalize errors on all training samples equally. In time series prediction, better results can be achieved when the relative importance of the samples is expressed in the loss function. This paper presents an autocovariance-based weighting strategy for chaotic time series prediction. The proposed method can be regarded as a way to improve the performance of kernel algorithms by incorporating additional knowledge about the learning problem at hand.
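The abstract does not spell out the weighting rule, so the sketch below is only one plausible reading of the idea: a weighted LS-SVM regressor (RBF kernel assumed) whose per-sample weights are taken from the sample autocovariance of the series, so that training samples whose lag to the prediction point shows weak autocovariance contribute less to the loss. The function names and the specific weighting formula are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def autocovariance(x, max_lag):
    """Sample autocovariance of a 1-D series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    n = len(x)
    return np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(max_lag + 1)])

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_lssvm_fit(X, y, weights, gamma=10.0, sigma=1.0):
    """Solve the weighted LS-SVM dual system
       [0      1^T                 ] [b    ]   [0]
       [1   K + diag(1/(gamma*v_i))] [alpha] = [y]
    where v_i is the weight of training sample i."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * weights))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def weighted_lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Illustration on a toy series: delay embedding of dimension d, one-step-ahead target.
rng = np.random.default_rng(0)
series = np.sin(0.3 * np.arange(300)) + 0.05 * rng.standard_normal(300)
d = 5
X = np.array([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]

# Hypothetical weighting: normalized |autocovariance| at the lag separating each
# training sample from the most recent observation, so that samples at lags with
# weak autocovariance receive less influence in the loss.
acov = autocovariance(series, max_lag=len(y))
lags = np.arange(len(y))[::-1] + 1            # lag of each sample w.r.t. "now"
w = np.abs(acov[lags])
w = np.clip(w / w.max(), 1e-3, None)          # normalize, avoid zero weights

b, alpha = weighted_lssvm_fit(X, y, w, gamma=50.0, sigma=1.0)
y_hat = weighted_lssvm_predict(X, alpha, b, X[-1:], sigma=1.0)
```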





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Majewski, P. (2005). Autocovariance Based Weighting Strategy for Time Series Prediction with Weighted LS-SVM. In: Kłopotek, M.A., Wierzchoń, S.T., Trojanowski, K. (eds) Intelligent Information Processing and Web Mining. Advances in Soft Computing, vol 31. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32392-9_50


  • DOI: https://doi.org/10.1007/3-540-32392-9_50

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-25056-2

  • Online ISBN: 978-3-540-32392-1

  • eBook Packages: Engineering (R0)
