Boosting Recurrent Neural Networks for Time Series Prediction

Conference paper in Artificial Neural Nets and Genetic Algorithms

Abstract

We adapt a boosting algorithm to the problem of predicting future values of time series, using recurrent neural networks as base learners. Our experiments show that boosting yields improved results and that the weighted median outperforms the weighted mean for combining the base learners.
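The abstract compares two ways of combining the base learners' outputs. As a minimal illustration (the paper's own code is not given here, so the function names and example data below are hypothetical), the following Python sketch contrasts the weighted mean with the weighted median of a set of learner predictions, using per-learner weights of the kind produced by regression boosting schemes such as AdaBoost.R2:

```python
import numpy as np

def combine_weighted_mean(predictions, weights):
    # predictions: (n_learners, n_steps) array of forecasts,
    # weights: (n_learners,) positive per-learner weights.
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) @ np.asarray(predictions, dtype=float)

def combine_weighted_median(predictions, weights):
    # Weighted median, the combination rule used in regression
    # boosting schemes such as AdaBoost.R2: for each time step,
    # sort the learners' forecasts and return the first one at
    # which the cumulative weight reaches half the total weight.
    preds = np.asarray(predictions, dtype=float)
    w = np.asarray(weights, dtype=float)
    out = np.empty(preds.shape[1])
    for j in range(preds.shape[1]):
        order = np.argsort(preds[:, j])
        cum = np.cumsum(w[order])
        k = np.searchsorted(cum, 0.5 * w.sum())
        out[j] = preds[order[k], j]
    return out

# Hypothetical example: three learners' forecasts for two time steps.
preds = np.array([[0.8, 1.1],
                  [1.0, 1.3],
                  [1.6, 0.9]])
weights = np.array([0.2, 0.5, 0.3])
print(combine_weighted_mean(preds, weights))    # [1.14 1.14]
print(combine_weighted_median(preds, weights))  # [1.  1.1 ]
```

Unlike the mean, the weighted median ignores the magnitude of a single badly-off learner's forecast, which is consistent with the abstract's finding that it combines the learners better than the weighted mean.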


Copyright information

© 2003 Springer-Verlag Wien

About this paper

Cite this paper

Boné, R., Assaad, M., Crucianu, M. (2003). Boosting Recurrent Neural Networks for Time Series Prediction. In: Pearson, D.W., Steele, N.C., Albrecht, R.F. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-0646-4_4

  • DOI: https://doi.org/10.1007/978-3-7091-0646-4_4

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-00743-3

  • Online ISBN: 978-3-7091-0646-4
