Abstract
We adapt a boosting algorithm to the problem of predicting future values of time series, using recurrent neural networks as base learners. Our experiments show that boosting does improve prediction accuracy, and that the weighted median outperforms the weighted mean for combining the base learners.
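As an illustrative sketch only (the paper's boosting weights and recurrent base learners are omitted here), the two combination rules compared in the abstract can be written as follows. The function names and the toy inputs are our own choices, not the authors':

```python
def weighted_median(predictions, weights):
    """Weighted median: sort the base-learner predictions, accumulate the
    corresponding weights, and return the first prediction whose cumulative
    weight reaches half of the total weight."""
    pairs = sorted(zip(predictions, weights))
    half = 0.5 * sum(weights)
    cumulative = 0.0
    for pred, w in pairs:
        cumulative += w
        if cumulative >= half:
            return pred

def weighted_mean(predictions, weights):
    """Weighted arithmetic mean of the base-learner predictions."""
    return sum(p * w for p, w in zip(predictions, weights)) / sum(weights)
```

With predictions `[1.0, 1.1, 5.0]` and weights `[1.0, 1.0, 0.5]`, the weighted mean is pulled toward the outlying value 5.0 (it gives 1.84), whereas the weighted median returns 1.1, which illustrates one reason a median-based combination can be more robust.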
© 2003 Springer-Verlag Wien
Cite this paper
Boné, R., Assaad, M., Crucianu, M. (2003). Boosting Recurrent Neural Networks for Time Series Prediction. In: Pearson, D.W., Steele, N.C., Albrecht, R.F. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-0646-4_4
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-00743-3
Online ISBN: 978-3-7091-0646-4