Abstract
This paper investigates a recent boosting algorithm for recurrent neural networks as a tool for modelling nonlinear dynamical systems. The method combines a large number of RNNs, each trained on a different set of examples. It builds on the boosting principle of concentrating the learning process on difficult examples but, unlike the original algorithm, keeps all available examples in play throughout training. Experiments on well-known chaotic processes illustrate the method's ability to internally encode useful information about the underlying process: the model finds an appropriate internal representation from observations of only a subset of the state variables, and yields improved prediction performance.
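The abstract describes the approach only at a high level. As a rough, self-contained illustration of the boosting-for-regression mechanics it alludes to, the Python sketch below trains weighted ridge regressors on delay vectors of a noisy logistic map and reweights all training examples after each round, so that no example is ever discarded. Everything here is an assumption for illustration: logistic_map, embed, fit_ridge, and boost_regression are hypothetical names, the ridge regressors stand in for the paper's recurrent networks, and the update rule follows a standard AdaBoost.R2-style scheme rather than the authors' exact algorithm.

    # Minimal sketch of boosting weak regressors for one-step-ahead
    # time-series prediction. Hypothetical simplification: weighted ridge
    # regressors on delay vectors stand in for the paper's RNNs.
    import numpy as np

    def logistic_map(n, x0=0.5, r=3.9):
        # A well-known chaotic process: x_{t+1} = r * x_t * (1 - x_t).
        x = np.empty(n)
        x[0] = x0
        for t in range(n - 1):
            x[t + 1] = r * x[t] * (1.0 - x[t])
        return x

    def embed(series, lag=3):
        # Delay-coordinate embedding: predict x_t from the previous `lag`
        # values; squared terms let a linear model capture the quadratic map.
        vals = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
        return np.hstack([vals, vals ** 2]), series[lag:]

    def fit_ridge(X, y, w, alpha=1e-3):
        # Weighted ridge regression: the "weak learner" in this sketch.
        Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
        G = Xb.T @ (w[:, None] * Xb) + alpha * np.eye(Xb.shape[1])
        beta = np.linalg.solve(G, Xb.T @ (w * y))
        return lambda Xq: np.hstack([Xq, np.ones((len(Xq), 1))]) @ beta

    def boost_regression(X, y, n_rounds=10):
        # AdaBoost.R2-style loop: hard examples gain weight each round,
        # but all examples stay in the training distribution.
        n = len(y)
        w = np.full(n, 1.0 / n)
        models, alphas = [], []
        for _ in range(n_rounds):
            model = fit_ridge(X, y, w)
            loss = np.abs(model(X) - y)
            loss /= loss.max()              # normalised linear loss in [0, 1]
            eps = np.sum(w * loss)          # weighted average loss
            if eps >= 0.5:                  # learner too weak; stop early
                break
            b = eps / (1.0 - eps)
            w *= b ** (1.0 - loss)          # shrink the weight of easy examples
            w /= w.sum()
            models.append(model)
            alphas.append(np.log(1.0 / b))
        def predict(Xq):
            # Combine the ensemble by a confidence-weighted average
            # (AdaBoost.R2 proper uses a weighted median).
            preds = np.array([m(Xq) for m in models])
            return np.average(preds, axis=0, weights=alphas)
        return predict

    rng = np.random.default_rng(0)
    series = logistic_map(600) + rng.normal(0.0, 0.01, 600)  # noisy chaotic series
    X, y = embed(series)
    X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]
    predict = boost_regression(X_tr, y_tr)
    rmse = np.sqrt(np.mean((predict(X_te) - y_te) ** 2))
    print(f"test RMSE: {rmse:.4f}")

The weighted-average combination and the linear loss are deliberate simplifications to keep the sketch short; the key property the abstract emphasises, namely that reweighting concentrates on difficult examples while every example retains a nonzero weight, is what the update step above reproduces.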
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Assaad, M., Boné, R., Cardot, H. (2006). Predicting Chaotic Time Series by Boosted Recurrent Neural Networks. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4233. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893257_92
DOI: https://doi.org/10.1007/11893257_92
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-46481-5
Online ISBN: 978-3-540-46482-2
eBook Packages: Computer Science (R0)