Abstract
In recent years, considerable progress has been made in modeling chaotic time series with neural networks. Most of this work concentrates on developing architectures and learning paradigms that minimize the prediction error. A more detailed analysis of modeling chaotic systems involves calculating the dynamical invariants that characterize a chaotic attractor. The features of the chaotic attractor are captured during learning only if the neural network learns these dynamical invariants. The two most important are the largest Lyapunov exponent, which indicates how far into the future predictions remain possible, and the correlation (fractal) dimension, which indicates how complex the dynamical system is. An additional useful quantity is the power spectrum of a time series, which also characterizes the dynamics of the system, and does so more thoroughly than the prediction error.
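To make the first of these invariants concrete, the following is a minimal sketch (not the authors' code) of estimating the largest Lyapunov exponent for the fully chaotic logistic map x_{n+1} = 4x_n(1 - x_n), by averaging ln|f'(x_n)| along an orbit. For this map the exact value is ln 2 ≈ 0.693; a positive exponent signals chaos and bounds the horizon over which prediction is meaningful.

```python
# Sketch: largest Lyapunov exponent of the logistic map at r = 4,
# computed as the orbit average of ln|f'(x)| with f'(x) = 4 - 8x.
import numpy as np

def logistic_lyapunov(x0=0.3, n_transient=1_000, n_samples=100_000):
    x = x0
    # Discard transients so the orbit settles onto the attractor.
    for _ in range(n_transient):
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_samples):
        acc += np.log(abs(4.0 - 8.0 * x))  # |f'(x)|
        x = 4.0 * x * (1.0 - x)
    return acc / n_samples

print(logistic_lyapunov())  # ~0.6931, i.e. ln 2
```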
In this paper, we introduce recurrent networks that are able to learn chaotic maps, and we investigate whether the resulting neural models also capture the dynamical invariants of chaotic time series. We show that the dynamical invariants can already be learned by feedforward neural networks, but that recurrent learning improves the dynamical modeling of the time series. We discover a novel type of overtraining, corresponding to the forgetting of the largest Lyapunov exponent during learning, and call this phenomenon dynamical overtraining. Furthermore, we introduce a penalty term that involves a dynamical invariant of the network and prevents dynamical overtraining. As examples we use the Hénon map, the logistic map, and a real-world chaotic series corresponding to the concentration of one of the chemicals, as a function of time, in experiments on the Belousov-Zhabotinskii reaction in a well-stirred flow reactor; a sketch of the data generation follows.
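As an illustration of the kind of benchmark series used here, the following sketch generates a scalar time series from the Hénon map with the classical parameters a = 1.4, b = 0.3 from Hénon's 1976 paper, and arranges it into one-step-ahead training pairs of the sort a feedforward or recurrent predictor would be trained on. The training setup shown is an assumption for illustration, not the authors' exact procedure.

```python
# Sketch: Hénon map x_{n+1} = 1 - a*x_n^2 + y_n, y_{n+1} = b*x_n,
# with classical parameters a = 1.4, b = 0.3.
import numpy as np

def henon_series(n, a=1.4, b=0.3, x0=0.1, y0=0.1, n_transient=500):
    x, y = x0, y0
    out = np.empty(n)
    for i in range(n_transient + n):
        x, y = 1.0 - a * x * x + y, b * x
        if i >= n_transient:  # keep only post-transient samples
            out[i - n_transient] = x
    return out

series = henon_series(10_000)
# Hypothetical one-step-ahead (input, target) pairs for a predictor:
inputs, targets = series[:-1], series[1:]
```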
Cite this article
Deco, G., Schürmann, B. Neural learning of chaotic dynamics. Neural Process Lett 2, 23–26 (1995). https://doi.org/10.1007/BF02312352