Abstract
Artificial neural networks have demonstrated their power and efficiency in nonlinear control, chaotic time series prediction, and many other fields. Reinforcement learning, a learning paradigm that rewards the learner for correct actions and punishes wrong ones, has nevertheless rarely been applied to nonlinear prediction.
In this paper, we construct a multi-layer neural network and apply reinforcement learning, specifically a learning algorithm called Stochastic Gradient Ascent (SGA), to predict nonlinear time series. The proposed system consists of four layers: an input layer, a hidden layer, a stochastic parameter layer, and an output layer. Using a stochastic policy, the system optimizes its connection weights and output values to acquire the ability to predict nonlinear dynamics. In simulations, we used the Lorenz system and compared the short-term prediction accuracy of the proposed method with that of a classical learning method.
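The abstract describes the idea only at a high level, but the architecture it sketches (input layer, hidden layer, stochastic parameter layer emitting a distribution, sampled output trained by SGA) can be illustrated with a minimal REINFORCE-style sketch. The network sizes, the embedding dimension, the Gaussian policy, the reward definition, and all learning rates below are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """x-component of the Lorenz system via simple Euler integration."""
    x, y, z = 1.0, 1.0, 1.0
    xs = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return np.array(xs)

series = lorenz_series(2000)
series = (series - series.mean()) / series.std()  # normalize

embed, hidden = 3, 8                 # assumed sizes
W1 = rng.normal(0, 0.5, (hidden, embed))
w_mu = rng.normal(0, 0.5, hidden)    # weights producing the Gaussian mean
log_sigma = np.log(0.5)              # log std of the stochastic output
lr, baseline = 0.01, 0.0

for t in range(embed, len(series) - 1):
    x = series[t - embed:t]          # delay-coordinate input vector
    h = np.tanh(W1 @ x)              # hidden layer
    mu = w_mu @ h                    # stochastic parameter layer: mean
    sigma = np.exp(log_sigma)
    y = rng.normal(mu, sigma)        # sampled prediction (output layer)
    reward = -(series[t] - y) ** 2   # assumed reward: negative squared error

    # SGA/REINFORCE: gradients of log N(y; mu, sigma) w.r.t. parameters
    grad_mu = (y - mu) / sigma**2
    grad_w_mu = grad_mu * h
    grad_W1 = grad_mu * np.outer(w_mu * (1 - h**2), x)
    grad_ls = (y - mu) ** 2 / sigma**2 - 1.0

    adv = reward - baseline          # baseline reduces gradient variance
    w_mu += lr * adv * grad_w_mu
    W1 += lr * adv * grad_W1
    log_sigma = float(np.clip(log_sigma + lr * adv * grad_ls, -3.0, 1.0))
    baseline += 0.05 * (reward - baseline)
```

The key contrast with supervised back-propagation is that the network is never given the gradient of the error directly: it samples a prediction from its own output distribution and reinforces the sampled action in proportion to the (baseline-corrected) reward.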
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Kuremoto, T., Obayashi, M., Kobayashi, K. (2005). Nonlinear Prediction by Reinforcement Learning. In: Huang, DS., Zhang, XP., Huang, GB. (eds) Advances in Intelligent Computing. ICIC 2005. Lecture Notes in Computer Science, vol 3644. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11538059_112
Print ISBN: 978-3-540-28226-6
Online ISBN: 978-3-540-31902-3