Abstract
In this article we investigate the influence of a Pareto-like noise model on the performance of an artificial neural network used to predict a nonlinear time series. In contrast to a Gaussian noise model, a Pareto-like noise model is based on a power-law distribution, which has long tails compared to a Gaussian distribution and thus allows for larger fluctuations in the deviation between predicted and observed values of the time series. We define an optimization procedure that minimizes the mean squared error of the predicted time series by maximizing the likelihood function based on the Pareto-like noise model. Numerical results for an artificial time series show that this noise model gives better results than a model based on Gaussian noise, demonstrating that by allowing larger fluctuations the parameter space of the likelihood function can be searched more efficiently. As a consequence, our results may indicate a more generic characteristic of optimization problems that is not restricted to time series prediction.
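The contrast between the two noise models can be illustrated with a short numerical sketch. The code below compares a Gaussian negative log-likelihood of prediction residuals (proportional, up to constants, to the mean squared error) with a heavy-tailed, Pareto-like negative log-likelihood of the same residuals. The specific density p(ε) ∝ (1 + |ε|/s)^(-α), the scale s, and the exponent α are illustrative assumptions, not the exact form used in the paper.

```python
# Minimal sketch: Gaussian vs. heavy-tailed (power-law) noise model
# as a training objective for one-step time-series prediction.
# The Pareto-like density p(eps) ~ (1 + |eps|/scale)^(-alpha) is an
# assumption made for illustration only.

import numpy as np

def gaussian_nll(residuals, sigma=1.0):
    # Negative log-likelihood under Gaussian noise; up to additive
    # constants this is proportional to the mean squared error.
    return np.sum(0.5 * (residuals / sigma) ** 2
                  + np.log(sigma * np.sqrt(2.0 * np.pi)))

def powerlaw_nll(residuals, scale=1.0, alpha=3.0):
    # Negative log-likelihood under the assumed Pareto-like density;
    # the normalization constant is omitted because it does not depend
    # on the residuals. Large errors are penalized only logarithmically,
    # so the tails are much heavier than in the Gaussian case.
    return np.sum(alpha * np.log1p(np.abs(residuals) / scale))

# Toy residuals with a single large outlier.
eps = np.array([0.1, -0.2, 0.05, 3.0])
print("Gaussian NLL: ", gaussian_nll(eps))
print("Power-law NLL:", powerlaw_nll(eps))
```

In this toy example the single large residual dominates the Gaussian objective but contributes only logarithmically to the power-law objective, which is the mechanism referred to above: larger fluctuations are tolerated during the search of the parameter space.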
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Emmert-Streib, F., Dehmer, M. (2007). Optimization Procedure for Predicting Nonlinear Time Series Based on a Non-Gaussian Noise Model. In: Gelbukh, A., Kuri Morales, Á.F. (eds) MICAI 2007: Advances in Artificial Intelligence. Lecture Notes in Computer Science, vol 4827. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76631-5_51
DOI: https://doi.org/10.1007/978-3-540-76631-5_51
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-76630-8
Online ISBN: 978-3-540-76631-5