
Optimization Procedure for Predicting Nonlinear Time Series Based on a Non-Gaussian Noise Model

  • Conference paper
MICAI 2007: Advances in Artificial Intelligence

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4827)


Abstract

In this article we investigate the influence of a Pareto-like noise model on the performance of an artificial neural network used to predict a nonlinear time series. In contrast to a Gaussian noise model, a Pareto-like noise model is based on a power-law distribution, which has heavier tails than a Gaussian distribution. This allows for larger fluctuations in the deviation between predicted and observed values of the time series. We define an optimization procedure that minimizes the mean squared error of the predicted time series by maximizing the likelihood function based on the Pareto-like noise model. Numerical results for an artificial time series show that this noise model yields better results than a model based on Gaussian noise, demonstrating that allowing larger fluctuations lets the parameter space of the likelihood function be searched more efficiently. As a consequence, our results may indicate a more generic characteristic of optimization problems not restricted to time series prediction.
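The contrast the abstract draws can be made concrete by comparing the two likelihood-based objectives. The sketch below is illustrative only: the paper's exact noise density is not reproduced on this page, so a generic power-law form p(e) ∝ (1 + |e|/scale)^(−α) is assumed as a stand-in for the "Pareto-like" model; the function names and parameters are hypothetical.

```python
import numpy as np

def gaussian_nll(errors, sigma=1.0):
    """Negative log-likelihood of residuals under i.i.d. Gaussian noise.

    Minimizing this in the model parameters is equivalent to minimizing
    the mean squared prediction error.
    """
    e = np.asarray(errors, dtype=float)
    return 0.5 * np.sum((e / sigma) ** 2) + e.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def pareto_like_nll(errors, scale=1.0, alpha=3.0):
    """Negative log-likelihood under an assumed power-law noise density.

    Takes p(e) proportional to (1 + |e|/scale)**(-alpha); the heavier
    tails penalize large prediction errors far less harshly than a
    Gaussian, which is the "larger fluctuations" the abstract refers to.
    """
    e = np.abs(np.asarray(errors, dtype=float))
    # Two-sided normalizing constant (alpha - 1) / (2 * scale), valid for alpha > 1.
    log_norm = np.log(2.0 * scale / (alpha - 1.0))
    return alpha * np.sum(np.log1p(e / scale)) + e.size * log_norm

# A single large residual inflates the Gaussian objective quadratically,
# but only logarithmically under the power-law model, so an optimizer
# maximizing the power-law likelihood can move through such regions.
small_residuals = np.array([0.1, -0.2, 0.15])
with_outlier = np.array([0.1, -0.2, 5.0])
```

Under these assumptions, the jump in the objective caused by the outlier is several times larger for the Gaussian model than for the power-law model, which is one intuition for why a heavier-tailed likelihood can let the search explore the parameter space more freely.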




Editor information

Alexander Gelbukh, Ángel Fernando Kuri Morales


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Emmert-Streib, F., Dehmer, M. (2007). Optimization Procedure for Predicting Nonlinear Time Series Based on a Non-Gaussian Noise Model. In: Gelbukh, A., Kuri Morales, Á.F. (eds) MICAI 2007: Advances in Artificial Intelligence. MICAI 2007. Lecture Notes in Computer Science, vol 4827. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76631-5_51


  • DOI: https://doi.org/10.1007/978-3-540-76631-5_51

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-76630-8

  • Online ISBN: 978-3-540-76631-5

  • eBook Packages: Computer Science (R0)
