Forecasting with Recurrent Neural Networks: 12 Tricks

Chapter in: Neural Networks: Tricks of the Trade

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7700)

Abstract

Recurrent neural networks (RNNs) are often regarded as relatively simple architectures that come with complicated learning algorithms. This paper takes a different view: we start from the fact that RNNs can model any high-dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a well-known example of this modeling philosophy: a temporal algorithm is transferred into an architectural framework so that learning can be performed by an extension of standard error backpropagation.

We introduce 12 tricks that not only provide deeper insights into the functioning of RNNs but also improve the identification of the underlying dynamical systems from data.
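The modeling philosophy sketched above can be illustrated with a minimal example. The sketch below (not the authors' code; dimensions, initialization scales, and the toy task are illustrative assumptions) writes an RNN as a state-space model, s_t = tanh(A s_{t-1} + B x_t), y_t = C s_t, unfolds it over T time steps, and trains the shared weight matrices with an extension of standard error backpropagation through the unfolded architecture:

```python
import numpy as np

# Minimal sketch: an RNN as a state-space model,
#   s_t = tanh(A s_{t-1} + B x_t),  y_t = C s_t,
# unfolded in time so that standard error backpropagation can be
# applied to the weights A, B, C shared across all time steps.
rng = np.random.default_rng(0)
dim_x, dim_s, dim_y, T = 2, 4, 1, 20

A = rng.normal(0, 0.1, (dim_s, dim_s))  # state transition (shared over time)
B = rng.normal(0, 0.1, (dim_s, dim_x))  # input weights (shared over time)
C = rng.normal(0, 0.1, (dim_y, dim_s))  # output weights (shared over time)

# Toy dynamical system to identify: a scaled running sum of one input channel.
X = rng.normal(0, 1, (T, dim_x))
Y = np.cumsum(X[:, :1], axis=0) * 0.1

def forward(A, B, C, X):
    """Unfold the network over T time steps; return all states and outputs."""
    S = [np.zeros(dim_s)]
    out = []
    for x in X:
        S.append(np.tanh(A @ S[-1] + B @ x))
        out.append(C @ S[-1])
    return S, np.array(out)

mse_before = float(np.mean((forward(A, B, C, X)[1] - Y) ** 2))

lr = 0.05
for epoch in range(300):
    S, out = forward(A, B, C, X)
    err = out - Y                        # residual at every time step
    dA = np.zeros_like(A); dB = np.zeros_like(B); dC = np.zeros_like(C)
    ds_next = np.zeros(dim_s)            # gradient flowing backwards in time
    for t in reversed(range(T)):
        dC += np.outer(err[t], S[t + 1])
        ds = C.T @ err[t] + ds_next      # gradient w.r.t. state s_t
        dpre = ds * (1.0 - S[t + 1] ** 2)   # through the tanh nonlinearity
        dA += np.outer(dpre, S[t])
        dB += np.outer(dpre, X[t])
        ds_next = A.T @ dpre             # propagate to the previous time step
    A -= lr * dA / T; B -= lr * dB / T; C -= lr * dC / T

mse_after = float(np.mean((forward(A, B, C, X)[1] - Y) ** 2))
```

Because the unfolded network is just a deep feedforward architecture with weight sharing, the backward pass accumulates the same matrices dA, dB, dC at every step, which is exactly how the architectural view turns a temporal learning problem into ordinary backpropagation.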




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Zimmermann, H.-G., Tietz, C., Grothmann, R. (2012). Forecasting with Recurrent Neural Networks: 12 Tricks. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science, vol. 7700. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35289-8_37


  • DOI: https://doi.org/10.1007/978-3-642-35289-8_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35288-1

  • Online ISBN: 978-3-642-35289-8
