Approximation errors of state and output trajectories using recurrent neural networks

  • Poster Presentations 3
  • Conference paper
Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1112)

Abstract

This paper addresses the problem of estimating training error bounds on the state and output trajectories of a class of recurrent neural networks used as models of nonlinear dynamic systems. We present bounds on the trajectory error between the recurrent neural network models and the target systems. The bounds hold provided the models have been trained on N trajectories starting from N independent random initial values uniformly distributed over [a, b]^m ⊂ R^m.
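The sampling setup the abstract describes can be illustrated with a short sketch. In the Python fragment below, everything is a hypothetical stand-in: target_step, model_step, and the chosen dimensions are illustrative, not the systems or the trained networks studied in the paper. The sketch draws N independent initial values uniformly from [a, b]^m, rolls each through a target system and a model, and records the worst-case (sup over time) trajectory error per run, i.e., the kind of quantity the paper's bounds concern.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 2          # state dimension
N = 100        # number of independent initial values / training trajectories
T = 50         # number of discrete time steps per trajectory
a, b = -1.0, 1.0

def target_step(x):
    """Hypothetical nonlinear target system x_{k+1} = f(x_k); a stand-in."""
    return np.tanh(np.array([0.9 * x[0] + 0.2 * x[1],
                             -0.3 * x[0] + 0.8 * x[1]]))

def model_step(x):
    """Stand-in for a trained RNN model: a slightly perturbed copy of f,
    so the trajectory error is small but nonzero."""
    return np.tanh(np.array([0.88 * x[0] + 0.21 * x[1],
                             -0.29 * x[0] + 0.79 * x[1]]))

def rollout(step, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(step(xs[-1]))
    return np.array(xs)          # shape (steps + 1, m)

# N independent initial values, uniformly distributed over [a, b]^m
x0s = rng.uniform(a, b, size=(N, m))

# Per-run trajectory error: sup over time of the Euclidean state error
errors = np.array([
    np.max(np.linalg.norm(rollout(target_step, x0, T)
                          - rollout(model_step, x0, T), axis=1))
    for x0 in x0s
])

print(f"mean sup-norm trajectory error over {N} runs: {errors.mean():.4f}")
print(f"max  sup-norm trajectory error over {N} runs: {errors.max():.4f}")
```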

Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, B., Si, J. (1996). Approximation errors of state and output trajectories using recurrent neural networks. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_135

  • DOI: https://doi.org/10.1007/3-540-61510-5_135

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2

  • eBook Packages: Springer Book Archive
