General transient length upper bound for recurrent neural networks

  • Computational Models of Neurons and Neural Nets
  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

We show how to construct a Lyapunov function for a discrete recurrent neural network using the variable-gradient method; the same method can also be used to derive the Hopfield energy function. Using our Lyapunov function, we compute an upper bound on the transient length of the network dynamics. We also show how the Lyapunov function gives insight into how introducing self-feedback weights affects the sizes of the basins of attraction of the equilibrium points in the network's state space.
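
A minimal sketch of the setting (not the authors' construction; the network size, weights, and update rule below are illustrative assumptions): a discrete Hopfield-style network with symmetric weights and asynchronous ±1 updates, together with the classical energy function E(x) = -1/2 xᵀWx + bᵀx, which is non-increasing along these dynamics. Counting update sweeps until a fixed point is reached gives an empirical transient length that an analytical upper bound of the kind derived here must dominate.

    import numpy as np

    def energy(W, b, x):
        """Classical Hopfield energy of a state x in {-1, +1}^n."""
        return -0.5 * x @ W @ x + b @ x

    def transient_length(W, b, x0, max_sweeps=1000):
        """Asynchronous updates x_i <- sign(W[i] @ x - b[i]); return sweeps until a fixed point."""
        x = x0.copy()
        for sweep in range(max_sweeps):
            changed = False
            for i in range(len(x)):
                new_xi = 1 if W[i] @ x - b[i] >= 0 else -1
                if new_xi != x[i]:
                    x[i] = new_xi
                    changed = True
            if not changed:
                return sweep, x      # equilibrium (fixed point) reached
        return max_sweeps, x         # convergence budget exhausted

    # Illustrative example: a small random symmetric network with zero self-feedback.
    rng = np.random.default_rng(0)
    n = 8
    A = rng.standard_normal((n, n))
    W = (A + A.T) / 2
    np.fill_diagonal(W, 0.0)         # no self-feedback weights
    b = np.zeros(n)
    x0 = rng.choice([-1, 1], size=n)
    sweeps, x_star = transient_length(W, b, x0)
    print("empirical transient length (sweeps):", sweeps)
    print("energy at equilibrium:", energy(W, b, x_star))

With symmetric weights and non-negative self-feedback, the energy strictly decreases whenever a unit changes state, so the loop above always terminates at a fixed point.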

Author information

A. M. C. L. Ho, P. De Wilde

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ho, A.M.C.L., De Wilde, P. (1995). General transient length upper bound for recurrent neural networks. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_176

  • DOI: https://doi.org/10.1007/3-540-59497-3_176

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7

  • eBook Packages: Springer Book Archive
