
Convergence of gradient method for a fully recurrent neural network

  • Original Paper
  • Published in Soft Computing

Abstract

Recurrent neural networks have been used successfully for the analysis and prediction of temporal sequences. This paper is concerned with the convergence of a gradient-descent learning algorithm for training a fully recurrent neural network. In the literature, stochastic process theory has been used to establish convergence results of a probabilistic nature for the on-line gradient training algorithm, under the assumption that a very large number (in theory, infinitely many) of training samples of the temporal sequences are available. In this paper, we consider the case in which only a limited number of training samples are available, so that a stochastic treatment of the problem is no longer appropriate. Instead, we use an off-line gradient training algorithm for the fully recurrent neural network and accordingly prove convergence results of a deterministic nature. The monotone decrease of the error function during the iterations is also guaranteed. A numerical example is given to support the theoretical findings.
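To make the setting concrete, the following is a minimal sketch of off-line (batch) gradient training for a small fully recurrent network on a single finite temporal sequence. It is not the authors' exact formulation or proof setting: the architecture (a tanh recurrent state with a fixed linear readout), the target sequence, and the learning rate are illustrative assumptions; the point is only that the gradient is accumulated over the whole sequence before each weight update, in contrast to on-line training.

```python
# A minimal sketch (illustrative assumptions, not the paper's exact model) of
# off-line gradient training: state s_t = tanh(W s_{t-1} + U x_t), scalar
# readout y_t = c^T s_t, squared error summed over one finite sequence.
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_hid = 20, 1, 4                 # sequence length, input size, state size
x = rng.standard_normal((T, n_in))        # finite training sequence (inputs)
d = np.sin(np.arange(T) / 3.0)            # target sequence (assumed for illustration)

W = 0.1 * rng.standard_normal((n_hid, n_hid))   # recurrent weights
U = 0.1 * rng.standard_normal((n_hid, n_in))    # input weights
c = 0.1 * rng.standard_normal(n_hid)            # readout weights (kept fixed here)

eta = 0.05                                # learning rate (assumed)

def run(W, U):
    """Forward pass over the whole sequence; return states, outputs, error."""
    s = np.zeros((T + 1, n_hid))          # s[0] is the zero initial state
    y = np.zeros(T)
    for t in range(T):
        s[t + 1] = np.tanh(W @ s[t] + U @ x[t])
        y[t] = c @ s[t + 1]
    err = 0.5 * np.sum((y - d) ** 2)
    return s, y, err

for epoch in range(200):
    s, y, err = run(W, U)
    # Off-line mode: backpropagate through time over the *entire* sequence and
    # accumulate the gradients for all t before a single weight update.
    dW = np.zeros_like(W)
    dU = np.zeros_like(U)
    ds_next = np.zeros(n_hid)             # gradient w.r.t. s[t+1] carried backwards
    for t in reversed(range(T)):
        ds = (y[t] - d[t]) * c + ds_next          # dE/ds[t+1]
        da = ds * (1.0 - s[t + 1] ** 2)           # through tanh
        dW += np.outer(da, s[t])
        dU += np.outer(da, x[t])
        ds_next = W.T @ da                        # pass gradient back to s[t]
    W -= eta * dW                          # one batch update per epoch
    U -= eta * dU
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  error {err:.6f}")
```

With a sufficiently small learning rate the error in this sketch typically decreases from epoch to epoch, which is the kind of monotonicity the paper establishes rigorously under its stated conditions.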



Acknowledgments

This work is partly supported by the National Natural Science Foundation of China (10471017).

Author information

Corresponding author

Correspondence to Zhengxue Li.


About this article

Cite this article

Xu, D., Li, Z. & Wu, W. Convergence of gradient method for a fully recurrent neural network. Soft Comput 14, 245–250 (2010). https://doi.org/10.1007/s00500-009-0398-0

