Constrained RTRL To Reduce Learning Rate and Forgetting Phenomenon

Abstract

A fully connected continuous-time recurrent neural network, trained by means of Real-Time Recurrent Learning (RTRL), is investigated. A theoretical analysis of the network's output vector during the training stage is performed. We point out the necessity of applying an additional constraint to the synaptic weight matrix in order to reduce the learning time while decreasing forgetting. The constraint consists of updating the weights of the output cells using the output error gradient within RTRL, together with a matrix of learning rates computed from an average of the previously memorized vectors. In this first approach to the problem, only fixed-point attractors are investigated. Simple computational simulations validate the method.
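
To make the constraint concrete, below is a minimal NumPy sketch of RTRL on a discretized continuous-time network with a matrix of learning rates in place of the usual scalar rate. The abstract does not give the exact rate formula, so the modulation used here (rates shrink for components close to the average of previously memorized vectors, so that established fixed points are disturbed less) is an illustrative assumption rather than the authors' rule; all function and variable names are hypothetical.

```python
# Sketch only: RTRL toward a fixed-point target with a per-weight
# learning-rate matrix derived from previously memorized patterns.
import numpy as np

def train_fixed_point(W, target, stored_avg, eta0=0.1, steps=2000, tau=5.0, dt=1.0):
    n = W.shape[0]
    y = np.zeros(n)                                 # network state
    P = np.zeros((n, n, n))                         # P[k, i, j] = dy_k / dW_ij
    # Assumed modulation: components near the stored average get smaller
    # rates, so previously memorized fixed points are disturbed less.
    rate = eta0 / (1.0 + np.abs(stored_avg))        # per-unit rates, shape (n,)
    eta = np.outer(rate, np.ones(n))                # learning-rate matrix (n, n)
    for _ in range(steps):
        s = W @ y
        phi, dphi = np.tanh(s), 1.0 - np.tanh(s) ** 2
        y_new = y + (dt / tau) * (-y + phi)         # Euler step of tau*dy/dt = -y + tanh(Wy)
        # RTRL sensitivity update for the discretized dynamics
        for i in range(n):
            for j in range(n):
                drive = W @ P[:, i, j]
                drive[i] += y[j]
                P[:, i, j] += (dt / tau) * (-P[:, i, j] + dphi * drive)
        e = target - y_new                          # output error at the fixed point
        W += eta * np.einsum('k,kij->ij', e, P)     # elementwise rates, not one scalar
        y = y_new
    return W, y

# Hypothetical usage: stored_avg is the mean of targets memorized so far.
rng = np.random.default_rng(0)
n = 5
W = 0.1 * rng.standard_normal((n, n))
memorized = [rng.uniform(-0.8, 0.8, n)]
new_target = rng.uniform(-0.8, 0.8, n)
W, y = train_fixed_point(W, new_target, stored_avg=np.mean(memorized, axis=0))
```

In this sketch, the elementwise rate matrix `eta` is where the constraint enters: it replaces the single scalar rate of standard RTRL, slowing updates along directions already occupied by memorized patterns.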

Author information

Corresponding author

Correspondence to Fabrice Druaux.

About this article

Cite this article

Druaux, F., Rogue, E. & Faure, A. Constrained RTRL To Reduce Learning Rate and Forgetting Phenomenon. Neural Processing Letters 7, 161–167 (1998). https://doi.org/10.1023/A:1009677128478
