Neurocomputing

Volume 15, Issues 3–4, June 1997, Pages 225–248
Special paper
Using Fourier-neural recurrent networks to fit sequential input/output data

https://doi.org/10.1016/S0925-2312(97)00008-8

Abstract

This paper suggests the use of Fourier-type activation functions in fully recurrent neural networks. The main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.
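The abstract does not spell out the network equations. The sketch below (Python/NumPy) illustrates one common reading of such a model: a discrete-time fully recurrent state-space network x[t+1] = sigma(A x[t] + B u[t]), y[t] = C x[t], where the activation sigma is a finite sum of sinusoids (a "Fourier-type" activation). The state-space form, the matrix names A, B, C, the coefficient array coeffs, and the helper functions fourier_activation and simulate are illustrative assumptions, not details taken from the paper.

    import numpy as np


    def fourier_activation(z, coeffs):
        # Fourier-type activation applied componentwise to the vector z (assumed form):
        #   sigma(z_i) = sum_k a_k * sin(k * z_i) + b_k * cos(k * z_i)
        # coeffs has shape (K, 2), holding the pairs (a_k, b_k).
        k = np.arange(1, coeffs.shape[0] + 1)
        kz = np.outer(k, z)                      # shape (K, n)
        return coeffs[:, 0] @ np.sin(kz) + coeffs[:, 1] @ np.cos(kz)


    def simulate(A, B, C, coeffs, inputs, x0):
        # Discrete-time fully recurrent network (assumed state-space form):
        #   x[t+1] = sigma(A x[t] + B u[t]),   y[t] = C x[t]
        x = x0.copy()
        outputs = []
        for u in inputs:
            outputs.append(C @ x)
            x = fourier_activation(A @ x + B @ u, coeffs)
        return np.array(outputs)


    # Hypothetical example: a 4-state, single-input, single-output network
    # driven by a random input sequence of length 20.
    rng = np.random.default_rng(0)
    n, m, p, K = 4, 1, 1, 3
    A = 0.5 * rng.normal(size=(n, n))
    B = rng.normal(size=(n, m))
    C = rng.normal(size=(p, n))
    coeffs = 0.3 * rng.normal(size=(K, 2))
    u_seq = rng.normal(size=(20, m))
    y_seq = simulate(A, B, C, coeffs, u_seq, x0=np.zeros(n))

In this reading, the "internal coefficients" to be recovered from input/output data would be the entries of A, B, C and the Fourier coefficients of the activation; the code only simulates the forward map and does not attempt the closed-form recovery described in the paper.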

