
A Note on the Equivalence of NARX and RNN

Abstract

This paper presents several results regarding the application of the NARX (Nonlinear AutoRegressive with eXogenous inputs) model and the Recurrent Neural Network (RNN) model in system identification and control. We show that every RNN can be transformed into a first-order NARX model, and vice versa, provided the neuron transfer function matches the NARX transfer function. If the neuron transfer function is piecewise linear, that is, f(x) := x if |x| < 1 and f(x) := sign(x) otherwise, we further show that every NARX model of order larger than one can be transformed into an RNN. These equivalence results offer three practical benefits: (i) if the output dimension of a NARX model is larger than the number of its hidden units, training an equivalent RNN will be faster; the equivalent RNN is trained instead of the NARX model, and once training is finished, the RNN is transformed back into an equivalent NARX model; (ii) conversely, if the output dimension of an RNN is smaller than the number of its hidden units, training the RNN can be sped up by the analogous method; (iii) RNN pruning can be accomplished in a much simpler way: the equivalent NARX model is pruned instead of the RNN, and after pruning, the NARX model is transformed back into the equivalent RNN.
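To make the equivalence concrete, below is a minimal sketch of the first-order case, assuming an Elman-style RNN with an identity readout (C = I). The saturating transfer function follows the abstract's definition; the matrices A and B, the dimensions, and the function names are illustrative assumptions, not the paper's construction (which also covers higher-order NARX models and non-trivial readouts).

```python
import numpy as np

def sat(x):
    # Piecewise linear transfer function from the abstract:
    # f(x) = x if |x| < 1, and f(x) = sign(x) otherwise.
    return np.clip(x, -1.0, 1.0)

rng = np.random.default_rng(0)
n, m = 3, 2                             # output/state dimension, input dimension
A = 0.5 * rng.standard_normal((n, n))   # illustrative NARX feedback weights
B = 0.5 * rng.standard_normal((n, m))   # illustrative NARX input weights

def narx_step(y_prev, u):
    # First-order NARX model: y(t) = f(A y(t-1) + B u(t))
    return sat(A @ y_prev + B @ u)

def rnn_step(h_prev, u):
    # Elman-style RNN with identity readout: h(t) = f(W h(t-1) + V u(t)),
    # y(t) = h(t). Choosing W = A and V = B reproduces the NARX trajectory.
    return sat(A @ h_prev + B @ u)

y = h = np.zeros(n)
for _ in range(20):
    u = rng.standard_normal(m)
    y, h = narx_step(y, u), rnn_step(h, u)
    assert np.allclose(y, h)            # the two models agree step for step
```

Under these assumptions the two recursions are literally the same map, which is why training or pruning can be carried out on whichever form is cheaper and then transformed back.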

Cite this article

Sum, J., Kan, WK. & Young, G. A Note on the Equivalence of NARX and RNN. NCA 8, 33–39 (1999). https://doi.org/10.1007/s005210050005
