Abstract
A recurrent Sigma-Pi-linked back-propagation neural network is presented. The input information is enriched by introducing “higher-order” terms, which are generated through functional-link input nodes. Based on the Sigma-Pi-linked model, this network can approximate more complex functions at a much faster convergence rate. The recurrent network is tested extensively on different types of linear and nonlinear time series. Compared to the conventional feedforward back-propagation network, its training converges substantially faster. The results indicate that the functional approximation capability of this recurrent network makes it well suited to time-series applications.
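The paper itself gives no pseudocode, so the following is only a rough illustrative sketch of the general idea described in the abstract: a functional-link expansion that augments each input vector with second-order product (Sigma-Pi) terms, feeding a simple recurrent hidden layer. All names, the choice of pairwise products as the higher-order terms, and the network dimensions are assumptions, not the authors' actual architecture.

```python
import numpy as np

def functional_link_expand(x):
    """Augment input vector x with second-order product terms x_i * x_j
    (i <= j), a common functional-link style expansion (assumed here)."""
    n = len(x)
    pairs = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, np.array(pairs)])

class RecurrentSigmaPiNet:
    """Minimal recurrent network over functional-link-expanded inputs."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        n_aug = n_in + n_in * (n_in + 1) // 2  # size after expansion
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_aug))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.w_out = rng.normal(0.0, 0.1, n_hidden)
        self.h = np.zeros(n_hidden)  # recurrent hidden state

    def step(self, x):
        """Process one time step: expand input, update state, emit output."""
        z = functional_link_expand(x)
        self.h = np.tanh(self.W_in @ z + self.W_rec @ self.h)
        return self.w_out @ self.h

# Hypothetical usage for one-step-ahead time-series prediction:
# net = RecurrentSigmaPiNet(n_in=1, n_hidden=8)
# predictions = [net.step(np.array([v])) for v in series]
```

In such a setup the weights would typically be trained with a gradient method such as back-propagation through time; the abstract does not specify which recurrent training scheme the authors use.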
Cite this article
Chow, T.W.S., Fei, G. Recurrent Sigma-Pi-linked back-propagation network. Neural Process Lett 1, 5–8 (1994). https://doi.org/10.1007/BF02310935