
Simultaneous recurrent neural network trained with non-recurrent backpropagation algorithm for static optimisation

  • Published in: Neural Computing & Applications

Abstract

This paper explores the feasibility of employing the non-recurrent backpropagation training algorithm for a recurrent neural network, the Simultaneous Recurrent Neural Network, for static optimisation. A simplifying observation that maps the dynamics of the recurrent network, configured to operate in relaxation mode as a static optimiser, to feedforward network dynamics is leveraged to facilitate the application of a non-recurrent training algorithm such as standard backpropagation and its variants. A simulation study is conducted to assess the feasibility, optimising potential, and computational efficiency of training the Simultaneous Recurrent Neural Network with non-recurrent backpropagation. A comparative computational complexity analysis is performed between the Simultaneous Recurrent Neural Network trained with the non-recurrent backpropagation algorithm and the same network trained with the recurrent backpropagation algorithm. Simulation results demonstrate that it is feasible to apply non-recurrent backpropagation to train the Simultaneous Recurrent Neural Network. The optimality and computational complexity analysis fails to demonstrate any advantage of non-recurrent backpropagation over recurrent backpropagation for the optimisation problem considered. However, considerable future potential remains to be explored, given that computationally efficient variants of the backpropagation training algorithm, notably quasi-Newton and conjugate gradient descent among others, are also applicable to the neural network proposed for static optimisation in this paper.
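The core idea stated in the abstract, relaxing the recurrent network to a fixed point and then treating the converged state as the output of a single feedforward pass so that standard (non-recurrent) backpropagation can be applied, can be illustrated with a minimal sketch. The NumPy code below is not the authors' implementation; the layer sizes, sigmoid activations, quadratic error, learning rate, and the helper names relax and train_step are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                 # assumed layer sizes
W = rng.normal(scale=0.1, size=(n_hid, n_hid))     # recurrent weights
V = rng.normal(scale=0.1, size=(n_hid, n_in))      # input-to-hidden weights
b = np.zeros(n_hid)                                # biases


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def relax(x, max_iter=200, tol=1e-6):
    """Run the recurrent dynamics in relaxation mode until a fixed point."""
    y = np.zeros(n_hid)
    for _ in range(max_iter):
        y_next = sigmoid(W @ y + V @ x + b)
        if np.max(np.abs(y_next - y)) < tol:
            return y_next
        y = y_next
    return y


def train_step(x, target, eta=0.1):
    """One non-recurrent backpropagation update applied at the relaxed state."""
    global W, V, b
    y_star = relax(x)                      # converged (relaxed) network state
    # Treat y_star = sigmoid(W y_star + V x + b) as a single feedforward pass
    # and backpropagate through that one pass only (no unrolling in time).
    err = y_star - target                  # gradient of 0.5 * ||y - target||^2
    delta = err * y_star * (1.0 - y_star)  # chain rule through the sigmoid
    W -= eta * np.outer(delta, y_star)
    V -= eta * np.outer(delta, x)
    b -= eta * delta
    return 0.5 * float(err @ err)


# Example usage with dummy data (a real static optimisation task would
# derive the error from the problem's objective rather than a fixed target).
x = rng.uniform(size=n_in)
target = rng.uniform(size=n_hid)
for _ in range(50):
    loss = train_step(x, target)
print("final error:", loss)
```

By contrast, the recurrent backpropagation algorithm computes a gradient that also accounts for how the fixed point itself shifts with the weights, which is the comparison the paper's computational complexity analysis addresses.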


Acknowledgments

This project was funded in part by the US National Science Foundation under Grant No. 9800247. The authors gratefully acknowledge the computing resources made available by Dr. Demetrios Kazakos in his capacity as chairman of the Electrical Engineering and Computer Science Department at the University of Toledo, Toledo, Ohio, USA. The authors also appreciate the valuable feedback received from the anonymous referees, which noticeably improved this paper.

Author information


Corresponding author

Correspondence to Gursel Serpen.


About this article

Cite this article

Serpen, G., Xu, Y. Simultaneous recurrent neural network trained with non-recurrent backpropagation algorithm for static optimisation. Neural Comput & Applic 12, 1–9 (2003). https://doi.org/10.1007/s00521-003-0365-0

