
Neurocomputing

Volume 5, Issues 4–5, June 1993, Pages 165-174

Paper
Power series and neural-net computing

https://doi.org/10.1016/0925-2312(93)90004-M

Abstract

A power series expansion is represented by a three-layer feedforward network: the number of nodes in the hidden layer corresponds to the number of terms retained in the series, and the synaptic weights of the hidden layer represent the exponents used. The activation functions assigned to the input, hidden and output nodes are log, anti-log and linear, respectively. The training rules applied to determine the weights of the output layer, i.e. the optimal power series coefficients, are the ordinary delta rule and a least-squares-based simultaneous learning rule called the LSQ rule. The performance of this network is compared to that of a three-layer network with the same number of nodes but sigmoidal activation functions, trained by backpropagation. For the four selected test functions, the delta rule yields an output error considerably smaller than that of the sigmoid network. The best results, however, are obtained by combining the power network with the LSQ rule. In this case, performance relative to the number of training trials is, on average, about 30,000 times better than with backpropagation.
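
The hidden layer described above can be read as a fixed basis of power terms: a log at the input and an anti-log (exponential) in the hidden nodes turn a hidden weight w_j into the monomial x^{w_j}, and the linear output node forms the weighted sum of these monomials. The following is a minimal sketch of that construction, assuming scalar inputs, fixed integer exponents 0..M-1, an ordinary delta rule for the output weights, and a one-shot least-squares fit standing in for the LSQ rule; all names, parameter values and the test function are illustrative and not taken from the paper.

import numpy as np

def hidden_activations(x, exponents):
    # Log at the input, anti-log (exp) in the hidden nodes:
    # node j outputs exp(w_j * log x) = x ** w_j (inputs must be positive).
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    return np.exp(exponents * np.log(x))        # shape (n_samples, M)

def fit_lsq(x, y, exponents):
    # Least-squares fit of the output-layer weights (the power series
    # coefficients) in a single step.
    H = hidden_activations(x, exponents)
    coeffs, *_ = np.linalg.lstsq(H, y, rcond=None)
    return coeffs

def fit_delta(x, y, exponents, lr=0.01, epochs=20000):
    # Ordinary delta rule: iterative gradient steps on the same linear
    # output layer.
    H = hidden_activations(x, exponents)
    c = np.zeros(H.shape[1])
    for _ in range(epochs):
        err = H @ c - y
        c -= lr * H.T @ err / len(y)
    return c

if __name__ == "__main__":
    exponents = np.arange(4)              # exponents 0..3 (assumed)
    x = np.linspace(0.1, 2.0, 50)
    y = np.sin(x)                         # an illustrative target function
    print("LSQ coefficients:  ", fit_lsq(x, y, exponents))
    print("delta coefficients:", fit_delta(x, y, exponents))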
