Research Note
Parallel Implementation of a Recursive Least-Squares Neural Network Training Method on the Intel iPSC/2

https://doi.org/10.1006/jpdc.1993.1047

Abstract

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet, for many applications, the increased computational complexity of the method outweighs any gain in learning rate over current training methods. However, the least-squares method lends itself to a more efficient implementation on distributed-memory parallel computers than do standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors of an Intel iPSC/2 multicomputer. Two applications are given that demonstrate the faster real-time learning rate of the least-squares method over that of gradient descent.
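For reference, the standard form of the Marquardt-Levenberg weight update is sketched below; this is the generic textbook formulation, not the paper's specific recursive variant, and the symbols J, e, w, and \mu are generic rather than taken from the paper:

\Delta w \;=\; -\left( J^{\top} J + \mu I \right)^{-1} J^{\top} e

Here J is the Jacobian of the output error vector e with respect to the weights w, and \mu is a damping parameter: for large \mu the step approaches scaled gradient descent, while for small \mu it approaches a Gauss-Newton step. Forming and factoring J^{\top} J is what makes each iteration more expensive than a plain gradient-descent step \Delta w = -\eta \nabla E(w). At the same time, when the training data are partitioned across processors, J^{\top} J and J^{\top} e decompose into per-processor partial sums that can be combined with a single reduction, which is consistent with the distributed-memory efficiency the abstract describes.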
