Neurodynamics of learning and network performance
Charles L. Wilson, James L. Blue, Omid M. Omidvar
Abstract
A simple dynamic model of a neural network is presented. Using this dynamic model, we improve the performance of a three-layer multilayer perceptron (MLP). The dynamic model of the MLP is used to make four fundamental changes in the network optimization strategy: (1) neuron activation functions are used that reduce the probability of singular Jacobians; (2) successive regularization is used to constrain the volume of the weight space being minimized; (3) Boltzmann pruning is used to constrain the dimension of the weight space; and (4) prior class probabilities are used to normalize all error calculations, so that statistically significant samples of rare but important classes can be included without distortion of the error surface. All four changes are made in the inner loop of a conjugate gradient optimization iteration and are intended to simplify the training dynamics of the optimization. On handprinted-digit and fingerprint classification problems, these modifications improve error-reject performance by factors between 2 and 4 and reduce network size by 40 to 60%.
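Two of the four modifications lend themselves to a compact illustration. The sketch below, in Python with NumPy, shows one plausible reading of Boltzmann pruning (weights are zeroed with a probability that falls off with their squared magnitude, governed by a temperature) and of prior-probability error normalization (each sample's error is scaled by the inverse of its class prior). The function names, the exact Boltzmann criterion, and the weighting scheme are assumptions for illustration, not the authors' published formulation.

```python
import numpy as np

# Illustrative sketch only: the paper does not publish code, so the function
# names, the exact pruning rule, and the error weighting below are assumptions
# meant to convey the flavor of the two techniques.

def boltzmann_prune(weights, temperature, rng):
    """Zero each weight with probability exp(-w^2 / T) (assumed form).

    Small weights are pruned with high probability; large weights survive.
    This stochastically constrains the dimension of the weight space.
    """
    prune_prob = np.exp(-weights**2 / temperature)
    keep_mask = rng.random(weights.shape) >= prune_prob
    return weights * keep_mask

def prior_weighted_error(outputs, targets, class_ids, class_priors):
    """Mean squared error with each sample weighted by 1 / P(class).

    Rare classes then contribute on an equal statistical footing, so an
    enriched sample of an important rare class does not distort the error
    surface (again, one assumed realization of the paper's idea).
    """
    per_sample = np.sum((outputs - targets) ** 2, axis=1)
    return np.mean(per_sample / class_priors[class_ids])

# Example: prune a small weight matrix, then evaluate a prior-weighted error.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 3))
W_pruned = boltzmann_prune(W, temperature=0.1, rng=rng)

outputs = rng.random((5, 3))
targets = np.eye(3)[[0, 1, 2, 0, 1]]          # one-hot targets
class_ids = np.array([0, 1, 2, 0, 1])
class_priors = np.array([0.7, 0.25, 0.05])    # class 2 is rare
err = prior_weighted_error(outputs, targets, class_ids, class_priors)
```

In the paper's scheme both steps would sit inside the conjugate gradient inner loop, so pruning and the reweighted error jointly shape the trajectory of the optimization rather than being applied as post-processing.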
Charles L. Wilson, James L. Blue, and Omid M. Omidvar "Neurodynamics of learning and network performance," Journal of Electronic Imaging 6(3), (1 July 1997). https://doi.org/10.1117/12.272656
Published: 1 July 1997
KEYWORDS
Neural networks, Supercontinuum generation, Dynamical systems, Error analysis, Binary data, Fractal analysis, Mathematical modeling