Stabilization and speedup of convergence in training feedforward neural networks
References (33)
G.E. Hinton, Connectionist learning procedures, Artificial Intelligence (1989)
Neural network training tips and techniques, IEEE AI Expert (Jan. 1991)
An empirical study of learning speed in backpropagation
Internal conflicts in neural networks
K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks (1989)
Y. LeCun, J.S. Denker, S.A. Solla, Optimal brain damage, Advances in Neural Information Processing Systems (1990)
The effect of initial weights on premature saturation in backpropagation learning
Méthode des moindres quarrés, pour trouver le milieu le plus probable entre les résultats de différentes observations [Method of least squares, for finding the most probable mean among the results of different observations], Mém. Inst. France (1810)
Acceleration of backpropagation through initial weight pre-training with delta rule
Stabilization and speedup of convergence in training feedforward neural networks
K. Matsuoka, Noise injection into inputs in backpropagation, IEEE Transactions on Systems, Man, and Cybernetics (1992)
Cited by (28)
TransMorph: Transformer for unsupervised medical image registration
2022, Medical Image Analysis
Neural network classifier optimization using Differential Evolution with Global Information and Back Propagation algorithm for clinical datasets
2016, Applied Soft Computing Journal
Citation Excerpt: The training outputs of the NN are entirely dependent on the initial weights [5–8]. The local search with faster convergence of ANN for classification has been improved by various researchers [9–11]. Particle Swarm Optimization (PSO), developed by Kennedy and Eberhart [12,13], can be applied to overcome the local minima problem occurring in any optimization problem. (A hedged sketch of PSO-based weight initialization follows this list.)
Stability analysis of a three-term backpropagation algorithm
2005, Neural Networks
Artificial neural network classification based on high-performance liquid chromatography of urinary and serum nucleosides for the clinical diagnosis of cancer
2002, Journal of Chromatography B: Analytical Technologies in the Biomedical and Life Sciences
Radial basis functional link nets and fuzzy reasoning
2002, Neurocomputing
Citation Excerpt: Table 4 shows the results of training on the same data with an MLP. We used the more efficient fullpropagation [20,21] mode rather than the epochal mode of backpropagation. The en route technique adjusted the learning rates η1 and η2 for the training of the weights at the hidden and output neurodes, respectively. (A sketch of per-layer learning rates follows this list.)
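The 2016 excerpt above notes that backpropagation outcomes depend heavily on the initial weights and that PSO can be used to escape poor basins. The following is a minimal sketch of that idea only, not the cited paper's algorithm: a standard global-best PSO searches over the flattened weights of a tiny one-hidden-layer network on a toy XOR task. The network size, swarm settings, and the helper `loss` are all assumptions made for illustration.

```python
# Illustrative sketch only: PSO over the initial weights of a tiny MLP.
# Hyperparameters and the toy XOR task are assumptions, not values from
# the cited 2016 paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-input XOR task so the script is self-contained.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 0.])

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1   # W1, b1, w2, b2 flattened

def loss(w):
    """Mean squared error of the network whose weights are the flat vector w."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    w2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # sigmoid output
    return np.mean((out - y) ** 2)

# Standard global-best PSO (assumed settings).
SWARM, STEPS, W_INERTIA, C1, C2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-1, 1, (SWARM, DIM))
vel = np.zeros((SWARM, DIM))
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(STEPS):
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("MSE at PSO-selected initial weights:", loss(gbest))
# gbest could now seed ordinary backpropagation in place of a purely
# random initialization, which is the point the excerpt is making.
```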
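The 2002 Neurocomputing excerpt above distinguishes learning rates η1 and η2 for the hidden- and output-layer weights. The sketch below illustrates only that simpler idea, per-pattern backpropagation with two distinct rates; it is not the cited papers' fullpropagation or en route algorithm, and all rates, sizes, and epoch counts are assumptions.

```python
# Illustrative sketch only: backpropagation with separate learning rates
# eta1 (hidden-layer weights) and eta2 (output-layer weights), updated per
# pattern. NOT the cited fullpropagation/en-route method; values assumed.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0., 1., 1., 0.])

W1 = rng.uniform(-0.5, 0.5, (2, 4)); b1 = np.zeros(4)   # hidden layer
w2 = rng.uniform(-0.5, 0.5, 4);      b2 = 0.0           # output neurode

eta1, eta2 = 0.5, 0.1   # distinct rates for hidden vs. output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    for x, t in zip(X, y):            # per-pattern (online) updates
        h = np.tanh(x @ W1 + b1)
        o = sigmoid(h @ w2 + b2)
        # Output-layer delta for squared error (o - t)^2 / 2
        delta2 = (o - t) * o * (1 - o)
        # Hidden-layer deltas, back-propagated through tanh
        delta1 = delta2 * w2 * (1 - h ** 2)
        w2 -= eta2 * delta2 * h
        b2 -= eta2 * delta2
        W1 -= eta1 * np.outer(x, delta1)
        b1 -= eta1 * delta1

print("outputs after training:",
      np.round(sigmoid(np.tanh(X @ W1 + b1) @ w2 + b2), 2))
```

Decoupling the two rates lets the slower-moving hidden layer take larger steps than the output layer (or vice versa); adapting them during training is the "en route" refinement the excerpt mentions.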
Copyright © 1996 Published by Elsevier B.V.