
Neurocomputing

Volume 10, Issue 1, January 1996, Pages 7-31

Stabilization and speedup of convergence in training feedforward neural networks

https://doi.org/10.1016/0925-2312(94)00026-3

Abstract

We review the training problem for feedforward neural networks and discuss various techniques for accelerating and stabilizing the convergence during training. Among other techniques, these include a self-adjusting step gain, bipolar sigmoid activation functions, training on all classes in parallel, adjusting the exponential rates in the sigmoids, bounding the sigmoid derivatives away from zero, training on exemplars to which noise has been added, adjusting the initial weight set to a subdomain of low values of the sum-squared error, and adjusting the momentum coefficient over the iterations. We also examine methods to assure the generalization of the learning, which include the pruning of unimportant weights and adding noise to exemplars for training.
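Several of the acceleration heuristics listed in the abstract can be illustrated together in a single delta-rule update. The following is a minimal single-layer sketch, not the paper's algorithm: the function names, the derivative floor eps, the noise level, and the gain-adjustment factors (1.05 up, 0.7 down) are assumptions chosen only for illustration.

import numpy as np

def bipolar_sigmoid(x, alpha=1.0):
    """Bipolar sigmoid with adjustable exponential rate alpha."""
    return 2.0 / (1.0 + np.exp(-alpha * x)) - 1.0

def bipolar_sigmoid_deriv(y, alpha=1.0, eps=0.1):
    """Derivative expressed via the output y, bounded away from zero by eps
    so the weight updates do not stall in the flat tails of the sigmoid."""
    d = 0.5 * alpha * (1.0 - y * y)
    return np.maximum(d, eps)

def train_single_layer(X, T, n_epochs=200, eta=0.1, mu=0.5,
                       alpha=1.0, eps=0.1, noise_std=0.05, rng=None):
    """Single-layer sketch combining: noise-injected exemplars, a bounded
    sigmoid derivative, a self-adjusting step gain eta, and a momentum
    coefficient that is adjusted over the iterations."""
    rng = np.random.default_rng() if rng is None else rng
    n_in, n_out = X.shape[1], T.shape[1]
    W = rng.uniform(-0.5, 0.5, size=(n_in, n_out))   # small initial weights
    dW_prev = np.zeros_like(W)
    prev_sse = np.inf
    for epoch in range(n_epochs):
        Xn = X + rng.normal(0.0, noise_std, size=X.shape)  # noisy exemplars
        Y = bipolar_sigmoid(Xn @ W, alpha)
        E = T - Y
        sse = float(np.sum(E * E))
        # Self-adjusting step gain: grow eta after an improvement, shrink otherwise.
        eta = eta * 1.05 if sse < prev_sse else eta * 0.7
        prev_sse = sse
        # Momentum coefficient ramped up over the iterations rather than held fixed.
        mu_t = min(0.9, mu + 0.4 * epoch / n_epochs)
        grad = Xn.T @ (E * bipolar_sigmoid_deriv(Y, alpha, eps))
        dW = eta * grad + mu_t * dW_prev
        W += dW
        dW_prev = dW
    return W

The step-gain rule here follows the common heuristic of increasing the gain while the sum-squared error keeps falling and cutting it back sharply after an increase; the specific factors are placeholders, not values from the paper.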

Cited by (28)

    • Neural network classifier optimization using Differential Evolution with Global Information and Back Propagation algorithm for clinical datasets

      2016, Applied Soft Computing Journal
      Citation excerpt:

      The training outputs of the NN are entirely dependent on the initial weights [5–8]. The local search with faster convergence of ANN for classification has been improved by various researchers [9–11]. Particle Swarm Optimization (PSO), developed by Kennedy and Eberhart [12,13], can be applied to overcome the local minima problem occurring in optimization problems.

    • Radial basis functional link nets and fuzzy reasoning

      2002, Neurocomputing
      Citation excerpt:

      Table 4 shows the results of training on the same data with an MLP. We used the more efficient fullpropagation [20,21] mode rather than the epochal mode of backpropagation. The en route technique adjusted the learning rates η1 and η2 for the training of weights at the hidden and output neurodes, respectively.
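The excerpt above refers to separate learning rates η1 and η2 for the hidden- and output-layer weights. A minimal per-layer-rate backpropagation step, sketched here under assumed names and defaults (it is not the fullpropagation mode mentioned in the excerpt), could look like:

import numpy as np

def mlp_step(X, T, W1, W2, eta1=0.05, eta2=0.01, alpha=1.0):
    """One gradient-descent step on sum-squared error for a one-hidden-layer MLP,
    with separate learning rates eta1 (hidden weights) and eta2 (output weights)."""
    H = np.tanh(alpha * (X @ W1))        # hidden-layer bipolar activations
    Y = np.tanh(alpha * (H @ W2))        # output-layer activations
    E = T - Y                            # error over the exemplar batch
    delta2 = E * (alpha * (1.0 - Y * Y))             # output-layer deltas
    delta1 = (delta2 @ W2.T) * (alpha * (1.0 - H * H))  # hidden-layer deltas
    W2 += eta2 * H.T @ delta2            # output weights use their own rate
    W1 += eta1 * X.T @ delta1            # hidden weights use a different rate
    return W1, W2, float(np.sum(E * E))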
