Neural Networks

Volume 9, Issue 4, June 1996, Pages 589-601
Contributed article
Accelerating backpropagation through dynamic self-adaptation

https://doi.org/10.1016/0893-6080(95)00144-1

Abstract

Standard backpropagation and many procedures derived from it use the steepest-descent method to minimize a cost function. In this paper, we present a new genetic algorithm, dynamic self-adaptation, to accelerate steepest descent as it is used in iterative procedures. The underlying idea is to take the learning rate of the previous step, to increase and decrease it slightly, to evaluate the cost function for both new values of the learning rate, and to choose the one that gives the lower value of the cost function. In this way, the algorithm adapts itself locally to the cost function landscape. We present a convergence proof, estimate the convergence rate, and test the algorithm on several hard problems. As compared to standard backpropagation, the convergence rate can be improved by several orders of magnitude. Furthermore, dynamic self-adaptation can also be applied to several parameters simultaneously, such as the learning rate and momentum.
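To make the update rule concrete, the sketch below implements the compare-and-pick step the abstract describes on a generic differentiable cost function. It is a minimal illustration, not the paper's implementation: the growth factor zeta, the quadratic test problem, and all identifiers are assumptions chosen for the example.

```python
# Minimal sketch of learning-rate self-adaptation as described in the
# abstract. The factor `zeta`, the test problem, and all names here are
# illustrative assumptions, not taken from the paper.
import numpy as np

def self_adapting_descent(cost, grad, w, eta=0.01, zeta=1.1, steps=100):
    """Steepest descent in which the learning rate adapts itself each step.

    Each iteration slightly increases (eta * zeta) and decreases
    (eta / zeta) the previous learning rate, evaluates the cost for both
    trial steps, and keeps the rate that yields the lower cost.
    """
    for _ in range(steps):
        g = grad(w)
        eta_up, eta_down = eta * zeta, eta / zeta
        # Pick the candidate learning rate whose trial step is cheaper.
        if cost(w - eta_up * g) < cost(w - eta_down * g):
            eta = eta_up
        else:
            eta = eta_down
        w = w - eta * g  # take the step with the winning rate
    return w, eta

# Usage on an ill-conditioned quadratic, where any fixed learning rate is
# either unstable or slow; the adapted rate settles near the stability limit.
A = np.diag([1.0, 100.0])
cost = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w_final, eta_final = self_adapting_descent(cost, grad, np.array([1.0, 1.0]))
print(w_final, eta_final)
```

The same compare-and-pick scheme extends to several parameters at once, as the abstract notes for the learning rate and momentum: perturb each parameter up and down and keep the combination that lowers the cost.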


Cited by (71)

  • Practical options for selecting data-driven or physics-based prognostics algorithms with reviews

    2015, Reliability Engineering and System Safety
    Citation Excerpt:

    For these reasons, there have been many efforts to remedy the drawbacks of the BPNN algorithm, such as a dynamic self-adaptation algorithm [55], a simulated annealing algorithm [56], a combined genetic and differential evolution algorithm [57], and a technique combining the conjugate gradient optimization algorithm with the BPNN algorithm [58]. There are many ensemble techniques to improve the performance of algorithms [59–64], as well as other efforts in Refs. [40,55,58]. While the aforementioned methods find the weight parameters in a deterministic sense, there have been probabilistic approaches based on the Bayesian learning technique [65,66], in which the weight parameters are obtained by using a sampling method.

  • The intensity change of urban development land: Implications for the city master plan of Guangzhou, China

    2014, Land Use Policy
    Citation Excerpt:

    It is reported that the application requires the forecasting result that the model supplies (Jiao et al., 2009). The Back Propagation (BP) neural network is one of the most widely used neural network models (Salomon and Hemmen, 1996). A BP neural network is a feed-forward multilayer network that trains the weights of differentiable nonlinear functions by back-propagating the error (Dong, 2005).

  • Suitability evaluation of urban construction land based on geo-environmental factors of Hangzhou, China

    2011, Computers and Geosciences
    Citation Excerpt:

    Neurons in the same layer are not connected to each other, but neurons in adjacent layers are connected by weights. The multilayer back-propagation algorithm, namely the BP algorithm, is the most widely used (Rumelhart et al., 1986; Salomon and Hemmen, 1996) because of its self-adaptive learning in a given environment. According to the urban geo-environment and the construction land characteristics of Hangzhou, a BP neural network model for geo-environmental suitability evaluation of urban construction land was established in this study (Fig. 5).

  • A Hybrid Neural Network for Predictive Model in A Plastic Injection Molding Process

    2022, International Journal of Intelligent Engineering and Systems