Original contribution
Back-propagation algorithm which varies the number of hidden units

https://doi.org/10.1016/0893-6080(91)90032-Z

Abstract

This report presents a back-propagation algorithm that varies the number of hidden units. The algorithm is expected to escape local minima and removes the need to decide the number of hidden units in advance. We tested this algorithm on two examples: exclusive-OR learning and 8 × 8 dot alphanumeric font learning. In both examples, the probability of becoming trapped in local minima was reduced. Furthermore, in alphanumeric font learning, the network converged two to three times faster than with conventional back-propagation.
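The abstract only outlines the scheme, so a minimal sketch of the growing phase is given below, assuming a single hidden layer of sigmoid units trained on the paper's exclusive-OR example. The learning rate, the check interval, and the stall threshold (min_improvement) are illustrative assumptions rather than values from the paper, and only the addition of hidden units is sketched; the removal of surplus units after convergence is omitted.

```python
# Minimal sketch (not the authors' code): grow a single hidden layer during
# back-propagation whenever the training error stops decreasing.
# Learning rate, check interval, and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Exclusive-OR training set (one of the two examples in the paper)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def train_growing(X, T, lr=0.5, check_every=100, min_improvement=1e-3,
                  target_error=0.01, max_hidden=10, max_epochs=50000):
    n_in, n_out = X.shape[1], T.shape[1]
    n_hid = 1                                    # start with a single hidden unit
    W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
    W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
    prev_error = np.inf
    for epoch in range(1, max_epochs + 1):
        # forward pass
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        error = 0.5 * np.sum((T - Y) ** 2)
        if error < target_error:
            return W1, W2, epoch
        # back-propagation (batch gradient descent)
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY
        W1 -= lr * X.T @ dH
        # periodically check progress; if the error has stalled, add one
        # hidden unit with small random weights and continue training
        if epoch % check_every == 0:
            if prev_error - error < min_improvement and n_hid < max_hidden:
                W1 = np.hstack([W1, rng.normal(scale=0.5, size=(n_in, 1))])
                W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, n_out))])
                n_hid += 1
            prev_error = error
    return W1, W2, max_epochs

W1, W2, epochs = train_growing(X, T)
print(f"converged with {W1.shape[1]} hidden unit(s) after {epochs} epochs")
```

The point of the sketch is simply that a stalled error curve is treated as a sign of a local minimum or insufficient capacity, and hidden units are added on the fly instead of being fixed before training.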


Cited by (407)

  • Prediction of blood screening parameters for preliminary analysis using neural networks

    2022, Predictive Modeling in Biomedical Data Mining and Analysis
  • Predicting rock displacement in underground mines using improved machine learning-based models

    2022, Measurement: Journal of the International Measurement Confederation
  • High speed and reconfigurable optronic neural network with digital nonlinear activation

    2021, Optik
    Citation Excerpt:

    Compared to the opto-electrical hybrid neural network that relies on a digital neural network [30,31], our architecture makes full use of the parallel processing ability of light; the whole system has lower spatial complexity and is reconfigurable. As shown in Fig. 1(a), the neural network realizes its function through the cross connection of neurons and by means of the back-propagation algorithm [32]. Its core operations include matrix multiplication, activation, and the objective function.

  • Towards a mathematical framework to inform neural network modelling via polynomial regression

    2021, Neural Networks
    Citation Excerpt:

    However, neural networks present several problems. Choosing the hyperparameters of NNs still depends mostly on exploratory trial and error, whether for the learning algorithm parameters (Bengio, 2012) or for the network topology, such as the number of layers, the number of hidden units per layer, or their connections (Hirose et al., 1991; Ma & Khorasani, 2004; Weymaere & Martens, 1994); genetic algorithms have been explored as one approach to this problem (Leung et al., 2003). Another problem is that neural networks do not directly provide an estimate of the uncertainty in their predictions, which is of crucial importance in many of their applications, such as flood prediction (Tiwari & Chatterjee, 2010), wind power forecasting (Wan et al., 2014), or molecular and atomic predictions (Musil et al., 2019).
