Abstract.
In this paper, an additional entropy penalty term is used to steer hidden node activations during learning. A minimum-entropy state is one in which most nodes operate in the non-linear zones (i.e. the saturation zones) near the extreme ends of the sigmoid curve. As training proceeds, the activations of redundant hidden nodes are pushed towards their extreme values, corresponding to a low-entropy state with maximum information, while the relevant nodes remain active in the linear zone; more and more nodes thus enter the saturation zones. Creating such nodes too early, however, may impair generalization performance. To prevent the network from being driven into saturation before it has really learned, an entropy cycle is proposed in this paper to dampen the creation of these inactive nodes in the early stage of training. At the end of training, the inactive nodes can be eliminated without affecting the performance of the original network. The concept has been successfully applied to pruning in two classification problems; the experiments indicate that redundant nodes are pruned, resulting in optimal network topologies.
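The abstract does not give the exact form of the penalty or of the entropy cycle, so the following is only a minimal sketch of the general idea in PyTorch: a binary-entropy penalty on sigmoid hidden activations, combined with a hypothetical warm-up schedule standing in for the entropy cycle. The function names, the schedule, and all hyperparameters here are illustrative assumptions, not the authors' method.

import torch
import torch.nn as nn

def activation_entropy(a, eps=1e-7):
    # Binary (Shannon) entropy of each sigmoid activation a in (0, 1);
    # it is maximal at a = 0.5 (linear zone) and minimal near 0 or 1
    # (saturation zones), so penalizing it pushes nodes to saturate.
    a = a.clamp(eps, 1 - eps)
    return -(a * torch.log(a) + (1 - a) * torch.log(1 - a)).mean()

class MLP(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))  # hidden activations in (0, 1)
        return self.out(h), h

def entropy_coefficient(epoch, warmup=50, lam_max=0.1):
    # Hypothetical stand-in for the entropy cycle: keep the penalty off
    # early so the network can learn the task first, then ramp it up to
    # drive redundant nodes into saturation for later pruning.
    return lam_max * min(1.0, max(0.0, (epoch - warmup) / warmup))

# Training step: task loss plus the scheduled entropy penalty.
model = MLP(4, 16, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 4), torch.randint(0, 3, (32,))
for epoch in range(200):
    logits, h = model(x)
    loss = nn.functional.cross_entropy(logits, y) \
        + entropy_coefficient(epoch) * activation_entropy(h)
    opt.zero_grad(); loss.backward(); opt.step()

After training under such a penalty, hidden nodes whose activations sit at an extreme value for essentially all inputs contribute a constant (absorbable into the next layer's bias) and can be pruned, which is the effect the paper reports on its two classification problems.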
Received 3 October 1998 / Revised 14 April 1999 / Accepted in revised form 20 November 1999
Ng, G., Chan, K., Erdogan, S. et al. Neural Network Learning Using Entropy Cycle. Knowledge and Information Systems 2, 53–72 (2000). https://doi.org/10.1007/s101150050003