Neural Network Learning Using Entropy Cycle

  • Original Paper
  • Published in: Knowledge and Information Systems

Abstract

In this paper, an additional entropy penalty term is used to steer the direction of the hidden nodes' activations during learning. A state of minimum entropy means that most nodes are operating in the non-linear zones (i.e. the saturation zones) near the extreme ends of the sigmoid curve. As training proceeds, the activations of redundant hidden nodes are pushed towards their extreme values, corresponding to a low-entropy state with maximum information, while some relevant nodes remain active in the linear zone. As training progresses further, more nodes move into the saturation zones. The early creation of such saturated nodes may impair generalization performance. To prevent the network from being driven into saturation before it can really learn, an entropy cycle is proposed to dampen the creation of such inactive nodes in the early stages of training. At the end of training, these inactive nodes can be eliminated without affecting the performance of the original network. The concept has been successfully applied to pruning in two classification problems. The experiments indicate that redundant nodes are pruned, resulting in optimal network topologies.
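To make the mechanism concrete, the sketch below (in Python) illustrates how an entropy penalty on sigmoid hidden activations could be cycled so that it is weak early in training and only later drives redundant nodes into the saturation zones. The abstract does not give the exact functional forms, so the binary-entropy penalty, the cosine-shaped cycle, and the parameters period and lam_max are assumptions made for illustration only, not the authors' formulation.

    import numpy as np

    def activation_entropy(a, eps=1e-12):
        # Binary entropy of sigmoid activations in (0, 1).
        # Low entropy <=> activations near 0 or 1, i.e. the saturation zones.
        a = np.clip(a, eps, 1.0 - eps)
        return -(a * np.log(a) + (1.0 - a) * np.log(1.0 - a))

    def entropy_cycle_weight(epoch, period=50, lam_max=0.1):
        # Hypothetical cyclic schedule for the penalty weight: it starts near
        # zero within each cycle, so the network is not pushed into saturation
        # before it has learned the task.
        phase = (epoch % period) / period
        return lam_max * 0.5 * (1.0 - np.cos(2.0 * np.pi * phase))

    def total_cost(task_error, hidden_activations, epoch):
        # Total cost = task error + cycled entropy penalty over the hidden nodes.
        lam = entropy_cycle_weight(epoch)
        return task_error + lam * activation_entropy(hidden_activations).mean()

One common justification for the pruning step, consistent with the abstract, is that a hidden node saturated at the same extreme for essentially all inputs behaves like a constant and can therefore be removed without changing the network's mapping.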

Additional information

Received 3 October 1998 / Revised 14 April 1999 / Accepted in revised form 20 November 1999

About this article

Cite this article

Ng, G., Chan, K., Erdogan, S. et al. Neural Network Learning Using Entropy Cycle. Knowledge and Information Systems 2, 53–72 (2000). https://doi.org/10.1007/s101150050003
