
A sensitivity-based approach for pruning architecture of Madalines

  • Original Article
  • Published in Neural Computing and Applications

Abstract

Architecture design is an important issue in neural network research. One popular way to find a proper network size is to prune an oversized trained network to a smaller one while preserving its established performance. This paper presents a sensitivity-based approach for pruning hidden Adalines from a Madaline with as little performance loss as possible, so that the loss is easy to compensate for. The approach is novel in establishing a relevance measure, built on a sensitivity measure for Adalines, to locate the least relevant Adaline in a Madaline. The sensitivity measure is the probability of an Adaline's output inverting due to input variation, taken over all input patterns, and the relevance measure is defined as the product of an Adaline's sensitivity value and the sum of the absolute values of its outgoing weights. Based on the relevance measure, a pruning algorithm can be simply programmed: it iteratively prunes the Adaline with the least relevance value from the hidden layer of a given Madaline and then applies some compensation, until no more Adalines can be removed under a given performance requirement. The effectiveness of the pruning approach is verified by experimental results.
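The relevance measure and pruning loop described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's exact algorithm: the sensitivity here is estimated by Monte Carlo input perturbation rather than the paper's analytical probability, the noise level and trial count are arbitrary choices, and the paper's compensation step after each removal is omitted.

```python
import numpy as np

def hidden_outputs(X, W, b):
    """Hard-limit (sign) outputs of the hidden Adalines for input patterns X."""
    return np.where(X @ W + b >= 0.0, 1.0, -1.0)

def predict(X, W, b, V, c):
    """Madaline output: hidden Adaline outputs fed through the output layer (V, c)."""
    H = hidden_outputs(X, W, b)
    return np.where(H @ V + c >= 0.0, 1.0, -1.0)

def sensitivity(X, W, b, noise=0.05, trials=20, rng=None):
    """Estimate each hidden Adaline's sensitivity: the probability that its
    output inverts under small input variations, averaged over all patterns.
    (Monte Carlo stand-in for the paper's analytically derived probability.)"""
    rng = np.random.default_rng(rng)
    base = hidden_outputs(X, W, b)
    flips = np.zeros(W.shape[1])
    for _ in range(trials):
        Xp = X + rng.normal(0.0, noise, size=X.shape)
        flips += (hidden_outputs(Xp, W, b) != base).mean(axis=0)
    return flips / trials

def relevance(X, W, b, V):
    """Relevance of each hidden Adaline: its sensitivity multiplied by the
    sum of the absolute values of its outgoing weights."""
    return sensitivity(X, W, b) * np.abs(V).sum(axis=1)

def prune_madaline(X, y, W, b, V, c, min_acc=0.95):
    """Repeatedly delete the least-relevant hidden Adaline while accuracy on
    (X, y) stays at or above min_acc. (Compensation/retraining omitted.)"""
    acc = lambda W_, b_, V_: (predict(X, W_, b_, V_, c) == y).mean()
    while W.shape[1] > 1:
        k = int(np.argmin(relevance(X, W, b, V)))   # least relevant Adaline
        W2 = np.delete(W, k, axis=1)                # drop its incoming weights
        b2 = np.delete(b, k)                        # drop its bias
        V2 = np.delete(V, k, axis=0)                # drop its outgoing weights
        if acc(W2, b2, V2) < min_acc:
            break                                   # performance requirement violated
        W, b, V = W2, b2, V2
    return W, b, V
```

Because both layers use hard-limit activations, a small input perturbation only changes the network's output when some Adaline's weighted sum crosses zero, which is why the flip probability is a natural per-unit sensitivity.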



Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 60571048 and Grant 60673186.

Author information


Corresponding author

Correspondence to Xiaoqin Zeng.


About this article

Cite this article

Zeng, X., Shao, J., Wang, Y. et al. A sensitivity-based approach for pruning architecture of Madalines. Neural Comput & Applic 18, 957–965 (2009). https://doi.org/10.1007/s00521-008-0222-2


