The Effect of Bottlenecks on Generalisation in Backpropagation Neural Networks

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6444)

Abstract

Many modifications have been proposed to improve back-propagation's convergence time and generalisation capability. Typical techniques involve pruning hidden neurons, adding noise to hidden neurons that fail to learn, and reducing training set size. In this paper, we compare the performance of these modifications across many situations, including some for which they were not designed. Seven well-known UCI datasets were used, differing in dimensionality, size and number of outliers. Our experiments show that some modifications markedly decrease the network's convergence time and improve its generalisation capability, while others perform much the same as unmodified back-propagation. We also seek a combination of modifications that outperforms any single modification.
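
Of the techniques the abstract names, the noise-injection idea lends itself to a short illustration. The following is a minimal sketch, not the paper's implementation: a one-hidden-layer back-propagation network in which hidden units whose incoming weight gradients stay below a threshold (read here as units that "do not learn") are perturbed with Gaussian noise. The stall criterion and the parameters `stall_threshold` and `noise_scale` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed details, not the paper's exact method):
# backprop with noise injection into stalled hidden units.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, n_hidden=8, lr=0.5, epochs=2000,
          stall_threshold=1e-4, noise_scale=0.05):
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden))   # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1))      # hidden -> output weights
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass (squared error, sigmoid derivatives)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        dW1 = X.T @ d_h / len(X)
        # hidden units whose incoming gradients barely move are "stalled";
        # perturb their incoming weights with Gaussian noise so they can
        # escape the plateau (illustrative criterion, assumed here)
        stalled = np.abs(dW1).mean(axis=0) < stall_threshold
        W1[:, stalled] += rng.normal(0, noise_scale, (n_in, stalled.sum()))
        # standard gradient updates
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * dW1
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2

# toy usage: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
params = train(X, y)
```

The criterion for "not learning" used above (mean absolute incoming gradient below a threshold) is one plausible reading; the techniques compared in the paper may use different stall measures and noise schedules.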

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zang, X. (2010). The Effect of Bottlenecks on Generalisation in Backpropagation Neural Networks. In: Wong, K.W., Mendis, B.S.U., Bouzerdoum, A. (eds) Neural Information Processing. Models and Applications. ICONIP 2010. Lecture Notes in Computer Science, vol 6444. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17534-3_21

  • DOI: https://doi.org/10.1007/978-3-642-17534-3_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17533-6

  • Online ISBN: 978-3-642-17534-3

  • eBook Packages: Computer Science (R0)
