Bayesian regularization in constructive neural networks

  • Poster Presentations 1
  • Conference paper
Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1112)

Included in the following conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

In this paper, we study the incorporation of Bayesian regularization into constructive neural networks. The degree of regularization is controlled automatically within the Bayesian inference framework and hence requires no manual setting. Simulations show that regularization with input training under a full Bayesian approach produces networks with better generalization performance and lower susceptibility to over-fitting as network size increases. Regularization with input training under MacKay's evidence framework, however, does not produce significant improvement on the problems tested.
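For readers unfamiliar with the evidence framework named above, the following equations sketch the standard hyperparameter re-estimation of MacKay [9, 10]; these are the general evidence-framework formulas, not this paper's specific constructive-network derivation. For a regularized objective

$$M(\mathbf{w}) = \beta E_D(\mathbf{w}) + \alpha E_W(\mathbf{w}), \qquad E_W(\mathbf{w}) = \tfrac{1}{2}\|\mathbf{w}\|^2,$$

the hyperparameters are re-estimated at the most probable weights $\mathbf{w}_{\mathrm{MP}}$ via

$$\gamma = \sum_{i=1}^{k} \frac{\lambda_i}{\lambda_i + \alpha}, \qquad \alpha^{\mathrm{new}} = \frac{\gamma}{2\,E_W(\mathbf{w}_{\mathrm{MP}})}, \qquad \beta^{\mathrm{new}} = \frac{N - \gamma}{2\,E_D(\mathbf{w}_{\mathrm{MP}})},$$

where the $\lambda_i$ are the eigenvalues of $\beta\,\nabla\nabla E_D(\mathbf{w}_{\mathrm{MP}})$, $N$ is the number of training examples, and $\gamma$ counts the well-determined parameters. Iterating these updates to a self-consistent fixed point is what removes the need to set the degree of regularization by hand.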

References

  1. W.L. Buntine and A.S. Weigend. Bayesian back-propagation. Complex Systems, 5:603–643, 1991.

  2. G.P. Drago and S. Ridella. Convergence properties of cascade correlation in function approximation. Neural Computing & Applications, 2:142–147, 1994.

  3. S.E. Fahlman and C. Lebiere. The cascade-correlation learning architecture. In D.S. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 524–532. Morgan Kaufmann, Los Altos CA, 1990.

  4. E. Fiesler. Comparative bibliography of ontogenic neural networks. In Proceedings of the International Conference on Artificial Neural Networks, volume 1, pages 793–796, Sorrento, Italy, May 1994.

  5. L.K. Hansen and M.W. Pedersen. Controlled growth of cascade correlation nets. In Proceedings of the International Conference on Artificial Neural Networks, volume 1, pages 797–800, Sorrento, Italy, May 1994.

  6. J.N. Hwang, S.R. Lay, M. Maechler, D. Martin, and J. Schimert. Regression modeling in back-propagation and projection pursuit learning. IEEE Transactions on Neural Networks, 5(3):342–353, May 1994.

  7. V. Kurková and B. Beliczynski. Incremental approximation by one-hidden-layer neural networks. In Proceedings of the International Conference on Artificial Neural Networks, volume 1, pages 505–510, Paris, France, October 1995.

  8. T.Y. Kwok and D.Y. Yeung. Objective functions for training new hidden units in constructive neural networks, 1995. Submitted.

  9. D.J.C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, May 1992.

  10. D.J.C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, May 1992.

Author information

Authors: T.Y. Kwok and D.Y. Yeung

Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kwok, TY., Yeung, DY. (1996). Bayesian regularization in constructive neural networks. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_95

  • DOI: https://doi.org/10.1007/3-540-61510-5_95

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2

  • eBook Packages: Springer Book Archive
