
Estimating the Number of Hidden Neurons in a Feedforward Network Using the Singular Value Decomposition

  • Conference paper
Advances in Neural Networks - ISNN 2006 (ISNN 2006)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3971)


Abstract

We attempt to quantify the significance of increasing the number of neurons in the hidden layer of a feedforward neural network using the singular value decomposition (SVD). In doing so, we extend some well-known properties of the SVD to evaluate the generalizability of single hidden layer feedforward networks (SLFNs) with respect to the number of hidden neurons. The generalization capability of an SLFN is measured by the degree of linear independence of the patterns in hidden-layer space, which can be quantified indirectly from the singular values obtained by applying the SVD in a post-learning step.
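The post-learning step sketched in the abstract can be illustrated as follows: collect the hidden-layer activations of a trained SLFN into a matrix (one row per training pattern, one column per hidden neuron), compute its singular values, and count those above a tolerance to estimate how many hidden neurons contribute linearly independent directions. The network below is a randomly initialized stand-in, and the tolerance rule is NumPy's default rank threshold; both are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_patterns, n_inputs, n_hidden = 200, 5, 15

# Stand-in for a trained SLFN's input-to-hidden mapping (hypothetical weights).
X = rng.standard_normal((n_patterns, n_inputs))   # training patterns
W = rng.standard_normal((n_inputs, n_hidden))     # input-to-hidden weights
b = rng.standard_normal(n_hidden)                 # hidden biases

# Hidden-layer output matrix: row i holds the activations of pattern i.
H = np.tanh(X @ W + b)

# Singular values of H, in descending order.
s = np.linalg.svd(H, compute_uv=False)

# Count singular values above a tolerance (NumPy's default rank threshold)
# as a proxy for the number of linearly independent hidden-layer directions.
tol = max(H.shape) * np.finfo(H.dtype).eps * s[0]
effective_rank = int(np.sum(s > tol))

print(effective_rank)
```

If `effective_rank` falls well below the number of hidden neurons, the extra neurons add little linear independence in hidden-layer space, suggesting the layer is larger than the data warrants.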




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Teoh, E.J., Xiang, C., Tan, K.C. (2006). Estimating the Number of Hidden Neurons in a Feedforward Network Using the Singular Value Decomposition. In: Wang, J., Yi, Z., Zurada, J.M., Lu, BL., Yin, H. (eds) Advances in Neural Networks - ISNN 2006. ISNN 2006. Lecture Notes in Computer Science, vol 3971. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11759966_126


  • DOI: https://doi.org/10.1007/11759966_126

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34439-1

  • Online ISBN: 978-3-540-34440-7

  • eBook Packages: Computer Science (R0)
