
Comparisons of Single- and Multiple-Hidden-Layer Neural Networks

  • Conference paper
Advances in Neural Networks – ISNN 2011 (ISNN 2011)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6675)


Abstract

In this study we conduct fair and systematic comparisons of two types of neural networks: single- and multiple-hidden-layer networks. For fair comparisons, we ensure that the two types use the same activation and output functions and have the same numbers of nodes, feedforward connections, and parameters. The networks are trained by the gradient descent algorithm to approximate linear and quadratic target functions, and we examine their convergence properties. We show that, in both the linear and quadratic cases, single-hidden-layer networks tolerate a wider range of learning rates than multiple-hidden-layer networks. We also show that single-hidden-layer networks converge to linear target functions faster than multiple-hidden-layer networks.
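To make the comparison concrete, below is a minimal sketch (not the paper's code) of the kind of experiment the abstract describes: a single-hidden-layer network and a two-hidden-layer network, each with sigmoid hidden units and a linear output, trained by batch gradient descent to approximate a linear target function. The layer sizes, target function f(x) = 2x + 1, learning rate, and step count are illustrative assumptions; the paper matches node, connection, and parameter counts exactly, which this sketch does not attempt.

```python
import numpy as np

# Minimal sketch (not the paper's code): compare batch gradient descent on a
# single-hidden-layer network and a two-hidden-layer network, both with
# sigmoid hidden units and a linear output, approximating a linear target.
# Layer sizes, target function, learning rate, and step count are
# illustrative assumptions only.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(layer_sizes):
    # Small random weights, zero biases, one (W, b) pair per layer.
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    # Returns the list of layer activations; sigmoid on hidden layers,
    # identity on the output layer.
    acts = [x]
    for i, (W, b) in enumerate(params):
        z = acts[-1] @ W + b
        acts.append(z if i == len(params) - 1 else sigmoid(z))
    return acts

def train(layer_sizes, lr, steps=2000):
    params = init_params(layer_sizes)
    x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
    y = 2.0 * x + 1.0                      # linear target function (assumed)
    for _ in range(steps):
        acts = forward(params, x)
        delta = (acts[-1] - y) / len(x)    # gradient of 0.5 * MSE at output
        grads = []
        for i in range(len(params) - 1, -1, -1):
            W, _ = params[i]
            grads.append((acts[i].T @ delta, delta.sum(axis=0)))
            if i > 0:                      # backpropagate through sigmoid
                delta = (delta @ W.T) * acts[i] * (1.0 - acts[i])
        grads.reverse()
        params = [(W - lr * dW, b - lr * db)
                  for (W, b), (dW, db) in zip(params, grads)]
    return float(np.mean((forward(params, x)[-1] - y) ** 2))

# One hidden layer of 6 units vs. two hidden layers of 3 units each
# (illustrative sizes; the paper matches node and parameter counts exactly).
for sizes in ([1, 6, 1], [1, 3, 3, 1]):
    print(sizes, "final MSE:", train(sizes, lr=0.5))
```

Varying lr in the sketch gives a rough, informal sense of the learning-rate sensitivity the paper quantifies; it is not a substitute for the controlled comparisons reported there.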






Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Nakama, T. (2011). Comparisons of Single- and Multiple-Hidden-Layer Neural Networks. In: Liu, D., Zhang, H., Polycarpou, M., Alippi, C., He, H. (eds) Advances in Neural Networks – ISNN 2011. ISNN 2011. Lecture Notes in Computer Science, vol 6675. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-21105-8_32


  • DOI: https://doi.org/10.1007/978-3-642-21105-8_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-21104-1

  • Online ISBN: 978-3-642-21105-8

  • eBook Packages: Computer Science (R0)
