The Essential Approximation Order for Neural Networks with Trigonometric Hidden Layer Units

  • Conference paper
Advances in Neural Networks - ISNN 2006

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3971)


Abstract

There have been various studies of the approximation ability of feedforward neural networks. Existing studies, however, concern only density results or upper bound estimates of how well a multivariate function can be approximated by such networks, and consequently they cannot reveal the essential approximation ability of the networks. In this paper, by establishing both upper and lower bound estimates of the approximation order, the essential approximation ability of a class of feedforward neural networks with trigonometric hidden layer units is characterized in terms of the second order modulus of smoothness of the approximated function.
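For readers unfamiliar with the quantity the result is phrased in, the second order modulus of smoothness is the standard one given below; the two-sided estimate that follows it is only a schematic of the shape of such an "essential order" result (the norm, the network class N_n, and the constants C_1, C_2 are placeholders, not the paper's exact statement):

```latex
% Second order modulus of smoothness (standard definition):
\omega_2(f, t) = \sup_{0 < h \le t} \, \sup_{x} \bigl| f(x + h) - 2 f(x) + f(x - h) \bigr|

% Schematic shape of a two-sided approximation-order estimate, where
% \mathcal{N}_n denotes networks with n trigonometric hidden layer units:
C_1 \, \omega_2\!\left(f, \tfrac{1}{n}\right)
  \;\le\; \inf_{N \in \mathcal{N}_n} \| f - N \|
  \;\le\; C_2 \, \omega_2\!\left(f, \tfrac{1}{n}\right)
```

As a rough numerical illustration (not the paper's construction), the sketch below fits a single-hidden-layer network with cosine units to a target of limited smoothness by least squares and reports the sup-norm error as the number of hidden units grows. The helper names `fit_trig_network` and `network`, the integer-frequency choice k = 0..n, and the target function are all illustrative assumptions:

```python
import numpy as np

def fit_trig_network(f, n, m=512):
    """Least-squares fit of c so that sum_k c[k]*cos(k*x) ~ f(x) on [0, 2*pi].

    Illustrative only: integer frequencies k = 0..n stand in for generic
    trigonometric hidden layer units; the paper's construction may differ.
    """
    x = np.linspace(0.0, 2.0 * np.pi, m)
    # Design matrix: one column per hidden unit cos(k*x), k = 0..n.
    A = np.cos(np.outer(x, np.arange(n + 1)))
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return c

def network(c, x):
    """Evaluate the fitted network N(x) = sum_k c[k]*cos(k*x)."""
    return np.cos(np.outer(x, np.arange(len(c)))) @ c

if __name__ == "__main__":
    # An even, 2*pi-periodic target of limited smoothness, so cosine units
    # suffice and the error decay is governed by the target's smoothness.
    f = lambda x: np.abs(np.cos(x)) ** 1.5
    xs = np.linspace(0.0, 2.0 * np.pi, 4096)
    for n in (4, 8, 16, 32):
        c = fit_trig_network(f, n)
        err = np.max(np.abs(network(c, xs) - f(xs)))
        print(f"n = {n:3d} hidden units, sup-norm error ~ {err:.3e}")
```

The observed decay of the sup-norm error with n is consistent with bounds phrased in terms of a modulus of smoothness: the smoother the target, the faster omega_2(f, 1/n) tends to zero.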

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ding, C., Cao, F., Xu, Z. (2006). The Essential Approximation Order for Neural Networks with Trigonometric Hidden Layer Units. In: Wang, J., Yi, Z., Zurada, J.M., Lu, B.L., Yin, H. (eds.) Advances in Neural Networks - ISNN 2006. ISNN 2006. Lecture Notes in Computer Science, vol 3971. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11759966_11

  • DOI: https://doi.org/10.1007/11759966_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34439-1

  • Online ISBN: 978-3-540-34440-7

  • eBook Packages: Computer Science, Computer Science (R0)
