
Lower estimation of approximation rate for neural networks

  • Published in: Science in China Series F: Information Sciences

Abstract

Let \( SF_d \) be the set of periodic, Lebesgue square-integrable functions of \( d \) variables, and let \( \Pi_{\phi,n,d} = \left\{ \sum_{j=1}^{n} b_j \phi(\omega_j \cdot x + \theta_j) : b_j, \theta_j \in \mathbb{R},\ \omega_j \in \mathbb{R}^d \right\} \) be the set of feedforward neural network (FNN) functions with \( n \) hidden neurons and activation function \( \phi \). Denote by \( \operatorname{dist}(SF_d, \Pi_{\phi,n,d}) \) the deviation of the set \( SF_d \) from the set \( \Pi_{\phi,n,d} \). The main purpose of this paper is to estimate this deviation. In particular, based on Fourier transforms and approximation theory, a lower estimate for \( \operatorname{dist}(SF_d, \Pi_{\phi,n,d}) \) is proved, namely \( \operatorname{dist}(SF_d, \Pi_{\phi,n,d}) \geqslant \frac{C}{(n \log_2 n)^{1/2}} \). The obtained estimate depends only on the number of neurons in the hidden layer, and is independent of both the approximated target functions and the input dimension. It also reveals the relationship between the approximation rate of FNNs and the topology of the hidden layer.
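To illustrate how the stated lower bound behaves, the following minimal sketch (not part of the paper; the constant `C` is unspecified in the abstract and set to 1 here purely for illustration) evaluates \( C/(n \log_2 n)^{1/2} \) for increasing hidden-layer widths \( n \), showing the slow, dimension-independent decay of the best achievable approximation rate:

```python
import math

def lower_bound(n: int, C: float = 1.0) -> float:
    """Lower bound C / (n * log2(n))**(1/2) on dist(SF_d, Pi_{phi,n,d}).

    n : number of neurons in the hidden layer (n >= 2 so log2(n) > 0).
    C : absolute constant from the theorem; its value is not given in
        the abstract, so C = 1 is an illustrative placeholder.
    """
    if n < 2:
        raise ValueError("need n >= 2 so that log2(n) > 0")
    return C / math.sqrt(n * math.log2(n))

# The bound decays as the hidden layer widens, independently of the
# input dimension d and of the target function.
bounds = [lower_bound(n) for n in (2, 4, 16, 256, 1024)]
```

Note that the bound decays strictly slower than \( n^{-1/2} \) alone, by the extra \( (\log_2 n)^{-1/2} \) factor; no choice of weights can beat this rate uniformly over \( SF_d \).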




Author information


Corresponding author

Correspondence to FeiLong Cao.

Additional information

Supported by the National Natural Science Foundation of China (Grant No. 60873206) and the National Basic Research Program of China (Grant No. 2007CB311002).


About this article

Cite this article

Cao, F., Zhang, Y. & Xu, Z. Lower estimation of approximation rate for neural networks. Sci. China Ser. F-Inf. Sci. 52, 1321–1327 (2009). https://doi.org/10.1007/s11432-009-0027-7
