Universal Approximation Theorem for Interval Neural Networks

Published in: Reliable Computing

Abstract

One of the main machine-learning tools is an (artificial) neural network (NN): based on the values $y^{(p)}$ of a certain physical quantity $y$ measured at several points $x^{(p)} = (x^{(p)}_1, \ldots, x^{(p)}_n)$, the NN finds a dependence $y = f(x_1, \ldots, x_n)$ that explains all known observations and predicts the value of $y$ for other points $x = (x_1, \ldots, x_n)$. The ability to describe an arbitrary dependence follows from the universal approximation theorem, according to which an arbitrary continuous function on a bounded set can be approximated, to within any given accuracy, by an appropriate NN.
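As a concrete illustration of this setup (a sketch under assumptions, not the paper's construction), the following Python snippet fits a one-hidden-layer sigmoidal network to sample points $(x^{(p)}, y^{(p)})$ of a continuous function; the random-feature/least-squares fitting procedure and all names are chosen here for illustration only:

```python
# Minimal sketch: fit f_hat(x) = sum_k c_k * sigma(w_k * x + b_k) to
# sample points (x^(p), y^(p)) of a continuous target, here f(x) = sin(x).
# Random hidden units + least-squares output weights are an illustrative
# fitting choice; the universal approximation theorem itself is
# non-constructive.
import numpy as np

def sigma(t):
    """Sigmoidal activation function."""
    return 1.0 / (1.0 + np.exp(-t))

def predict(x, w, b, c):
    """One-hidden-layer network: f_hat(x) = sum_k c_k * sigma(w_k x + b_k)."""
    return sigma(np.outer(x, w) + b) @ c

# Sample points (x^(p), y^(p)) of the target dependence.
rng = np.random.default_rng(0)
x_p = np.linspace(0.0, 2 * np.pi, 50)
y_p = np.sin(x_p)

# Random hidden layer, then solve for output weights by least squares.
n_hidden = 40
w = rng.normal(scale=2.0, size=n_hidden)
b = rng.normal(scale=2.0, size=n_hidden)
H = sigma(np.outer(x_p, w) + b)
c, *_ = np.linalg.lstsq(H, y_p, rcond=None)

print("max |f - f_hat| on the sample:",
      np.max(np.abs(predict(x_p, w, b, c) - y_p)))
```

The theorem guarantees that, with enough hidden units, such a network can drive the approximation error below any prescribed bound; the fitting method above is just one simple way to exhibit an approximating network.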

The measured values of $y$ are often known only with interval uncertainty. To describe such situations, we can allow interval parameters in an NN and thus consider an interval NN. In this paper, we prove the universal approximation theorem for such interval NNs.
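To make "interval parameters" concrete, here is a hedged sketch of an interval NN forward pass: each weight, bias, and output coefficient is an interval $[\underline{a}, \overline{a}]$, and the network output is an interval enclosing the outputs of all ordinary NNs whose parameters lie in those intervals. The function names and the scalar-input case are simplifying assumptions, not the paper's construction:

```python
# Sketch (assumed names) of an interval NN forward pass: weights, biases,
# and output coefficients are intervals [lo, hi], propagated with standard
# interval arithmetic.
import numpy as np

def sigma(t):
    """Increasing sigmoid, so sigma([l, h]) = [sigma(l), sigma(h)]."""
    return 1.0 / (1.0 + np.exp(-t))

def interval_mul(a_lo, a_hi, b_lo, b_hi):
    """Interval product [a]*[b]: min/max over the four endpoint products."""
    prods = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
    return prods.min(axis=0), prods.max(axis=0)

def interval_net(x, w_lo, w_hi, b_lo, b_hi, c_lo, c_hi):
    """Enclosure of sum_k c_k * sigma(w_k * x + b_k) for a scalar input x
    and interval parameters."""
    p_lo, p_hi = interval_mul(x, x, w_lo, w_hi)          # [x, x] * [w]
    h_lo, h_hi = sigma(p_lo + b_lo), sigma(p_hi + b_hi)  # monotone sigma
    o_lo, o_hi = interval_mul(h_lo, h_hi, c_lo, c_hi)    # [h] * [c]
    return o_lo.sum(), o_hi.sum()                        # sum of intervals

# Two hidden units with interval weights around nominal values.
w_lo, w_hi = np.array([0.9, -1.1]), np.array([1.1, -0.9])
b_lo, b_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
c_lo, c_hi = np.array([1.8, 0.4]), np.array([2.2, 0.6])

print("output interval at x = 1:",
      interval_net(1.0, w_lo, w_hi, b_lo, b_hi, c_lo, c_hi))
```

Because the sigmoid is monotone, the hidden-layer enclosure is obtained endpoint-wise; only the products with interval coefficients need the general four-endpoint rule.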





Cite this article

Baker, M.R., Patil, R.B. Universal Approximation Theorem for Interval Neural Networks. Reliable Computing 4, 235–239 (1998). https://doi.org/10.1023/A:1009951412412
