Abstract
Devices such as neural networks typically approximate the elements of some function space X by elements of a nontrivial finite union M of finite-dimensional spaces. It is shown that if X = L^p(Ω) (1 < p < ∞ and Ω ⊂ R^d), then for any positive constant Γ and any continuous function φ from X to M, ‖f − φ(f)‖ > ‖f − M‖ + Γ for some f in X. Thus, no continuous finite neural network approximation can be within any positive constant of a best approximation in the L^p-norm.
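In display form, the claim reads as follows (a restatement of the abstract only; ‖f − M‖ is written out as the distance from f to the set M, and C(X, M) denotes the continuous maps from X into M):

% Setting, as in the abstract: X = L^p(\Omega), 1 < p < \infty,
% \Omega \subset \mathbb{R}^d; M a nontrivial finite union of
% finite-dimensional subspaces of X.
\[
  \forall\, \Gamma > 0,\ \forall\, \varphi \in C(X, M),\ \exists\, f \in X:
  \qquad \| f - \varphi(f) \|_p \;>\; \inf_{g \in M} \| f - g \|_p \;+\; \Gamma .
\]

In particular, taking Γ arbitrarily large shows that the approximation error of any continuous scheme φ is not merely bounded away from the best-approximation error, but exceeds it by an arbitrarily large amount at some f.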
Cite this article
Kainen, P.C., Kůrková, V. & Vogt, A. Continuity of Approximation by Neural Networks in L^p Spaces. Annals of Operations Research 101, 143–147 (2001). https://doi.org/10.1023/A:1010916406274