Abstract.
The many approximation theorems for neural networks do not address the approximation of linear functions per se. The problem for the network is to construct a linear function through superpositions of non-linear activation functions such as the sigmoid. This issue matters for applications of NNs in statistical tests for neglected nonlinearity, where it is common practice to include a linear function through skip-layer connections. Our theoretical analysis and evidence point in the same direction, suggesting that the network can in fact provide linear approximations without additional ‘assistance’. Our paper suggests that skip-layer connections are unnecessary, and if employed could lead to misleading results.
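As a minimal illustration of the abstract's claim (not the authors' own construction), a single hidden sigmoid unit with a small input weight behaves almost linearly near zero, since sigmoid(εx) ≈ 0.5 + εx/4. Rescaling by the output weight 4/ε and an output bias −2/ε therefore recovers f(x) = x to within O(ε²). The weight values and interval below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def linear_via_sigmoid(x, eps=1e-3):
    # One hidden sigmoid unit: input weight eps, output weight 4/eps,
    # output bias -2/eps. Because sigmoid(eps*x) ~ 0.5 + eps*x/4 near
    # zero, this superposition approximates f(x) = x with error O(eps^2).
    return (4.0 / eps) * sigmoid(eps * x) - 2.0 / eps

# Maximum absolute deviation from the identity on [-5, 5]
max_err = max(abs(linear_via_sigmoid(x / 10.0) - x / 10.0)
              for x in range(-50, 51))
print(f"max |error| on [-5, 5]: {max_err:.2e}")
```

This is why no skip-layer ‘assistance’ is needed in principle: the network can drive hidden-unit activations into their near-linear region on its own.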
Received: August 2002. Revised: March 2003.
AMS Classification: 82C32
The authors are grateful to Prof. Mick Silver and to GFK Marketing for help with the provision of data.
Curry, B., Morgan, P.H. Neural networks, linear functions and neglected non-linearity. Computational Management Science 1, 15–29 (2003). https://doi.org/10.1007/s10287-003-0003-4