Abstract
The Turing machine was proposed in 1936 as a model of a mathematician who solves problems by a finitely specifiable algorithm, using unlimited time, energy, pencils, and paper. Although it is the most frequently used model, other computational models and devices are possible, not all of them finitely specifiable. The brain, for example, can be viewed as a powerful computer, given its excellent abilities in speech recognition, image recognition, and the development of new theories. The nervous system, an intricately interconnected web of 10¹⁰–10¹¹ neurons whose synaptic connection strengths change adaptively and continuously, cannot be described as a static algorithm: the chemical and physical processes affecting the neuronal states are not specifiable by finite means.
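To make the contrast concrete, the kind of neural dynamics the abstract refers to can be sketched as a discrete-time analog recurrent network in the style studied by Siegelmann and Sontag: each neuron's state is updated by a saturated-linear function of a weighted sum of states and inputs. The network below is a minimal illustrative sketch with placeholder weights chosen for this example, not a construction from the chapter; when the weights are real (rather than rational) numbers, such networks are exactly the systems that escape finite specification.

```python
# Illustrative sketch of one update step of an analog recurrent network:
#   x(t+1) = sigma(W x(t) + M u(t) + b)
# where sigma is the saturated-linear activation. All weights below are
# arbitrary placeholders for demonstration.

def sigma(z):
    """Saturated-linear activation: clips z to the interval [0, 1]."""
    return min(1.0, max(0.0, z))

def step(x, u, W, M, b):
    """One synchronous update of all neuron states x given input u."""
    return [
        sigma(sum(W[i][j] * x[j] for j in range(len(x)))
              + sum(M[i][k] * u[k] for k in range(len(u)))
              + b[i])
        for i in range(len(x))
    ]

# Tiny two-neuron network with a single input line (placeholder weights).
W = [[0.5, 0.25], [0.0, 0.5]]   # recurrent weights
M = [[1.0], [0.5]]              # input weights
b = [0.0, 0.1]                  # biases
x = [0.0, 0.0]                  # initial state
for u in ([1.0], [0.0], [0.0]): # feed a short input sequence
    x = step(x, u, W, M, b)
print(x)
```

With rational weights such a network is Turing-equivalent; the chapter's point is that with real-valued weights, whose infinite precision encodes information no finite algorithm can, the dynamics exceed the Turing limit.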
Copyright information
© 2000 Springer-Verlag London Limited
About this chapter
Cite this chapter
Siegelmann, H.T. (2000). Finite Versus Infinite Neural Computation. In: Finite Versus Infinite. Discrete Mathematics and Theoretical Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-0751-4_19
Publisher Name: Springer, London
Print ISBN: 978-1-85233-251-8
Online ISBN: 978-1-4471-0751-4