Finite Versus Infinite Neural Computation

Part of the book series: Discrete Mathematics and Theoretical Computer Science ((DISCMATH))

Abstract

The Turing machine was proposed in 1936 as a model of a mathematician who solves problems by a finitely specifiable algorithm, using unlimited time, energy, pencils, and paper. Although it is the most frequently used model, other computational models and devices are possible, not all of them finitely specifiable. The brain, for example, can be viewed as a powerful computer, given its excellent ability at speech recognition, image recognition, and the development of new theories. The nervous system, an intricately interconnected web of 10^10–10^11 neurons whose synaptic connection strengths change in an adaptive and continuous manner, cannot be captured by a static algorithm; the chemical and physical processes affecting the neuronal states are not specifiable by finite means.
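The contrast drawn here can be sketched in code. The snippet below is a minimal, illustrative example (not taken from the chapter): a parity automaton stands in for a finitely specified machine, while a Cantor-style base-4 encoding, of the kind used in analog neural network models, folds an entire input history into a single real-valued "neuron state" in [0, 1], whose precision is what carries the extra information.

```python
def finite_automaton(bits):
    """Parity automaton: two states, a finitely specified transition table.
    The whole machine is this table -- a finite description."""
    state = 0
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    for b in bits:
        state = table[(state, b)]
    return state

def analog_encode(bits):
    """Cantor-style base-4 encoding: each bit b maps to digit 2b+1, so the
    entire (arbitrarily long) input is packed into one real in [0, 1]."""
    x = 0.0
    for b in reversed(bits):
        x = (2 * b + 1) / 4 + x / 4
    return x

def analog_decode_first(x):
    """Recover the first encoded bit by thresholding the analog state,
    as a neuron with a hard threshold would."""
    return 1 if x >= 0.5 else 0

bits = [1, 0, 1, 1]
print(finite_automaton(bits))                      # parity of the ones -> 1
print(analog_decode_first(analog_encode(bits)))    # first input bit -> 1
```

The automaton forgets everything except its current state; the analog state, in contrast, retains the full history in its digits. With floating point the precision is of course bounded, so this is only a finite approximation of the idealized real-valued model the chapter discusses.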




Copyright information

© 2000 Springer-Verlag London Limited

Cite this chapter

Siegelmann, H.T. (2000). Finite Versus Infinite Neural Computation. In: Finite Versus Infinite. Discrete Mathematics and Theoretical Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-0751-4_19

  • DOI: https://doi.org/10.1007/978-1-4471-0751-4_19

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-251-8

  • Online ISBN: 978-1-4471-0751-4
