Three Analog Neurons Are Turing Universal

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 11324))

Abstract

The languages accepted online by binary-state neural networks with rational weights have been shown to be context-sensitive when an extra analog neuron is added (1ANNs). In this paper, we provide an upper bound on the number of additional analog units to achieve Turing universality. We prove that any Turing machine can be simulated by a binary-state neural network extended with three analog neurons (3ANNs) having rational weights, with a linear-time overhead. Thus, the languages accepted offline by 3ANNs with rational weights are recursively enumerable, which refines the classification of neural networks within the Chomsky hierarchy.
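The construction itself is given in the full text. As a rough illustration of the kind of encoding such simulations rely on (in the spirit of analog computation à la Siegelmann and Sontag [14], not the paper's specific 3ANN construction), the following Python sketch shows how the contents of an unbounded binary stack can be stored as a single rational number and manipulated by affine updates with rational coefficients followed by a saturated-linear activation; the function names and the base-4 digit encoding are illustrative assumptions, not taken from the paper.

    from fractions import Fraction

    def sat(x):
        """Saturated-linear activation: clip to the unit interval [0, 1]."""
        return max(Fraction(0), min(Fraction(1), x))

    def encode(stack):
        """Encode a binary stack (top symbol first) as a rational in [0, 1),
        writing bit b as the base-4 digit 2*b + 1 (i.e. 1 or 3)."""
        code = Fraction(0)
        for bit in reversed(stack):
            code = (code + 2 * bit + 1) / 4
        return code

    def push(code, bit):
        """One affine update with rational weights, then saturation."""
        return sat(code / 4 + Fraction(2 * bit + 1, 4))

    def top(code):
        """Read the top bit by a threshold test: codes in [1/4, 1/2) have
        top bit 0, codes in [3/4, 1) have top bit 1."""
        return 0 if code < Fraction(1, 2) else 1

    def pop(code):
        """Invert the push: again affine in the state, plus saturation."""
        return sat(4 * code - (2 * top(code) + 1))

    stack = [1, 0, 1]                 # 1 is the top symbol
    c = encode(stack)
    assert top(c) == 1
    assert pop(push(c, 0)) == c       # push then pop restores the state
    assert pop(pop(pop(c))) == 0      # popping everything empties the stack

Using only the digits 1 and 3 in base 4 keeps the codes of non-empty stacks bounded away from 0 and from each other, so the top symbol can be recovered by a simple threshold test, which is exactly the kind of operation binary-state units can supply alongside the analog ones.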

J. Šíma—Research was done with institutional support RVO: 67985807 and partially supported by the grant of the Czech Science Foundation No. P202/12/G061.


Notes

  1. The results are valid for more general classes of activation functions [6, 11, 15, 22] including the logistic function [5].

References

  1. Alon, N., Dewdney, A.K., Ott, T.J.: Efficient simulation of finite automata by neural nets. J. ACM 38(2), 495–514 (1991)

  2. Balcázar, J.L., Gavaldà, R., Siegelmann, H.T.: Computational power of neural networks: a characterization in terms of Kolmogorov complexity. IEEE Trans. Inf. Theory 43(4), 1175–1183 (1997)

  3. Horne, B.G., Hush, D.R.: Bounds on the complexity of recurrent neural network implementations of finite state machines. Neural Netw. 9(2), 243–252 (1996)

  4. Indyk, P.: Optimal simulation of automata by neural nets. In: Mayr, E.W., Puech, C. (eds.) STACS 1995. LNCS, vol. 900, pp. 337–348. Springer, Heidelberg (1995). https://doi.org/10.1007/3-540-59042-0_85

  5. Kilian, J., Siegelmann, H.T.: The dynamic universality of sigmoidal neural networks. Inf. Comput. 128(1), 48–56 (1996)

  6. Koiran, P.: A family of universal recurrent networks. Theor. Comput. Sci. 168(2), 473–480 (1996)

  7. Lupanov, O.B.: On the synthesis of threshold circuits. Probl. Kibern. 26, 109–140 (1973)

  8. Minsky, M.: Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs (1967)

  9. Orponen, P.: Computing with truly asynchronous threshold logic networks. Theor. Comput. Sci. 174(1–2), 123–136 (1997)

  10. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)

  11. Siegelmann, H.T.: Recurrent neural networks and finite automata. J. Comput. Intell. 12(4), 567–574 (1996)

  12. Siegelmann, H.T.: Neural Networks and Analog Computation: Beyond the Turing Limit. Birkhäuser, Boston (1999)

  13. Siegelmann, H.T., Sontag, E.D.: Analog computation via neural networks. Theor. Comput. Sci. 131(2), 331–360 (1994)

  14. Siegelmann, H.T., Sontag, E.D.: On the computational power of neural nets. J. Comput. Syst. Sci. 50(1), 132–150 (1995)

  15. Šíma, J.: Analog stable simulation of discrete neural networks. Neural Netw. World 7(6), 679–686 (1997)

  16. Šíma, J.: Energy complexity of recurrent neural networks. Neural Comput. 26(5), 953–973 (2014)

  17. Šíma, J.: The power of extra analog neuron. In: Dediu, A.-H., Lozano, M., Martín-Vide, C. (eds.) TPNC 2014. LNCS, vol. 8890, pp. 243–254. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-13749-0_21

  18. Šíma, J.: Neural networks between integer and rational weights. In: Proceedings of the IJCNN 2017 Thirtieth International Joint Conference on Neural Networks, pp. 154–161. IEEE (2017)

  19. Šíma, J., Orponen, P.: General-purpose computation with neural networks: a survey of complexity theoretic results. Neural Comput. 15(12), 2727–2778 (2003)

  20. Šíma, J., Savický, P.: Quasi-periodic β-expansions and cut languages. Theor. Comput. Sci. 720, 1–23 (2018)

  21. Šíma, J., Wiedermann, J.: Theory of neuromata. J. ACM 45(1), 155–178 (1998)

  22. Šorel, M., Šíma, J.: Robust RBF finite automata. Neurocomputing 62, 93–110 (2004)


Author information

Corresponding author

Correspondence to Jiří Šíma.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Šíma, J. (2018). Three Analog Neurons Are Turing Universal. In: Fagan, D., Martín-Vide, C., O'Neill, M., Vega-Rodríguez, M.A. (eds) Theory and Practice of Natural Computing. TPNC 2018. Lecture Notes in Computer Science(), vol 11324. Springer, Cham. https://doi.org/10.1007/978-3-030-04070-3_36


  • DOI: https://doi.org/10.1007/978-3-030-04070-3_36


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04069-7

  • Online ISBN: 978-3-030-04070-3

  • eBook Packages: Computer Science, Computer Science (R0)
