
Robust Implementation of Finite Automata by Recurrent RBF Networks

  • Conference paper
  • First Online:
SOFSEM 2000: Theory and Practice of Informatics (SOFSEM 2000)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1963))

Abstract

In this paper, a recurrent network consisting of O(√m log m) RBF (radial basis function) units with the maximum norm, employing any activation function that takes different values in at least two nonnegative points, is constructed to implement a given deterministic finite automaton with m states. The underlying simulation proves to be robust with respect to analog noise for a large class of smooth activation functions with a special type of inflexion.
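To make the idea concrete, the following is a minimal sketch of how a recurrent RBF network can implement a DFA. It is **not** the paper's compressed O(√m log m) construction: it uses one Gaussian (Euclidean-norm) unit per (state, symbol) transition rather than maximum-norm units, and all names (`build_rbf_dfa`, `run`) are illustrative. The current state and input symbol are one-hot encoded; each RBF unit's center matches exactly one (state, symbol) pair, so with a small width only the matching unit fires, and the output weights route its activation to the one-hot code of the next state.

```python
import numpy as np

def build_rbf_dfa(delta, n_states, n_symbols):
    """One Gaussian RBF unit per (state, symbol) transition.

    delta[q][a] gives the next state for state q reading symbol a.
    Each unit's center is the concatenation of the one-hot state
    and one-hot symbol codes; W_out routes a firing unit to the
    one-hot code of the successor state.
    """
    centers = []
    W_out = np.zeros((n_states, n_states * n_symbols))
    k = 0
    for q in range(n_states):
        for a in range(n_symbols):
            c = np.zeros(n_states + n_symbols)
            c[q] = 1.0
            c[n_states + a] = 1.0
            centers.append(c)
            W_out[delta[q][a], k] = 1.0
            k += 1
    return np.array(centers), W_out

def run(delta, n_states, n_symbols, word, q0=0, sigma=0.1):
    """Simulate the DFA on `word`; return the final state index."""
    centers, W_out = build_rbf_dfa(delta, n_states, n_symbols)
    state = np.zeros(n_states)
    state[q0] = 1.0
    for a in word:
        x = np.zeros(n_symbols)
        x[a] = 1.0
        z = np.concatenate([state, x])
        # Gaussian RBF layer: only the unit whose center matches
        # the current (state, symbol) pair fires noticeably.
        h = np.exp(-np.sum((centers - z) ** 2, axis=1) / (2 * sigma ** 2))
        state = W_out @ h
        # Thresholding restores a clean one-hot state code, which is
        # what makes the simulation tolerant of small analog noise.
        state = (state > 0.5).astype(float)
    return int(np.argmax(state))
```

For example, with the two-state parity automaton `delta = [[0, 1], [1, 0]]` (symbol 1 flips the state), `run(delta, 2, 2, [1, 0, 1, 1])` returns state 1, since the word contains an odd number of 1s. The thresholding step illustrates the robustness theme of the paper: perturbed activations are snapped back to a valid state code on every step, so noise cannot accumulate.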

Research supported by GA AS CR Grant B2030007.




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Šorel, M., Šíma, J. (2000). Robust Implementation of Finite Automata by Recurrent RBF Networks. In: Hlaváč, V., Jeffery, K.G., Wiedermann, J. (eds) SOFSEM 2000: Theory and Practice of Informatics. SOFSEM 2000. Lecture Notes in Computer Science, vol 1963. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44411-4_32

  • DOI: https://doi.org/10.1007/3-540-44411-4_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41348-6

  • Online ISBN: 978-3-540-44411-4

  • eBook Packages: Springer Book Archive
