Finite State Automata and Connectionist Machines: A survey

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 930)

Abstract

Work in the literature relating Finite State Automata (FSAs) and Neural Networks (NNs) is reviewed. These studies have dealt with Grammatical Inference tasks as well as with how to represent FSAs through a neural model. The inference of Regular Grammars through NNs has focused either on the acceptance or rejection of strings generated by the grammar, or on the prediction of the possible successor(s) of each character in a string. Different neural architectures using first- and second-order connections have been adopted. Several techniques described in the literature for extracting the FSA inferred by a trained net are also reported here. Finally, theoretical work on the relationship between NNs and FSAs is outlined and discussed.
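
To make two of the surveyed ideas concrete (training a recurrent net to accept or reject strings, and extracting a discrete automaton from its analog state space), the following minimal Python/NumPy sketch implements a second-order recurrent acceptor of the kind studied by Giles et al. (entry 9 of the Bibliography) together with a crude state-quantization step. The one-hot input encoding and the designated acceptance unit follow the surveyed papers, but the class and helper names, the hyperparameters, and the initialization are illustrative assumptions of this sketch, not any paper's experimental setup; training (e.g. by real-time recurrent learning, entries 30 and 34) is omitted.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderRNN:
    # Minimal second-order recurrent acceptor: the next state is a
    # sigmoid of products of current state units and one-hot input
    # units, s_j(t+1) = g(sum_{i,k} W[j,i,k] * s_i(t) * x_k(t) + b_j).
    # Sizes, seed and the zero bias are assumptions of this sketch.
    def __init__(self, n_states=4, n_symbols=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1.0, 1.0, (n_states, n_states, n_symbols))
        self.b = np.zeros(n_states)
        self.n_states = n_states
        self.n_symbols = n_symbols

    def run(self, string):
        # string: a sequence of symbol indices in range(n_symbols).
        s = np.zeros(self.n_states)
        s[0] = 1.0                        # conventional start state
        trajectory = [s.copy()]
        for symbol in string:
            x = np.zeros(self.n_symbols)
            x[symbol] = 1.0               # one-hot input encoding
            s = sigmoid(np.einsum('jik,i,k->j', self.W, s, x) + self.b)
            trajectory.append(s.copy())
        return np.array(trajectory)       # shape (len(string)+1, n_states)

    def accepts(self, string):
        # Unit 0 doubles as the acceptance unit: the string is
        # accepted when its final activation exceeds 0.5.
        return self.run(string)[-1, 0] > 0.5

def quantize(state, q=2):
    # Map an analog state vector to a discrete cell by splitting each
    # unit's [0, 1] activation range into q equal intervals.
    return tuple(np.minimum((state * q).astype(int), q - 1))

def extract_states(net, strings, q=2):
    # Candidate automaton states: the distinct cells visited while
    # the net processes a set of sample strings.
    return {quantize(s, q) for string in strings for s in net.run(string)}

# Example on an untrained net (outputs are arbitrary before training):
net = SecondOrderRNN()
print(net.accepts([0, 1, 1, 0]))
print(len(extract_states(net, [[0, 1], [1, 1, 0]], q=2)))

An automaton is then read off by treating each occupied cell of the quantized state space as a state and recording, for every cell and input symbol, the cell the net moves to next; this is essentially the extraction idea reported in entries 9 and 13.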

Bibliography

  1. Representation and Recognition of Regular Grammars by means of Second-Order Recurrent Neural Networks. R. Alquézar, A. Sanfeliu. In New Trends in Neural Computation. Eds. J. Mira, J. Cabestany, A. Prieto. Springer-Verlag. Lecture Notes in Computer Science, Vol. 686, pp. 143–148. 1993.

  2. Simulation of Stochastic Regular Grammars through Simple Recurrent Networks. M.A. Castaño, F. Casacuberta, E. Vidal. In New Trends in Neural Computation. Eds. J. Mira, J. Cabestany, A. Prieto. Springer-Verlag. Lecture Notes in Computer Science, Vol. 686, pp. 210–215. 1993.

  3. Inference of Stochastic Regular Languages through Simple Recurrent Networks. M.A. Castaño, E. Vidal, F. Casacuberta. In Procs. of the First International Conference on Grammatical Inference. 1993.

  4. Constructive Learning of Recurrent Neural Networks: Limitations of Recurrent Cascade Correlation and a Simple Solution. D. Chen, C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee, M.W. Goudreau. IEEE Transactions on Neural Networks. 1995. In press.

  5. Finite State Automata and Simple Recurrent Networks. A. Cleeremans, D. Servan-Schreiber, J.L. McClelland. Neural Computation, vol. 1, pp. 372–381. 1989.

  6. Using Hints to Successfully Learn Context-Free Grammars with a Neural Network Pushdown Automaton. S. Das, C.L. Giles, G.Z. Sun. In Advances in Neural Information Processing Systems 5. Eds. C.L. Giles, R.P. Lippmann. 1993.

  7. Finding Structure in Time. J.L. Elman. Technical Report No. 8801. Center for Research in Language. University of California. La Jolla. 1988.

  8. The Recurrent Cascade-Correlation Architecture. S.E. Fahlman. Technical Report CMU-CS-91-100, School of Computer Science, Carnegie Mellon University, Pittsburgh. 1991.

  9. Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks. C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee. Neural Computation, vol. 4, pp. 393–405. 1992.

  10. Extracting and Learning an Unknown Grammar with Recurrent Neural Networks. C.L. Giles, C.B. Miller, D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee. In Advances in Neural Information Processing Systems 4. Eds. J.E. Moody, S.J. Hanson, R.P. Lippmann. 1992.

  11. Inserting Rules into Recurrent Neural Networks. C.L. Giles, C.W. Omlin. In Procs. of the 1992 IEEE Workshop on Neural Networks for Signal Processing, pp. 13–22. 1992.

  12. Rule Refinement with Recurrent Neural Networks. C.L. Giles, C.W. Omlin. In Procs. of the 1993 IEEE International Conference on Neural Networks. 1993.

  13. Extraction, Insertion and Refinement of Symbolic Rules in Dynamically-Driven Recurrent Neural Networks. C.L. Giles, C.W. Omlin. Connection Science, vol. 5, no. 3, pp. 307–337. 1993.

  14. First-Order vs. Second-Order Single Layer Recurrent Neural Networks. M.W. Goudreau, C.L. Giles, S.T. Chakradhar, D. Chen. IEEE Transactions on Neural Networks, vol. 5, no. 3, pp. 511–513. 1994.

  15. Serial order: A parallel distributed processing approach. M.I. Jordan. Technical Report No. 8604. Institute for Cognitive Science. University of California. San Diego. 1986.

  16. Algebraic Grammatical Inference. S.M. Lucas. In Procs. of the First International Conference on Grammatical Inference. 1993.

  17. First Order Recurrent Neural Networks and Deterministic Finite State Automata. P. Manolios, R. Fanelli. Technical Report NNRG-930625A, Department of Computer Science and Physics, Brooklyn College of the City University of New York. Brooklyn. 1993.

  18. Forcing Simple Recurrent Neural Networks to Encode Context. A. Maskara, A. Noetzel. In Procs. of the 1992 Long Island Conference on Artificial Intelligence and Computer Graphics. 1992.

  19. A Logical Calculus of the Ideas Immanent in Nervous Activity. W.S. McCulloch, W. Pitts. Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133. 1943.

  20. Generalization in Neural Networks: The Contiguity Problem. T. Maxwell, C.L. Giles, Y.C. Lee. In Procs. of the International Joint Conference on Neural Networks, vol. 2, pp. 41–46. 1989.

  21. Experimental Comparison of the Effect of Order in Recurrent Neural Networks. C.B. Miller, C.L. Giles. International Journal of Pattern Recognition and Artificial Intelligence. 1993.

  22. Computation: Finite and Infinite Machines. M.L. Minsky. Chap. 3.5. Ed. Prentice-Hall, Englewood Cliffs, New Jersey. 1967.

  23. Training Second-Order Recurrent Neural Networks using Hints. C.W. Omlin, C.L. Giles. In Procs. of the Ninth International Conference on Machine Learning. 1992.

  24. Pruning Recurrent Neural Networks for Improved Generalization Performance. C.W. Omlin, C.L. Giles. Technical Report No. 93-6. Computer Science Department, Rensselaer Polytechnic Institute, Troy, N.Y. 1993.

  25. The Induction of Dynamical Recognizers. J.B. Pollack. Machine Learning, vol. 7, pp. 227–252. 1991.

  26. Learning sequential structure in simple recurrent networks. D.E. Rumelhart, G. Hinton, R. Williams. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. Eds. D.E. Rumelhart, J.L. McClelland, and the PDP Research Group. MIT Press. Cambridge. 1986.

  27. Understanding Neural Networks for Grammatical Inference and Recognition. A. Sanfeliu, R. Alquézar. In Advances in Structural and Syntactic Pattern Recognition, pp. 75–948. Ed. H. Bunke. 1992.

  28. Encoding sequential structure in simple recurrent networks. D. Servan-Schreiber, A. Cleeremans, J.L. McClelland. Technical Report CMU-CS-88-183. School of Computer Science. Carnegie Mellon University. Pittsburgh, PA. 1988.

  29. Graded State Machines: The Representation of Temporal Contingencies in Simple Recurrent Networks. D. Servan-Schreiber, A. Cleeremans, J.L. McClelland. Machine Learning, vol. 7, pp. 161–193. 1991.

  30. Learning Sequential Structure with the Real-Time Recurrent Learning Algorithm. A.W. Smith, D. Zipser. International Journal of Neural Systems, vol. 1, no. 2, pp. 125–131. 1989.

  31. Connectionist Pushdown Automata that Learn Context-Free Grammars. G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee, D. Chen. In Procs. of the International Joint Conference on Neural Networks, vol. 1, pp. 577–580. 1990.

  32. Dynamic Construction of Finite-State Automata from Examples using Hill-Climbing. M. Tomita. In Procs. of the Fourth Annual Cognitive Science Conference, pp. 105–108. 1982.

  33. Induction of Finite-State Languages Using Second-Order Recurrent Networks. R.L. Watrous, G.M. Kuhn. Neural Computation, vol. 4, pp. 406–414. 1992.

  34. Experimental Analysis of the Real-time Recurrent Learning Algorithm. R.J. Williams, D. Zipser. Connection Science, vol. 1, no. 1, pp. 87–111. 1989.

  35. Discrete Recurrent Neural Networks for Grammatical Inference. Z. Zeng, M. Goodman, P. Smyth. IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 320–330. 1994.

Editor information

José Mira, Francisco Sandoval

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Castaño, M.A., Vidal, E., Casacuberta, F. (1995). Finite State Automata and Connectionist Machines: A survey. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_206

  • DOI: https://doi.org/10.1007/3-540-59497-3_206

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7
