Abstract
Work in the literature related to Finite State Automata (FSAs) and Neural Networks (NNs) is reviewed. These studies address Grammatical Inference tasks as well as the representation of FSAs through neural models. The inference of Regular Grammars through NNs has focused either on the acceptance or rejection of strings generated by the grammar, or on the prediction of the possible successor(s) of each character in a string. Different neural architectures using first- and second-order connections have been adopted. Several techniques described in the literature for extracting the FSA inferred by a trained net are also reported here. Finally, theoretical work on the relationship between NNs and FSAs is outlined and discussed.
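To make the "second-order connections" mentioned above concrete, here is a minimal sketch of the second-order recurrent update common in this line of work, where the next state depends on products of the current state and the current input symbol. The function names, the start-state encoding, and the convention of reading acceptance off a designated state unit are illustrative assumptions, not the method of any particular surveyed paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def second_order_step(W, state, symbol_onehot):
    # Second-order update: s'[i] = sigmoid( sum_{j,k} W[i,j,k] * s[j] * x[k] )
    # Each weight multiplies a (state unit, input symbol) pair, which is what
    # distinguishes second-order from first-order recurrent connections.
    return sigmoid(np.einsum('ijk,j,k->i', W, state, symbol_onehot))

def accepts(W, string, n_states, alphabet):
    # Run the network over the string and threshold a designated accept unit.
    state = np.zeros(n_states)
    state[0] = 1.0  # one-hot start state (assumed convention)
    for ch in string:
        x = np.zeros(len(alphabet))
        x[alphabet.index(ch)] = 1.0
        state = second_order_step(W, state, x)
    return bool(state[0] > 0.5)  # unit 0 doubles as the accept unit (assumed)
```

After training on labeled strings, the discrete FSA can be extracted by quantizing or clustering the continuous state vectors the network visits, as the extraction techniques surveyed here describe.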
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Castaño, M.A., Vidal, E., Casacuberta, F. (1995). Finite State Automata and Connectionist Machines: A survey. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_206
Print ISBN: 978-3-540-59497-0
Online ISBN: 978-3-540-49288-7