
Stochastic simple recurrent neural networks

  • Session: Inference of Stochastic Models 2
  • Conference paper
Grammatical Inference: Learning Syntax from Sentences (ICGI 1996)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1147)


Abstract

Simple recurrent neural networks (SRNs) have been advocated as an alternative to traditional probabilistic models for grammatical inference and language modeling. However, unlike hidden Markov models and stochastic grammars, SRNs are not formulated explicitly as probability models, in that they do not provide their predictions in the form of a probability distribution over the alphabet. In this paper, we introduce a stochastic variant of the SRN. This new variant makes explicit the functional description of how the SRN solution reflects the target structure generating the training sequence. We explore the links between the stochastic version of SRNs and traditional grammatical inference models. We show that the stochastic single-layer SRN can be seen as a generalized hidden Markov model or a probabilistic automaton. The two-layer stochastic SRN can be interpreted as a probabilistic machine whose state transitions are triggered by inputs and produce outputs, that is, a probabilistic finite-state sequential transducer. It can also be thought of as a hidden Markov model with two alphabets, each with its own distinct output distribution. We provide efficient procedures based on the forward-backward approach, used in the context of hidden Markov models, to evaluate the various probabilities occurring in the model. We derive a gradient-based algorithm for finding the parameters of the network that maximize the likelihood of the training sequences. Finally, we show that if the target structure generating the training sequences is unifilar, then the trained stochastic SRN behaves deterministically.
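To make the abstract's central idea concrete, here is a minimal sketch (not taken from the paper) of a single-layer SRN whose softmax output layer yields an explicit probability distribution over the alphabet, trained by gradient ascent on the log-likelihood of a training sequence. The alphabet size, hidden-layer size, sigmoid state update, toy training sequence, and the crude finite-difference gradient (standing in for the paper's forward-backward based gradient computation) are all assumptions made for illustration.

```python
# Minimal sketch of a "stochastic SRN": the recurrent state is updated as
# h_t = sigmoid(W h_{t-1} + U x_t) and a softmax readout gives an explicit
# distribution P(next symbol | history), trained by maximizing log-likelihood.
# All sizes, data, and the finite-difference gradient are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = 3   # assumed alphabet size
HIDDEN = 5     # assumed number of hidden (state) units

# Randomly initialised parameters of the single-layer SRN.
W = 0.1 * rng.standard_normal((HIDDEN, HIDDEN))    # recurrent weights
U = 0.1 * rng.standard_normal((HIDDEN, ALPHABET))  # input weights
V = 0.1 * rng.standard_normal((ALPHABET, HIDDEN))  # output (softmax) weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def log_likelihood(seq, params):
    """Sum of log P(x_{t+1} | x_1..x_t) along one symbol sequence."""
    W, U, V = params
    h = np.zeros(HIDDEN)
    ll = 0.0
    for t in range(len(seq) - 1):
        x = np.eye(ALPHABET)[seq[t]]                # one-hot input symbol
        h = 1.0 / (1.0 + np.exp(-(W @ h + U @ x)))  # recurrent state update
        p = softmax(V @ h)                          # distribution over alphabet
        ll += np.log(p[seq[t + 1]])
    return ll

def gradient_ascent_step(seq, params, lr=0.1, eps=1e-5):
    """One crude finite-difference gradient-ascent step on the log-likelihood
    (a stand-in for the paper's forward-backward based gradient procedure)."""
    new_params = []
    for P in params:
        G = np.zeros_like(P)
        for idx in np.ndindex(P.shape):
            old = P[idx]
            P[idx] = old + eps
            up = log_likelihood(seq, params)
            P[idx] = old - eps
            down = log_likelihood(seq, params)
            P[idx] = old
            G[idx] = (up - down) / (2 * eps)
        new_params.append(P + lr * G)
    return new_params

# Toy training sequence over the alphabet {0, 1, 2} (assumed data).
seq = [0, 1, 2, 0, 1, 2, 0, 1, 2]
params = [W, U, V]
for epoch in range(20):
    params = gradient_ascent_step(seq, params)
print("log-likelihood after training:", log_likelihood(seq, params))
```

In this view, the distributed hidden state plays the role of an HMM state and the softmax readout plays the role of its output distribution, which is the sense in which the paper relates the stochastic single-layer SRN to a generalized hidden Markov model or probabilistic automaton.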




Editor information

Laurent Miclet, Colin de la Higuera


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Golea, M., Matsuoka, M., Sakakibara, Y. (1996). Stochastic simple recurrent neural networks. In: Miclet, L., de la Higuera, C. (eds) Grammatical Inference: Learning Syntax from Sentences. ICGI 1996. Lecture Notes in Computer Science, vol 1147. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0033360


  • DOI: https://doi.org/10.1007/BFb0033360

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61778-5

  • Online ISBN: 978-3-540-70678-6

  • eBook Packages: Springer Book Archive
