A Characterization of Simple Recurrent Neural Networks with Two Hidden Units as a Language Recognizer

  • Conference paper
Neural Information Processing (ICONIP 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4984)

Abstract

We give a necessary condition for a simple recurrent neural network with two sigmoidal hidden units to implement a recognizer of the formal language {a^n b^n | n > 0}, which is generated by the rules {S → aSb, S → ab}, and we show that setting the parameters so as to conform to the condition yields a recognizer of the language. The condition accounts for the instability of the learning process reported in previous studies. It also implies that, despite this success in implementing the recognizer, obtaining recognizers of more complicated languages is difficult.
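
To make the abstract concrete, the following Python sketch illustrates the two ingredients it refers to. It is our own illustration under stated assumptions, not the construction from the paper: recognizes_anbn realizes the idealized one-counter behavior that any recognizer of {a^n b^n | n > 0} must implement, and srn_step is the generic update of a simple recurrent (Elman) network with two sigmoidal hidden units, whose parameters the paper's condition constrains. The function names and weight shapes are hypothetical.

import math

def recognizes_anbn(s: str) -> bool:
    # Idealized recognizer of {a^n b^n | n > 0}, the language generated
    # by S -> aSb, S -> ab.  A single counter suffices: count up on 'a',
    # down on 'b', and reject if a 'b' ever precedes an 'a' or the
    # counts disagree.  This is the discrete behavior a two-unit
    # sigmoidal SRN has to approximate.
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:            # an 'a' after a 'b' is outside the language
                return False
            count += 1            # count up on each 'a'
        elif ch == "b":
            seen_b = True
            count -= 1            # count down on each 'b'
            if count < 0:         # more b's than a's seen so far
                return False
        else:
            return False          # alphabet is {a, b}
    return seen_b and count == 0  # n > 0 and the counts match

def srn_step(h, x, W, U, b):
    # One step of a simple recurrent (Elman) network with two sigmoidal
    # hidden units: h_t = sigmoid(W h_{t-1} + U x_t + b), with W a 2x2
    # recurrent matrix, U a 2x2 input matrix (inputs one-hot over
    # {a, b}), and b a length-2 bias.  The values are free parameters
    # here; the paper's necessary condition constrains them.
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(W[i][0] * h[0] + W[i][1] * h[1] +
                    U[i][0] * x[0] + U[i][1] * x[1] + b[i])
            for i in range(2)]

for w in ["ab", "aabb", "aaabbb", "", "ba", "aab", "abab"]:
    print(repr(w), recognizes_anbn(w))

Iterating srn_step over a one-hot encoding of an input string traces the hidden-state trajectory whose geometry the paper analyzes.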

Editor information

Masumi Ishikawa, Kenji Doya, Hiroyuki Miyamoto, Takeshi Yamakawa

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Iwata, A., Shinozawa, Y., Sakurai, A. (2008). A Characterization of Simple Recurrent Neural Networks with Two Hidden Units as a Language Recognizer. In: Ishikawa, M., Doya, K., Miyamoto, H., Yamakawa, T. (eds) Neural Information Processing. ICONIP 2007. Lecture Notes in Computer Science, vol 4984. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69158-7_46

  • DOI: https://doi.org/10.1007/978-3-540-69158-7_46

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-69154-9

  • Online ISBN: 978-3-540-69158-7

  • eBook Packages: Computer Science, Computer Science (R0)
