
Computation in recurrent neural networks: From counters to iterated function systems

  • Scientific Track
  • Conference paper
Advanced Topics in Artificial Intelligence (AI 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1502)


Abstract

In this paper we address the problem of computation in recurrent neural networks (RNNs). In the first part we provide a formal analysis of the dynamical behavior of an RNN with a single self-recurrent unit in the hidden layer, show how such an RNN can be designed to perform an (unrestricted) counting task, and describe a generalization of the counter network that performs binary stack operations.
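
The constructions themselves are developed formally in the paper; the following Python sketch is only a rough, hedged illustration of the underlying idea (ours, not the authors': the encoding s = 2^(-n) for the count and the base-4 encoding for the stack are assumed stand-ins in the spirit of fractal state encodings). It shows how a single real-valued state variable can hold an unbounded count, and how the same mechanism generalizes to a binary stack.

```python
from fractions import Fraction

# Idealized sketch of a single self-recurrent unit used as a counter: the
# activation s encodes the count n as s = 2**(-n), so incrementing halves s
# and decrementing doubles it -- one affine map per input symbol.
def run_counter(word):
    """Count 'a' as +1 and 'b' as -1; return the final count,
    or None if the count would ever drop below zero."""
    s = Fraction(1)                           # count 0  <=>  s = 1
    for symbol in word:
        if symbol == 'a':
            s /= 2                            # increment: 2**(-n) -> 2**(-(n+1))
        else:
            if s >= 1:                        # decrement at count 0: fail
                return None
            s *= 2                            # decrement
    return s.denominator.bit_length() - 1     # recover n from s = 1/2**n

# Generalization to a binary stack: encode the stack bits b1 b2 ... bk
# (b1 = top) as s = sum_i (2*b_i + 1) / 4**i, a base-4 "Cantor" number;
# push, top and pop are again affine/threshold operations on the one state.
def push(s, bit):
    return s / 4 + Fraction(2 * bit + 1, 4)

def top(s):
    return 1 if s >= Fraction(1, 2) else 0

def pop(s):
    return 4 * s - (2 * top(s) + 1)
```

Exact rationals (fractions.Fraction) stand in here for the unbounded precision an idealized unit would have; with ordinary floating-point state the stack encoding degrades after a few dozen pushes.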

In the second part of the paper we focus on the analysis of RNNs. We show how a layered RNN can be mapped to a corresponding iterated function system (IFS) and formulate conditions under which the behavior of the IFS, and therefore the behavior of the corresponding RNN, can be characterized as the performance of stack operations. This result enables us to analyze any layered RNN in terms of classical computation and hence improves our understanding of computation within a broad class of RNNs.
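
To make the RNN-to-IFS correspondence concrete, here is a small sketch in the same spirit (again our illustration under the assumed encoding above, not the paper's formulation): each input symbol selects one contractive map, so reading a word left to right is exactly iterating a two-map IFS, and distinct words are driven to distinct points of the Cantor-set attractor, which is what allows the final state to be decoded as stack contents.

```python
from fractions import Fraction
from itertools import product

# Sketch of the RNN-to-IFS view: each input bit b selects the contraction
# w_b(s) = s/4 + (2*b + 1)/4, so reading a word left to right is exactly
# iterating the two-map IFS {w_0, w_1} on the state s.
w = {b: (lambda s, b=b: s / 4 + Fraction(2 * b + 1, 4)) for b in (0, 1)}

def iterate_ifs(word, s=Fraction(0)):
    for b in word:
        s = w[b](s)          # one IFS step == one push onto the stack
    return s

# Distinct words land on distinct points of the IFS attractor (a Cantor
# set), which is why the final state can be decoded back into the stack.
points = {bits: iterate_ifs(bits) for bits in product((0, 1), repeat=3)}
assert len(set(points.values())) == 8    # 8 words of length 3 -> 8 states
```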

Moreover, we show how to use this knowledge as a design principle for RNNs that implement computational tasks requiring stack operations. We exemplify this principle by presenting the design of particular RNNs for recognizing words of Dyck languages.
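
As a hedged illustration of the target behavior (not of the networks actually designed in the paper), a recognizer for the Dyck language over two bracket pairs needs exactly the single-variable stack operations sketched above: push on an opening bracket, compare-and-pop on a closing one, and an emptiness test at the end.

```python
from fractions import Fraction

# Hypothetical illustration: recognizing the Dyck language over the bracket
# pairs () and [] with the single-variable stack encoding sketched above.
OPEN = {'(': 0, '[': 1}
CLOSE = {')': 0, ']': 1}

def is_dyck(word):
    s = Fraction(0)                                   # empty stack
    for ch in word:
        if ch in OPEN:
            s = s / 4 + Fraction(2 * OPEN[ch] + 1, 4)     # push bracket type
        elif ch in CLOSE:
            if s == 0:                                # closing on empty stack
                return False
            bit = 1 if s >= Fraction(1, 2) else 0         # read top of stack
            if bit != CLOSE[ch]:                      # mismatched bracket type
                return False
            s = 4 * s - (2 * bit + 1)                 # pop
        else:
            return False                              # symbol outside alphabet
    return s == 0                                     # accept iff stack empty

assert is_dyck("([])()") and not is_dyck("([)]") and not is_dyck("(")
```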

The author acknowledges support from the German Academic Exchange Service (DAAD) under grant no. D/97/29570.



Editor information

Grigoris Antoniou, John Slaney


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kalinke, Y., Lehmann, H. (1998). Computation in recurrent neural networks: From counters to iterated function systems. In: Antoniou, G., Slaney, J. (eds) Advanced Topics in Artificial Intelligence. AI 1998. Lecture Notes in Computer Science, vol 1502. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095051


  • DOI: https://doi.org/10.1007/BFb0095051


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65138-3

  • Online ISBN: 978-3-540-49561-1

  • eBook Packages: Springer Book Archive
