
Improving the State Space Organization of Untrained Recurrent Networks

  • Conference paper
Advances in Neuro-Information Processing (ICONIP 2008)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5506)


Abstract

Recurrent neural networks are frequently used in the cognitive science community for modeling linguistic structures. Usually a more or less intensive training process is performed, but several works have shown that untrained recurrent networks initialized with small weights can also be used successfully for this type of task. In this work we demonstrate that the state space organization of an untrained recurrent neural network can be significantly improved by choosing appropriate input representations. We support this notion experimentally on several linguistic time series.
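The abstract's core idea, that a fixed, untrained recurrent network with small weights already organizes its state space by recent input history and that the choice of input representation shapes this organization, can be illustrated with a short simulation. The following is a minimal sketch, not the authors' code: the network size, weight ranges, the `run_reservoir` helper, and the within-cluster spread measure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols, n_hidden = 4, 50

# Untrained recurrent and input weights, drawn once and never adjusted.
# Small recurrent weights keep the dynamics contractive, so states
# naturally cluster by recent input history (the "architectural bias").
W_rec = rng.uniform(-0.1, 0.1, size=(n_hidden, n_hidden))
W_in = rng.uniform(-0.5, 0.5, size=(n_hidden, n_symbols))

def run_reservoir(symbol_seq, input_codes):
    """Drive the untrained network with an encoded symbol sequence;
    return the visited hidden states, one row per time step."""
    h = np.zeros(n_hidden)
    states = []
    for s in symbol_seq:
        h = np.tanh(W_rec @ h + W_in @ input_codes[s])
        states.append(h.copy())
    return np.array(states)

# Two input representations for the same four symbols. The distributed
# codes are purely hypothetical here; in a linguistic setting they might
# instead be derived from the data, e.g. from co-occurrence statistics.
one_hot = np.eye(n_symbols)  # orthogonal, equidistant codes
distributed = rng.normal(0.0, 1.0, size=(n_symbols, n_symbols))

seq = rng.integers(0, n_symbols, size=500)

# How tightly do the visited states cluster by the most recent symbol?
for name, codes in [("one-hot", one_hot), ("distributed", distributed)]:
    S = run_reservoir(seq, codes)
    spread = np.mean([S[seq == k].std(axis=0).mean()
                      for k in range(n_symbols)])
    print(f"{name:11s} mean within-cluster spread = {spread:.4f}")
```

In the small-weight, contractive regime, states driven by the same most-recent symbol fall into tight clusters; swapping one-hot codes for distributed codes changes the geometry of those clusters, which is the kind of effect the paper examines on linguistic sequences.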




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Čerňanský, M., Makula, M., Beňušková, Ľ. (2009). Improving the State Space Organization of Untrained Recurrent Networks. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_82


  • DOI: https://doi.org/10.1007/978-3-642-02490-0_82

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02489-4

  • Online ISBN: 978-3-642-02490-0

  • eBook Packages: Computer Science, Computer Science (R0)
