Training Recurrent Connectionist Models on Symbolic Time Series

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5506)

Abstract

This work provides a short study of training algorithms for adapting recurrent connectionist models to symbolic time series modeling tasks. We show that approaches based on Kalman filtering outperform standard gradient-based training algorithms. We propose a simple approximation to Kalman filtering with favorable computational requirements, and we demonstrate its superior performance on several linguistic time series taken from recently published papers.

This work was supported by the grants VG-1/0848/08 and VG-1/0822/08.
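The extended Kalman filter (EKF) approach mentioned in the abstract treats the network weights as the state of a dynamical system and updates them from prediction errors, as described in the tutorial literature on Kalman filter training of neural networks. The sketch below illustrates one EKF weight update on a deliberately tiny, hypothetical model (a single sigmoid unit with a numerical Jacobian); it is not the paper's recurrent architecture or its proposed approximation, only a minimal illustration of the update equations.

```python
import numpy as np

# Hedged sketch: one EKF step with the weights w as the filtered state.
# Model, dimensions, and noise levels Q, R are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(w, x):
    # Toy model: sigmoid of a dot product (stands in for the network output).
    return sigmoid(np.dot(w, x))

def numerical_jacobian(w, x, eps=1e-6):
    # Finite-difference derivative of the scalar output w.r.t. each weight.
    base = predict(w, x)
    jac = np.zeros_like(w)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        jac[i] = (predict(wp, x) - base) / eps
    return jac.reshape(1, -1)  # shape (1, n_weights)

def ekf_step(w, P, x, y, Q=1e-4, R=1e-2):
    """One EKF update of the weight vector w and its covariance P."""
    H = numerical_jacobian(w, x)          # linearized measurement matrix
    S = H @ P @ H.T + R                   # innovation covariance (1, 1)
    K = P @ H.T / S                       # Kalman gain, shape (n, 1)
    err = y - predict(w, x)               # scalar innovation
    w_new = w + (K * err).ravel()         # weight correction
    P_new = P - K @ H @ P + Q * np.eye(w.size)  # covariance update
    return w_new, P_new

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=3)
P = np.eye(3)
x = np.array([1.0, 0.5, -0.5])
w, P = ekf_step(w, P, x, y=1.0)
```

In practice the covariance update is the expensive part (quadratic in the number of weights), which is the cost the paper's proposed approximation aims to reduce.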






Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Čerňanský, M., Beňušková, Ľ. (2009). Training Recurrent Connectionist Models on Symbolic Time Series. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_35

  • DOI: https://doi.org/10.1007/978-3-642-02490-0_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02489-4

  • Online ISBN: 978-3-642-02490-0

  • eBook Packages: Computer Science (R0)
