Abstract
This work provides a short study of training algorithms suitable for adapting recurrent connectionist models to symbolic time series modeling tasks. We show that approaches based on Kalman filtering outperform standard gradient-based training algorithms. We propose a simple approximation to the Kalman filter with favorable computational requirements, and we demonstrate the superior performance of the proposed method on several linguistic time series taken from recently published papers.
This work was supported by the grants VG-1/0848/08 and VG-1/0822/08.
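For readers unfamiliar with the technique, the sketch below illustrates the standard extended Kalman filter (EKF) weight update commonly used to train recurrent networks, in which the weight vector is treated as the state to be estimated. This is a minimal, illustrative sketch of the general approach only; the function name and all numerical values are our own, and it does not reproduce the specific approximation proposed in the paper.

import numpy as np

def ekf_weight_update(w, P, H, d, y, R, Q):
    # One EKF update step treating the n network weights as the state.
    # w: (n,) weights; P: (n, n) weight error covariance
    # H: (n, m) Jacobian of the m network outputs w.r.t. the weights
    # d: (m,) target output; y: (m,) actual network output
    # R: (m, m) measurement noise; Q: (n, n) process noise
    S = H.T @ P @ H + R              # innovation covariance
    K = P @ H @ np.linalg.inv(S)     # Kalman gain, shape (n, m)
    w_new = w + K @ (d - y)          # correct weights by the output error
    P_new = P - K @ H.T @ P + Q      # update the error covariance
    return w_new, P_new

# Toy usage with 4 weights and 1 output (all values illustrative).
rng = np.random.default_rng(0)
w, P = rng.normal(size=4), np.eye(4)
H = rng.normal(size=(4, 1))
w, P = ekf_weight_update(w, P, H, d=np.array([1.0]), y=np.array([0.2]),
                         R=0.1 * np.eye(1), Q=1e-4 * np.eye(4))

Note that P is n-by-n in the number of weights, so a full EKF step scales quadratically in memory and worse in time; this is precisely the cost that cheaper approximations of the kind proposed in the paper aim to reduce.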
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Čerňanský, M., Beňušková, Ľ. (2009). Training Recurrent Connectionist Models on Symbolic Time Series. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5506. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02490-0_35
DOI: https://doi.org/10.1007/978-3-642-02490-0_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02489-4
Online ISBN: 978-3-642-02490-0
eBook Packages: Computer Science, Computer Science (R0)