Abstract
In Echo State Networks (ESNs) and, more generally, in the Reservoir Computing paradigm (a recent approach to recurrent neural networks), the linear readout weights, i.e., the linear output weights, are the only ones actually learned during training. The standard approach for this is SVD-based pseudo-inverse linear regression. Here it will be compared with two well-known on-line filters, Least Mean Squares (LMS) and Recursive Least Squares (RLS). As we shall illustrate, while LMS performance is not satisfactory, RLS can be a good on-line alternative that may deserve further attention.
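To make the abstract's comparison concrete, below is a minimal sketch (an illustration, not code from the paper) of the three readout-training schemes it mentions: batch SVD-based pseudo-inverse regression, LMS, and RLS, applied to a toy one-step-ahead prediction task on a small random reservoir. Every name and hyperparameter here (reservoir size, spectral radius, LMS step size mu, RLS forgetting factor lam) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir setup; all sizes and constants are illustrative assumptions.
n_res, n_steps, washout = 100, 2000, 100
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, size=n_res)

u = np.sin(0.1 * np.arange(n_steps + 1))  # toy input signal
y_target = u[1:]                          # one-step-ahead prediction task

# Run the reservoir, x(t+1) = tanh(W x(t) + W_in u(t)), and collect states.
X = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x
X, y = X[washout:], y_target[washout:]    # discard the initial transient

# 1) Batch readout via the SVD-based pseudo-inverse: w = X^+ y.
w_pinv = np.linalg.pinv(X) @ y

# 2) LMS: one stochastic-gradient step per sample, w <- w + mu * e * x.
#    Stability requires mu < 2 / lambda_max of the state-correlation matrix.
mu = 1e-3
w_lms = np.zeros(n_res)
for x_t, y_t in zip(X, y):
    e = y_t - w_lms @ x_t
    w_lms += mu * e * x_t

# 3) RLS with forgetting factor lam; P tracks an estimate of the inverse
#    state-correlation matrix and yields a per-sample gain vector k.
lam, delta = 0.999, 1e2
w_rls = np.zeros(n_res)
P = delta * np.eye(n_res)                 # large initial P = weak prior
for x_t, y_t in zip(X, y):
    k = P @ x_t / (lam + x_t @ P @ x_t)   # gain vector
    e = y_t - w_rls @ x_t                 # a priori error
    w_rls += k * e
    P = (P - np.outer(k, x_t @ P)) / lam  # update inverse-correlation estimate

for name, w in [("pinv", w_pinv), ("LMS", w_lms), ("RLS", w_rls)]:
    print(f"{name}: training MSE = {np.mean((X @ w - y) ** 2):.3e}")
```

The contrast the abstract alludes to shows up directly in the updates: LMS takes a plain gradient step whose convergence hinges on the step size and the eigenvalue spread of the state-correlation matrix, while RLS maintains an inverse-correlation estimate and typically converges toward the batch pseudo-inverse solution, at O(N^2) cost per update versus LMS's O(N).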
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Alaíz, C.M., Dorronsoro, J.R. (2011). On the Learning of ESN Linear Readouts. In: Lozano, J.A., Gámez, J.A., Moreno, J.A. (eds.) Advances in Artificial Intelligence, CAEPIA 2011. Lecture Notes in Computer Science, vol. 7023. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25274-7_13
DOI: https://doi.org/10.1007/978-3-642-25274-7_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-25273-0
Online ISBN: 978-3-642-25274-7
eBook Packages: Computer Science (R0)