Abstract
We have implemented recurrent neural network training algorithms as joint estimation of synaptic weights and neuron outputs using approximate nonlinear recursive Bayesian estimators. We considered two derivative-free nonlinear estimators, the Divided Difference Filter and the Unscented Kalman Filter, and compared their computational efficiency and performance to the Extended Kalman Filter as training algorithms for different recurrent neural network architectures. The algorithms and architectures were tested on problems of long-term, chaotic time series prediction.
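The joint-estimation idea in the abstract can be sketched as follows: the filter state stacks the synaptic weights (modelled as a random walk) and the recurrent neuron outputs, and each observed target updates both through the unscented transform. The sketch below is a minimal single-neuron illustration under assumed noise levels and helper names (`sigma_points`, `JointUKF`), not the authors' implementation or architecture.

```python
import numpy as np

def sigma_points(x, P):
    """Sigma points and weights of the basic unscented transform (kappa = 0)."""
    n = len(x)
    L = np.linalg.cholesky(n * P + 1e-9 * np.eye(n))  # jitter for stability
    pts = np.vstack([x, x + L.T, x - L.T])            # (2n+1, n)
    w = np.full(2 * n + 1, 1.0 / (2 * n))
    w[0] = 0.0                                        # central weight for kappa = 0
    return pts, w

class JointUKF:
    """Joint UKF estimation of weights and the output of one recurrent neuron."""
    def __init__(self, q=1e-4, r=1e-2):
        self.n = 4                       # 3 weights + 1 neuron output
        self.x = np.zeros(self.n)        # state: [w_in, w_rec, bias, h]
        self.P = 0.5 * np.eye(self.n)
        self.Q = q * np.eye(self.n)      # process noise: random-walk weights
        self.R = r                       # scalar measurement noise

    def f(self, p, u):
        """Process model: weights unchanged, neuron output recomputed."""
        w, h = p[:3], p[3]
        return np.append(w, np.tanh(w[0] * u + w[1] * h + w[2]))

    def step(self, u, y):
        # Time update: propagate sigma points through the network dynamics.
        pts, wt = sigma_points(self.x, self.P)
        prop = np.array([self.f(p, u) for p in pts])
        x_pred = wt @ prop
        d = prop - x_pred
        P_pred = d.T @ (d * wt[:, None]) + self.Q
        # Measurement update: the neuron output (last component) is observed.
        z_pred = x_pred[3]
        Pzz = P_pred[3, 3] + self.R
        K = P_pred[:, 3] / Pzz
        self.x = x_pred + K * (y - z_pred)
        self.P = P_pred - np.outer(K, K) * Pzz
        return z_pred

# Toy usage: one-step-ahead prediction of a sine wave, feeding back the
# previous observation as input (a stand-in for a chaotic series).
ukf = JointUKF()
ys = np.sin(0.3 * np.arange(300))
errs, u = [], 0.0
for y in ys:
    errs.append((y - ukf.step(u, y)) ** 2)
    u = y
```

Because weights and neuron outputs share one covariance matrix, a single Kalman update corrects both at once, which is what distinguishes this joint formulation from treating the weights alone as the filter state.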
© 2016 Springer International Publishing Switzerland
Cite this paper
Todorović, B., Stanković, M., Moraga, C. (2016). Recurrent Neural Networks Training Using Derivative Free Nonlinear Bayesian Filters. In: Merelo, J.J., Rosa, A., Cadenas, J.M., Dourado, A., Madani, K., Filipe, J. (eds) Computational Intelligence. IJCCI 2014. Studies in Computational Intelligence, vol 620. Springer, Cham. https://doi.org/10.1007/978-3-319-26393-9_23
Print ISBN: 978-3-319-26391-5
Online ISBN: 978-3-319-26393-9