Abstract
Several different learning algorithms for neural networks, based on various approaches, have been proposed in the literature. Although many of these algorithms have proved effective in practical applications, almost all of them use constant learning rates or constant accelerative parameters. The learning procedure of a neural network can be regarded as a problem of estimating (or identifying) constant parameters (i.e. the connection weights of the network) with a nonlinear or linear observation equation. Making use of Kalman filtering, we derive a new back-propagation algorithm whose learning rate is computed by a time-varying Riccati difference equation. Perceptron-like and correlational learning algorithms are obtained as special cases. Furthermore, a self-organizing algorithm for feature maps is constructed within a similar framework.
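To make the idea concrete, the sketch below (not taken from the paper; the scalar-output, linear-observation setting, variable names, and noise variances are assumptions) illustrates how treating the weights as constant parameters to be estimated by a Kalman filter replaces a fixed learning rate with a time-varying gain computed from a Riccati difference equation.

```python
import numpy as np

def kalman_weight_update(w, P, x, d, r=1.0):
    """One Kalman-filter step for estimating constant weights w
    from the linear observation d = x @ w + noise (a hypothetical
    single-output setting, not the paper's exact formulation).

    w : current weight estimate, shape (n,)
    P : current error covariance, shape (n, n)
    x : input pattern, shape (n,)
    d : desired (observed) output, scalar
    r : assumed observation-noise variance
    """
    # Innovation: prediction error for the current pattern.
    e = d - x @ w
    # Time-varying Kalman gain, playing the role of the learning rate.
    K = (P @ x) / (x @ P @ x + r)
    # Weight (state) update driven by the prediction error.
    w_new = w + K * e
    # Riccati difference equation propagating the error covariance.
    P_new = P - np.outer(K, x @ P)
    return w_new, P_new

# Toy usage: recover the weights of a noisy linear target.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.2, 2.0])
w, P = np.zeros(3), 100.0 * np.eye(3)
for _ in range(200):
    x = rng.normal(size=3)
    d = x @ w_true + 0.1 * rng.normal()
    w, P = kalman_weight_update(w, P, x, d, r=0.01)
print(w)  # should approach w_true
```

For a multilayer network with a nonlinear observation equation, the standard extension is to linearize about the current weight estimate, so that x above is replaced by the gradient of the network output with respect to the weights; this is the sense in which a Kalman-filter-based update takes the form of a back-propagation rule with a computed, time-varying learning rate.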
Cite this article
Watanabe, K., Tzafestas, S.G. Learning algorithms for neural networks with the Kalman filters. J Intell Robot Syst 3, 305–319 (1990). https://doi.org/10.1007/BF00439421