Neural Kalman filter
Introduction
Linear dynamical systems (LDS) are widely applied tools in state estimation and control tasks. The so-called Kalman-filter recursion (KFR) makes inference in LDS simple; the resulting estimates are unbiased and have minimal error covariance. Here, an approximation of the KFR is provided (Section 2) by applying the recursive prediction error (RPE) method [5]. We show that the approximation (i) can be represented in neuronal form, (ii) is efficient, and (iii) can be mapped (Section 3) onto the entorhinal–hippocampal (EC–HC) loop, the center of memory functions. Conclusions are drawn in Section 4.
Section snippets
Approximated Kalman-filter recursion
Consider the following LDS:

$$x(t+1) = F x(t) + n_x(t), \qquad y(t) = H x(t) + n_y(t),$$

where the variables $n_x(t)$ and $n_y(t)$ are independent Gaussian noise processes. The task is to estimate the hidden variables $x(t)$ given the series of observations $y(1), \ldots, y(t)$. For the squared norm in the cost, the optimal solution was derived in [4]. The prediction equation is used to estimate $x(t+1)$ before the measurement:

$$\hat{x}(t+1) = F\,\hat{x}(t) + K(t)\left[\,y(t) - H\,\hat{x}(t)\,\right],$$

where $K(t)$ is the 'Kalman gain', which can be …
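A minimal numerical sketch of the standard KFR predict/correct cycle may make the recursion concrete. The function name `kalman_step` and the explicit covariance propagation are illustrative conventions here, not the paper's RPE approximation:

```python
import numpy as np

def kalman_step(x_est, P, y, F, H, Q, R):
    """One predict/correct cycle of the Kalman-filter recursion (KFR).

    x_est : state estimate from the previous step
    P     : error covariance from the previous step
    y     : new observation
    F, H  : state-transition and observation matrices
    Q, R  : covariances of the process and observation noise
    """
    # Prediction: estimate the hidden state before the measurement.
    x_pred = F @ x_est
    P_pred = F @ P @ F.T + Q
    # Kalman gain: weights the innovation (the prediction error).
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Correction: update the prediction with the measured innovation.
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ H) @ P_pred
    return x_new, P_new
```

Iterating this step on a stream of observations drives the estimate toward the hidden state while the covariance `P` settles to its steady-state value.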
Mapping onto the hippocampal-entorhinal loop
Motivated by [6], [8], we map the KF model onto the EC–HC loop (Fig. 2). The mapping is an extension of the memory model described in [6], [7], which stated that the goal of internal representation is to reconstruct the input and to minimize the reconstruction error. The necessity of optimal information transfer then follows and is enabled by bottom-up signal separation (Independent Component Analysis, ICA; see [3] and references therein). The model assumed additional noise filtering and pattern …
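The reconstruction principle behind this mapping can be illustrated with a small sketch: a linear generative model whose hidden code is relaxed by gradient descent on the reconstruction error. The function `settle`, the orthonormal columns of `W`, and the learning rate are illustrative assumptions for a well-conditioned toy, not the actual EC–HC model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: the input y is reconstructed from a hidden code h
# as W @ h.  The columns of W are orthonormal only to keep this toy well
# conditioned; the paper's model is not restricted this way.
W, _ = np.linalg.qr(rng.normal(size=(5, 3)))

def settle(y, W, steps=100, lr=0.5):
    """Relax the hidden code h by feeding back the reconstruction
    error e = y - W @ h (gradient descent on ||e||^2 / 2)."""
    h = np.zeros(W.shape[1])
    for _ in range(steps):
        e = y - W @ h        # bottom-up reconstruction error
        h += lr * (W.T @ e)  # error-driven correction of the code
    return h

h_true = np.array([1.0, -2.0, 0.5])
y = W @ h_true               # an input the model can represent exactly
h = settle(y, W)
residual = np.linalg.norm(y - W @ h)  # reconstruction error after settling
```

For inputs the model can represent, the error-driven relaxation recovers the generating code and drives the residual to (numerically) zero, which is the sense in which the internal representation "reconstructs the input".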
Conclusion
We have extended the functional model of Lőrincz and Buzsáki [6], [7] of memory organization in the EC–HC loop. One of the verified predictions of this model is that specific sustaining mechanisms [1] should exist at the model layer (the deep layers of the EC) to ensure temporal integration of the reconstruction error. However, for high noise, minimizing the reconstruction error itself is not sufficient to estimate the external world (i.e. to form an optimal internal representation): the …
References (9)
- A.V. Egorov et al., Graded persistent activity in entorhinal cortex neurons, Nature (2002)
- A. Gupta et al., Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex, Science (2000)
- A. Hyvärinen et al., Independent Component Analysis (2001)
- R.E. Kalman, A new approach to linear filtering and prediction problems, Trans. ASME-J. Basic Eng. (1960)
Gábor Szirtes received his M.Sc. in chemistry in 1999 and is about to finish his Ph.D. studies in Computer Sciences in András Lőrincz's group at the Eötvös Loránd University of Sciences, Budapest, Hungary. Currently he is working in Kenneth D. Miller's group at Columbia University, New York. His main interests are mapping of high level functions onto the neural correlate, the working of the entorhinal-hippocampal loop and emerging properties in the developing brain, such as direction selectivity in the primary visual cortex.
Barnabás Póczos was born in 1978 in Miskolc, Hungary. He obtained his M.Sc. degree in applied mathematics in 2001 and won first prize in the Hungarian student research competition the same year. He is currently a Ph.D. student in Computer Sciences in András Lőrincz's group at the Eötvös Loránd University of Sciences, Budapest. His recent research works include multidimensional independent component analysis, kernel methods, machine learning, and artificial neural networks.
András Lőrincz is a professor and senior researcher. He has been affiliated with the Faculty of Informatics of the Eötvös University since 1998. His research focuses on distributed intelligent systems and their applications in neurobiological and cognitive modeling, as well as medicine. He has acted as the PI of several international projects in collaboration with Panasonic, Honda FTR and the US Air Force in the fields of hardware-software co-synthesis, image processing and human-computer collaboration. He was awarded the Széchenyi Professor Award, Master Professor Award, Széchenyi István Award and Kalmár Prize in 2000, 2001, 2003, and 2004, respectively. He received his PhD and CSc degrees in experimental and theoretical solid-state physics and chemical physics, respectively, and habilitated in laser physics based on his work on optimal control of quantum systems.