Abstract
We present a new approach to learning directed information flow networks from multi-channel spike train data. A novel scoring function, the Snap Shot Score, assesses candidate networks with respect to the quality of their causal explanation of the data. Additionally, we suggest a generic concept of plausibility for assessing network learning techniques under partial observability. Examples demonstrate the assessment of networks with the Snap Shot Score, and neural network simulations show its performance in complex situations with partial observability. We discuss the application of the new score to real data and indicate how it can be modified to suit other neural data types.
Notes
Other choices for the decay constant are possible: the smaller d is chosen, the larger the range of time-lags considered for detecting interrelations. Extreme values, where d = 0 (activity level constantly 1 once a spike has occurred on the channel) or d ≈ 0 (activity decaying extremely slowly), are unlikely to deliver sensible results. We have chosen d = 1/3 to keep the examples (Section 3) expressive and clear. For real data, the decay constant can be derived from the anticipated maximal causal lag (in time-bins) or by using a parameter series as outlined in Example 3, later.
A loop-link makes a node its own parent and child at the same time. We will refer to such configurations as self-exciting.
Implementation note: Profiling implementations of our method (in C (Kernighan and Ritchie 1988) and Python (van Rossum et al. 2009)) revealed that calculating joins is computationally much more expensive than calculating the SSS value. Instead of recalculating a join for different nodes, we suggest performing the scoring join-wise, i.e., scoring all nodes against a join once it has been computed.
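The join-wise strategy can be sketched as follows. Note that `compute_join` and `sss` below are illustrative stand-ins (element-wise maximum and mean co-activity), not the method's actual definitions; only the caching pattern is the point:

```python
from itertools import combinations

def compute_join(activity, parents):
    # Stand-in join: element-wise maximum of the parents' activity traces.
    return [max(vals) for vals in zip(*(activity[p] for p in parents))]

def sss(join, child_trace):
    # Stand-in score: mean co-activity of join and child (illustration only).
    return sum(j * c for j, c in zip(join, child_trace)) / len(child_trace)

def score_all(activity, max_parents=2):
    """Score every (parent set, child) pair, computing each join only once."""
    nodes = sorted(activity)
    scores = {}
    for r in range(1, max_parents + 1):
        for parents in combinations(nodes, r):
            join = compute_join(activity, parents)  # expensive step, done once
            for child in nodes:
                if child not in parents:
                    scores[(parents, child)] = sss(join, activity[child])
    return scores
```

The inner loop reuses the cached join for every candidate child, which is the saving the profiling result motivates.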
Generally the mean score is unknown, because an exhaustive evaluation is computationally infeasible at practical problem sizes.
For independent random spike trains, the SSS values of all parent configurations lie within mean ± SD and approach the mean score (\(SD \searrow 0\)) as spike-train length increases (not shown).
Start- and end-nodes are underlined, nodes on the path in the full network (given for illustration) are italic, and hidden nodes are in brackets.
A sequence of directed links is called a path if each link originates at its predecessor's destination. The length l(a → b) of a directed path from node a to b is defined as the number of links on the path. The length of a path a → b directly corresponds to the time-lag a signal needs to propagate from a to b. In our neural simulation, the time-lag in time-bins (1 ms) equals the length of a path.
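The path length defined here (and thus the propagation time-lag in time-bins) can be computed by breadth-first search; a minimal sketch, where the representation of links as a set of (source, destination) pairs is an assumption for illustration:

```python
from collections import deque

def path_length(links, a, b):
    """Shortest directed path length from a to b, counted in links.
    links is a set of (source, destination) pairs; returns None if b
    is unreachable from a."""
    succ = {}
    for s, d in links:
        succ.setdefault(s, []).append(d)
    queue, seen = deque([(a, 0)]), {a}
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in succ.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None
```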
For \(l_{\min} = 1\) there exist 12 plausible links (≈ 6.6% of 182 possible links, \(l_{\max} = 1\)), 27 (≈ 14.8%, \(l_{\max} = 2\)), 34 (≈ 18.7%, \(l_{\max} = 3\)), and 37 (≈ 20.0%, \(l_{\max} = 4\)).
Note that the definition of the recovery rate corresponds to sensitivity. We have called it differently because we do not expect the recovery rate to reach 100% (as is explained in the text later), which the reader might assume if confronted with the familiar but misleading term sensitivity.
The chance level for at least h hits among p plausible links out of N total links, with k links learned, is \( \sum_{i=h}^{\min\{p,k\}}q_i\), where \(q_i = \frac{\binom{p}{i}\binom{N-p}{k-i}}{\binom{N}{k}}\).
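This hypergeometric tail sum can be evaluated directly; a minimal sketch using Python's math.comb:

```python
from math import comb

def chance_level(h, p, k, N):
    """Probability of at least h hits among k learned links when p of the
    N possible links are plausible (hypergeometric tail, as defined above)."""
    return sum(comb(p, i) * comb(N - p, k - i) / comb(N, k)
               for i in range(h, min(p, k) + 1))
```

For example, with p = 1 plausible link out of N = 2 and k = 1 learned link, the chance of at least one hit is 0.5.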
The \(\textit{impetus}=100\cdot\#\textit{evoked spikes}/{\#\textit{stimulating spikes}}\) where number of \(\textit{evoked spikes} = \#\textit{spikes in simulation output} - \#\textit{stimulating spikes}\). If, for example, impetus=0%, then no spikes were evoked by the stimulation and the simulation output is equal to uncorrelated random spike trains. For impetus=100%, the simulation output is a mixture of two halves: stimulation spikes and evoked spikes.
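The impetus definition translates directly into code; a minimal sketch:

```python
def impetus(n_output_spikes, n_stimulating_spikes):
    """Impetus (%) as defined above: evoked spikes are the output spikes
    beyond the stimulation spikes fed into the simulation."""
    evoked = n_output_spikes - n_stimulating_spikes
    return 100.0 * evoked / n_stimulating_spikes
```

An output of 13 spikes from 10 stimulating spikes thus gives an impetus of 30%.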
The data sets amount to a total recording time of 5 min. The impetus was found to lie between 29.7% and 30.0%. For an impetus of 30%, the simulation output contains on average 3 evoked spikes per 10 uncorrelated stimulation spikes.
Activity level series correspond to rate-limited firing rate series that were computed using a causal kernel. A causal kernel function f satisfies \(\text{supp}(f) \cap \mathbb{R}_{<0} = \{ x \in \mathbb{R} \mid f(x)>0 \} \cap \mathbb{R}_{<0} = \emptyset\) (Dayan and Abbott 2005, p. 14). Such a kernel does not make use of information gained in the future. The activity level of channel k at time t is given by \(a_{k,t} = \sum_{j=0}^{t} f(t-j)\, s_{k,j}\).
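As an illustration, the causal convolution can be sketched with an exponential decay kernel; the kernel form \(f(\text{lag}) = e^{-d\cdot\text{lag}}\) is an assumption for illustration only — any kernel vanishing on negative lags qualifies as causal:

```python
import math

def activity_levels(spikes, d=1/3):
    """Activity level a_t = sum_{j=0..t} f(t-j) * s_j for a 0/1 spike
    sequence, with the (assumed) causal kernel f(lag) = exp(-d * lag)."""
    return [sum(math.exp(-d * (t - j)) * spikes[j] for j in range(t + 1))
            for t in range(len(spikes))]
```

With d = 0 the level stays at 1 after a spike; with small d it decays slowly, matching the discussion of the decay constant in the first note above.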
References
Abbott, L. F. (1999). Lapicque’s introduction of the integrate-and-fire model neuron (1907). Brain Research Bulletin, 50(5–6), 303–304.
Aertsen, A. M. H. J., Gerstein, G. L., Habib, M. K., & Palm, G. (1989). Dynamics of neuronal firing correlation—modulation of effective connectivity. Journal of Neurophysiology, 61(5), 900–917.
Airoldi, E. M. (2007). Getting started in probabilistic graphical models. PLoS Computational Biology, 3(12), e252.
Ashlock, D. (2004). Evolutionary computation for modeling and optimization. New York: Springer.
Astolfi, L., Cincotti, F., Mattia, D., Marciani, M. G., Baccalá, L. A., Fallani, F. D., et al. (2006). Assessing cortical functional connectivity by partial directed coherence: Simulations and application to real data. IEEE Transactions on Biomedical Engineering, 53(9), 1802–1812.
Baccalá, L. A., & Sameshima, K. (2001). Partial directed coherence: A new concept in neural structure determination. Biological Cybernetics, 84(6), 463–474.
Bäck, T. (1996). Evolutionary algorithms in theory and practice: Evolution strategies, evolutionary programming, genetic algorithms. New York: Oxford University Press.
Borst, A., & Theunissen, F. E. (1999). Information theory and neural coding. Nature Neuroscience, 2(11), 947–957.
Brown, E. N., Kass, R. E., & Mitra, P. P. (2004). Multiple neural spike train data analysis: State-of-the-art and future challenges. Nature Neuroscience, 7(5), 456–461.
Burge, J., Lane, T., Link, H., Qiu, S., & Clark, V. P. (2009). Discrete dynamic Bayesian network analysis of fMRI data. Human Brain Mapping, 30(1), 122–137.
Cadotte, A. J., DeMarse, T. B., He, P., & Ding, M. (2008). Causal measures of structure and plasticity in simulated and living neural networks. PLoS ONE, 3(10), e3355.
Casella, G., & George, E. I. (1992). Explaining the Gibbs sampler. The American Statistician, 46(3), 167–174.
Cerny, V. (1985). Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. Journal of Optimization Theory and Applications, 45(1), 41–51.
Chornoboy, E. S., Schramm, L. P., & Karr, A. F. (1988). Maximum-likelihood identification of neural point process systems. Biological Cybernetics, 59(4–5), 265–275.
Cooper, G. F., & Herskovits, E. (1992). A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9(4), 309–347.
Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2001). Greedy algorithms. In Introduction to algorithms (2nd ed., pp. 370–404). Cambridge: MIT.
Cox, R. T. (1946). Probability, frequency and reasonable expectation. American Journal of Physics, 14(1), 1–13.
Dayan, P., & Abbott, L. F. (2005). Theoretical neuroscience: Computational and mathematical modeling of neural systems (1st paperback ed.). Cambridge: MIT.
Eberhart, R., Shi, Y., & Kennedy, J. (2001). Swarm intelligence. Artificial intelligence. San Francisco: Morgan Kaufmann.
Eichler, M. (2006). On the evaluation of information flow in multivariate systems by the directed transfer function. Biological Cybernetics, 94(6), 469–482.
Eldawlatly, S., Zhou, Y., Jin, R., & Oweiss, K. (2008). Reconstructing functional neuronal circuits using dynamic Bayesian networks. In 30th annual international IEEE engineering in medicine and biology society (EMBS) conference, vol. 2008, pp. 5531–5534, Vancouver, British Columbia.
Feller, W. (1950). An introduction to probability theory and its applications (vol. 1, 3rd ed.). New York: Wiley.
Friedman, N. (1997). Learning belief networks in the presence of missing values and hidden variables. In 14th international conference on machine learning (ICML 1997), pp. 125–133. Nashville: Morgan Kaufmann.
Friedman, N., Linial, M., Nachman, I., & Pe’er, D. (2000). Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7(3–4), 601–620.
Friston, K. J. (1994). Functional and effective connectivity in neuroimaging: A synthesis. Human Brain Mapping, 2, 56–78.
Gerstein, G. L., & Aertsen, A. M. (1985). Representation of cooperative firing activity among simultaneously recorded neurons. Journal of Neurophysiology, 54(6), 1513–1528.
Gerstein, G. L., & Perkel, D. H. (1969). Simultaneously recorded trains of action potentials: Analysis and functional interpretation. Science, 164(3881), 828–830.
Gerstein, G. L., Perkel, D. H., & Dayhoff, J. E. (1985). Cooperative firing activity in simultaneously recorded populations of neurons: Detection and measurement. Journal of Neuroscience, 5(4), 881–889.
Gerstner, W., & Kistler, W. M. (2002). Spiking neuron models: Single neurons, populations, plasticity (1st ed.). Cambridge: Cambridge University Press.
Ghahramani, Z. (1998). Learning dynamic Bayesian networks. Adaptive Processing of Sequences and Data Structures, 1387, 168–197.
Granger, C. W. J. (1969). Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37(3), 424–438.
Hastings, W. K. (1970). Monte-Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1), 97–109.
Hebb, D. O. (1949). The organization of behavior. New York: Wiley.
Heckerman, D., Geiger, D., & Chickering, D. M. (1995). Learning Bayesian networks—The combination of knowledge and statistical-data. Machine Learning, 20(3), 197–243.
Heuschkel, M. O., Fejtl, M., Raggenbass, M., Bertrand, D., & Renaud, P. (2002). A three-dimensional multi-electrode array for multi-site stimulation and recording in acute brain slices. Journal of Neuroscience Methods, 114(2), 135–148.
Jezzard, P., Matthews, P. M., & Smith, S. M. (2001). Functional MRI: An introduction to methods (1st ed.). Oxford: Oxford University Press.
Johnson, J. L., & Welsh, J. P. (2003). Independently movable multielectrode array to record multiple fast-spiking neurons in the cerebral cortex during cognition. Methods, 30(1), 64–78.
Li, J., Wang, Z. J., & McKeown, M. J. (2006). Dynamic Bayesian networks (DBNs) demonstrate impaired brain connectivity during performance of simultaneous movements in Parkinson’s disease. In 3rd IEEE international symposium on biomedical imaging: Nano to macro (pp. 964–967).
Kaminski, M. J., & Blinowska, K. J. (1991). A new method of the description of the information flow in the brain structures. Biological Cybernetics, 65(3), 203–210.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In IEEE international conference on neural networks, vol. 4, pp. 1942–1948, Perth, WA.
Kernighan, B. W., & Ritchie, D. M. (1988). The C programming language (2nd ed.). Englewood Cliffs: Prentice Hall.
Kim, S., Imoto, S., & Miyano, S. (2004). Dynamic Bayesian network and nonparametric regression for nonlinear modeling of gene networks from time series gene expression data. Biosystems, 75(1–3), 57–65.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680.
Lam, W., & Bacchus, F. (1994). Learning Bayesian belief networks: an approach based on the MDL principle. Computational Intelligence, 10(3), 269–293.
Lauritzen, S. L. (1996). Graphical models. Oxford: Oxford University Press.
Li, J., Wang, Z. J., & McKeown, M. J. (2007). A framework for group analysis of fMRI data using dynamic Bayesian networks. In Annual international conference of the IEEE engineering in medicine and biology society, pp. 5992–5995.
Lindsey, B. G., & Gerstein, G. L. (2006). Two enhancements of the gravity algorithm for multiple spike train analysis. Journal of Neuroscience Methods, 150(1), 116–127.
Madigan, D., & Raftery, A. E. (1994). Model selection and accounting for model uncertainty in graphical models using Occam’s window. Journal of the American Statistical Association, 89(428), 1535–1546.
Makarov, V. A., Panetsos, F., & de Feo, O. (2005). A method for determining neural connectivity and inferring the underlying network dynamics using extracellular spike recordings. Journal of Neuroscience Methods, 144(2), 265–279.
Matthews, P. M., & Jezzard, P. (2004). Functional magnetic resonance imaging. Journal of Neurology Neurosurgery and Psychiatry, 75(1), 6–12.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092.
Murphy, K. P. (2002). Dynamic Bayesian networks: Representation, inference and learning. PhD thesis, University of California, Berkeley.
Murphy, K., & Mian, S. (1999). Modelling gene expression data using dynamic Bayesian networks. Technical report, MIT Artificial Intelligence Laboratory.
Neyman, J., & Pearson, E. S. (1933). On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London, Series A, 231, 289–337.
Nunez, P. L., & Srinivasan, R. (2007). Electroencephalogram. Scholarpedia, 2(2), 1348.
Nykamp, D. Q. (2005). Revealing pairwise coupling in linear-nonlinear networks. SIAM Journal on Applied Mathematics, 65(6), 2005–2032.
Oka, H., Shimono, K., Ogawa, R., Sugihara, H., & Taketani, M. (1999). A new planar multielectrode array for extracellular recording: Application to hippocampal acute slice. Journal of Neuroscience Methods, 93(1), 61–67.
Okatan, M., Wilson, M. A., & Brown, E. N. (2005). Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17(9), 1927–1961.
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.
Perkel, D. H., Gerstein, G. L., & Moore, G. P. (1967). Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophysical Journal, 7(4), 419–440.
Perrin, B. E., Ralaivola, L., Mazurie, A., Bottani, S., Mallet, J., & d’Alche Buc, F. (2003). Gene networks inference using dynamic Bayesian networks. Bioinformatics, 19(Suppl 2), ii138–ii148.
Pillow, J. W., Shlens, J., Paninski, L., Sher, A., Litke, A. M., & Chichilnisky, E. J. (2008). Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207), 995–999.
Rajapakse, J. C., Wang, Y., Zheng, X., & Zhou, J. (2008). Probabilistic framework for brain connectivity from functional MR images. IEEE Transactions on Medical Imaging, 27(6), 825–833.
Rajapakse, J. C. & Zhou, J. (2007). Learning effective brain connectivity with dynamic Bayesian networks. Neuroimage, 37(3), 749–760.
Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1999). Spikes: Exploring the neural code (1st paperback ed.). Cambridge: MIT.
Robert, C. P., & Casella, G. (2004). The multi-stage Gibbs sampler. In Monte Carlo statistical methods (2nd ed., pp. 337–370). New York: Springer.
Sameshima, K., & Baccalá, L. A. (1999). Using partial directed coherence to describe neuronal ensemble interactions. Journal of Neuroscience Methods, 94, 93–103.
Sato, T., Suzuki, T., & Mabuchi, K. (2007). A new multi-electrode array design for chronic neural recording, with independent and automatic hydraulic positioning. Journal of Neuroscience Methods, 160(1), 45–51.
Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461–464.
Smith, V. A., Yu, J., Smulders, T. V., Hartemink, A. J., & Jarvis, E. D. (2006). Computational inference of neural information flow networks. PLoS Computational Biology, 2(11), e161.
Sporns, O., Chialvo, D. R., Kaiser, M., & Hilgetag, C. C. (2004). Organization, development and function of complex brain networks. Trends in Cognitive Sciences, 8(9), 418–425.
Stein, R. B. (1965). A theoretical analysis of neuronal variability. Biophysical Journal, 5, 173–194.
Stosiek, C., Garaschuk, O., Holthoff, K., & Konnerth, A. (2003). In vivo two-photon calcium imaging of neuronal networks. Proceedings of the National Academy of Sciences of the United States of America, 100(12), 7319–7324.
Takahashi, D. Y., Baccalá, L. A., & Sameshima, K. (2007). Connectivity inference between neural structures via partial directed coherence. Journal of Applied Statistics, 34(10), 1259–1273.
Truccolo, W., Eden, U. T., Fellows, M. R., Donoghue, J. P., & Brown, E. N. (2005). A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. Journal of Neurophysiology, 93(2), 1074–1089.
Tsytsarev, V., Taketani, M., Schottler, F., Tanaka, S., & Hara, M. (2006). A new planar multielectrode array: recording from a rat auditory cortex. Journal of Neural Engineering, 3(4), 293–298.
van Rossum et al. (2009). Python language website. http://www.python.org/.
Whitley, D. (1994). A genetic algorithm tutorial. Statistics and Computing, 4(2), 65–85.
Zou, M., & Conzen, S. D. (2005). A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data. Bioinformatics, 21(1), 71–79.
Acknowledgements
We thank two anonymous reviewers for a thorough review and helpful comments. This work was supported by the CARMEN e-science project (www.carmen.org.uk) funded by the EPSRC (EP/E002331/1).
Action Editor: Rob Kass
Echtermeyer, C., Smulders, T.V. & Smith, V.A. Causal pattern recovery from neural spike train data using the Snap Shot Score. J Comput Neurosci 29, 231–252 (2010). https://doi.org/10.1007/s10827-009-0174-2