Abstract
Stimulus response latency is the time period between the presentation of a stimulus and the occurrence of a change in the neural firing evoked by the stimulation. Response latency has been explored, and estimation methods proposed, mostly for excitatory stimuli, for which the neuron reacts to the stimulus with an increase in its firing rate. We focus on estimating the response latency in the case of inhibitory stimuli. The models used in this paper represent two different descriptions of the response latency: the latency is either constant across trials or a random variable. In the case of random latency, special attention is given to models with selective interaction. The aim is to propose methods for estimating the latency or the parameters of its distribution. The parameters are estimated by four different methods: the method of moments, the maximum-likelihood method, a method comparing the empirical and theoretical cumulative distribution functions, and a method based on the Laplace transform of the probability density function. All four methods are applied to simulated data and compared.
Acknowledgments
M.L. and P.L. were supported by the Grant Agency of the Czech Republic, project P304/12/G069, and by RVO:67985823. S.D. was supported by the Danish Council for Independent Research | Natural Sciences. The work is part of the Dynamical Systems Interdisciplinary Network, University of Copenhagen.
Appendices
Appendix 1: Conditional and unconditional distribution of \(T\) in Models A and B
General expressions for the cdf, the Laplace transform of the pdf, and the mean and variance of \(T_\mathrm{A}\), see (4) and (7), and of \(T_\mathrm{B}\), see (5) and (8), are given below, first conditionally on \(\theta \) and then unconditionally.
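As a reminder (a standard relation, not specific to the models considered here), the means and variances below follow from the Laplace transform of the pdf of a non-negative random variable \(T\),
\[
\widehat{f_T}(s) = \mathbb {E}\left( e^{-sT}\right) = \int _0^\infty e^{-st} f_T(t)\,\mathrm{d}t, \qquad \mathbb {E}(T) = -\widehat{f_T}\,'(0), \qquad \mathbb {E}(T^2) = \widehat{f_T}\,''(0),
\]
so that \(\mathrm{Var}(T) = \widehat{f_T}\,''(0) - \left[ \widehat{f_T}\,'(0)\right] ^2\).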
Model A From the definition of \(T_\mathrm{A}\) in (2) and using that \(W\) and \(U\) are independent, or by integrating (4), the cdf of \(T_\mathrm{A}\) is
To determine the mean and variance of \(T_\mathrm{A}\), we derive the Laplace transform of the pdf of \(T_\mathrm{A}\) given by (4),
where \(\widehat{f_U}(s)\) is the Laplace transform of \(f_U(t)\). Thus, it holds
The unconditional expressions are
where we have used the relations \(\mathbb {E}(T) = \mathbb {E}\left[ \mathbb {E}(T\,|\,\varTheta ) \right] \) and \(\mathrm{Var}(T) = \mathbb {E}\left[ \mathrm{Var}(T\,|\,\varTheta ) \right] + \mathrm{Var}\left[ \mathbb {E}(T\,|\,\varTheta ) \right] \).
Model B From the independence of \(W\) and \(U\) and based on (3) we get the cdf of \(T_\mathrm{B}\),
The Laplace transform of the pdf of \(T_\mathrm{B}\) is
Using the Laplace transform \(\widehat{f_{T_\mathrm{B}|\,\theta }}(s)\) yields the mean and variance of \(T_\mathrm{B}\),
The unconditional expressions are
Appendix 2: Laplace transform and moments in Models 1–5
Model 1 The Laplace transform of (10) is
From (29) and (30), with \(\mathbb {E}(U) = 1/\kappa \) and \(\mathbb {E}(U^2)=2/\kappa ^2\), we get
Model 2 The Laplace transform of (11) is
From (36) and (37), with \(\mathbb {E}(U)=k/\lambda \) and \(\mathbb {E}(U^2)=k(k+1)/\lambda ^2\), it follows that
Model 3 The Laplace transform, mean and variance of (12) are
Model 4 From (38)–(40), the Laplace transform, mean and variance are
Model 5 Using (13) with \(\widehat{f_E}(s)=\left( \lambda /(s+\lambda )\right) ^k\), the Laplace transform of \(f_X(t)\) is
The mean of \(X_{M5}\), calculated from (14), is
The Laplace transform calculated from (17) is
Again, from \(\widehat{f_{T_{M5}}}(s)\) we can determine the mean of \(T_{M5}\),
In general, it is difficult to find the inverse transforms of \(\widehat{f_{X_{M5}}}(s)\) and \(\widehat{f_{T_{M5}}}(s)\). However, for \(k=1\), when the excitatory pulses form a Poisson process, it is possible. For a Poisson process, \(f^+_E(t)\) is equal to \(f_E(t)\), and therefore the distribution of \(T_{M5}\) is the same as the distribution of \(X_{M5}\), i.e. \(f_{X_{M5}}(t) = f_{T_{M5}}(t)\). The pdf is therefore
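To illustrate how the moments in this appendix are obtained by differentiating the Laplace transforms, here is a minimal symbolic sketch (assuming SymPy is available) for the gamma Laplace transform \(\widehat{f_E}(s)=\left( \lambda /(s+\lambda )\right) ^k\) used above; it recovers the mean \(k/\lambda \), second moment \(k(k+1)/\lambda ^2\) and variance \(k/\lambda ^2\).

# Symbolic check (SymPy assumed): moments of a gamma(k, lambda) density
# recovered by differentiating its Laplace transform at s = 0.
import sympy as sp

s, lam, k = sp.symbols("s lambda k", positive=True)

fhat = (lam / (s + lam))**k                           # Laplace transform of the gamma pdf
mean = sp.simplify(-sp.diff(fhat, s).subs(s, 0))      # k/lambda
second = sp.simplify(sp.diff(fhat, s, 2).subs(s, 0))  # k*(k+1)/lambda**2
var = sp.simplify(second - mean**2)                   # k/lambda**2

print(mean, second, var)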
Appendix 3: Moment estimators
Models with constant latency (Models 1 and 2) By plugging (42), (43), (45) and (46) into (21), we obtain the moment equations for the two models. In both cases, the resulting system of equations has no analytical solution with respect to \(\theta \). However, the parameter \(\kappa \) or \(k\) can be isolated from the first equation and inserted into the second. The estimates of \(\theta ^*\) are then calculated numerically as solutions to the resulting equations:
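A minimal numerical sketch of this procedure is given below; the function names are hypothetical placeholders, and the model-specific theoretical variance of \(T\) and the expression of \(\kappa \) (or \(k\)) isolated from the first moment equation, e.g. from (42) and (43) for Model 1, must be supplied.

# Sketch: moment estimation of theta* when the moment equations have no
# closed-form solution. `model_var` and `kappa_from_mean` are hypothetical
# placeholders for the theoretical variance of T and for the nuisance
# parameter isolated from the first moment equation E(T) = t_bar.
import numpy as np
from scipy.optimize import brentq

def estimate_theta(t, model_var, kappa_from_mean, bracket):
    """Solve the second moment equation for theta after eliminating kappa."""
    t_bar, s2 = np.mean(t), np.var(t, ddof=1)

    def g(theta):
        kappa = kappa_from_mean(theta, t_bar)   # kappa expressed from E(T) = t_bar
        return model_var(theta, kappa) - s2     # remaining equation Var(T) = s2

    return brentq(g, *bracket)                  # root search within the bracket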
Models with random latency (Models 3 and 4) The moment equations (21) for Models 3 and 4 have analytical solutions. The estimators of \(\theta ^*\) are
Model with selective interaction (Model 5) Since \(\theta ^*\) is the only unknown parameter in Model 5, we use only the first moment in (21). The estimate is denoted by \(\hat{\theta }^*_{M,T}\). Because the distribution of \(X\) is known, the observations of \(X\) can also be used for the estimation. The moment equation \(\mathbb {E}(X) = \bar{x}\) is employed, where \(\bar{x}\) is the average of the observations \(x_1,\ldots ,x_n\) of \(X\). This moment estimator is denoted by \(\hat{\theta }^*_{M,X}\). Another alternative is to use the observations \(t_i\) and \(x_i\) together. Let \(Z\) denote the time from the stimulus onset to the second observed spike after it, i.e. \(Z = T + X\). The mean of \(Z\) is \(\mathbb {E}(Z) = \mathbb {E}(T) + \mathbb {E}(X)\), and the moment equation is thus \(\mathbb {E}(T) + \mathbb {E}(X) = \bar{z}\), where \(\bar{z}\) is the average of \(t_i+x_i,\,i=1,\ldots ,n\). The corresponding moment estimator is denoted by \(\hat{\theta }^*_{M,Z}\).
The solution to \(\mathbb {E}(X)=\bar{x}\) with \(\mathbb {E}X\) given by (54) yields
No closed-form expressions exist for \(\hat{\theta }^*_{M,T}\) and \(\hat{\theta }^*_{M,Z}\); they are found as numerical solutions to the equations
If excitatory pulses form a Poisson process, i.e. \(k=1\), all three moment equations have analytical solutions, namely
Appendix 4: Log-likelihood functions
The pdfs of \(T_\mathrm{A}\) and \(T_\mathrm{B}\) are given in (4) and (5) and the log-likelihood functions of \(\theta ^*\) are
and
where \(n_{\theta ^*}\) is the number of observations \(t_i \le \theta ^*\).
For Models 1, 2 and 3, the log-likelihood functions are
The log-likelihood function for Model 5 when \(k=1\) is
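A minimal sketch of maximizing such a log-likelihood numerically is given below; the density \(f_T(t;\theta ^*)\) is passed in as a hypothetical placeholder and should be replaced by the model-specific pdf, e.g. (10)–(12).

# Sketch: maximum-likelihood estimation of theta* by numerical maximization of
# the log-likelihood. `pdf` is a hypothetical placeholder for the model pdf
# f_T(t; theta) evaluated at the observed latencies t_1, ..., t_n.
import numpy as np
from scipy.optimize import minimize_scalar

def mle_theta(t, pdf, bounds):
    """Return the value of theta maximizing sum_i log f_T(t_i; theta)."""
    t = np.asarray(t)

    def neg_loglik(theta):
        dens = pdf(t, theta)
        if np.any(dens <= 0):          # outside the support of the model
            return np.inf
        return -np.sum(np.log(dens))

    res = minimize_scalar(neg_loglik, bounds=bounds, method="bounded")
    return res.x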