
Estimating latency from inhibitory input

Original Paper, Biological Cybernetics

Abstract

Stimulus response latency is the time between the presentation of a stimulus and the change in neural firing that the stimulus evokes. Response latency has been explored, and estimation methods proposed, mostly for excitatory stimuli, to which the neuron reacts by an increase in the firing rate. We focus instead on estimating the response latency for inhibitory stimuli. The models used in this paper represent two different descriptions of response latency: the latency is either constant across trials or a random variable. In the case of random latency, special attention is given to models with selective interaction. The aim is to propose methods for estimating the latency or the parameters of its distribution. Parameters are estimated by four different methods: the method of moments, the maximum-likelihood method, a method comparing the empirical and theoretical cumulative distribution functions, and a method based on the Laplace transform of the probability density function. All four methods are applied to simulated data and compared.




Acknowledgments

M.L. and P.L. were supported by the Grant Agency of the Czech Republic, project P304/12/G069, and by RVO:67985823. S.D. was supported by the Danish Council for Independent Research | Natural Sciences. The work is part of the Dynamical Systems Interdisciplinary Network, University of Copenhagen.

Corresponding author: Marie Levakova.

Electronic supplementary material


Supplementary material 1 (txt 22 KB)

Appendices

Appendix 1: Conditional and unconditional distribution of \(T\) in Model A and B

General expressions for the cdf, the Laplace transform of the pdf, and the mean and variance of \(T_\mathrm{A}\), see (4) and (7), and of \(T_\mathrm{B}\), see (5) and (8), are given below, first conditionally on \(\theta \) and then unconditionally.

Model A From the definition of \(T_\mathrm{A}\) in (2) and using that \(W\) and \(U\) are independent, or by integrating (4), the cdf of \(T_\mathrm{A}\) is

$$\begin{aligned} F_{T_\mathrm{A}|\,\theta }(t) = {\left\{ \begin{array}{ll} 1-\mathrm{e}^{-\lambda t} &{} t \in [0,\theta ] \\ 1-\mathrm{e}^{-\lambda \theta } + \mathrm{e}^{-\lambda \theta }F_U(t-\theta ) &{} t \in (\theta , \infty ). \end{array}\right. } \end{aligned}$$
(27)

To determine the mean and variance of \(T_\mathrm{A}\), we derive the Laplace transform of the pdf of \(T_\mathrm{A}\) given by (4),

$$\begin{aligned} \widehat{f_{T_\mathrm{A}|\,\theta }}(s) = \frac{\lambda }{s+\lambda }\left( 1-\mathrm{e}^{-(s+\lambda )\theta }\right) + \mathrm{e}^{-(s+\lambda )\theta } \widehat{f_U}(s), \end{aligned}$$
(28)

where \(\widehat{f_U}(s)\) is the Laplace transform of \(f_U(t)\). It follows that

$$\begin{aligned} \mathbb {E}(T_\mathrm{A}\,|\,\theta ) = \frac{1}{\lambda }\left[ 1-\mathrm{e}^{-\lambda \theta } \right] + \mathbb {E}(U)\mathrm{e}^{-\lambda \theta } \end{aligned}$$
(29)
$$\begin{aligned} \mathrm{Var}(T_\mathrm{A}\,|\,\theta )&= \mathrm{e}^{-\lambda \theta }\left[ \mathbb {E}(U^2) + 2\mathbb {E}(U)\left( \theta - \frac{1}{\lambda } \right) - \frac{2\theta }{\lambda }\right] \nonumber \\&\quad -\, \mathrm{e}^{-2\lambda \theta }\left[ \mathbb {E}(U)-\frac{1}{\lambda } \right] ^2 + \frac{1}{\lambda ^2}. \end{aligned}$$
(30)

The unconditional expressions are

$$\begin{aligned}&\widehat{f_{T_\mathrm{A}}}(s) = \frac{1}{s+\lambda +1/\theta ^*}\left( \lambda + \frac{\widehat{f_U}(s)}{\theta ^*} \right) \end{aligned}$$
(31)
$$\begin{aligned}&\mathbb {E}(T_\mathrm{A}) = \frac{\mathbb {E}(U)+\theta ^*}{\lambda \theta ^*+1} \end{aligned}$$
(32)
$$\begin{aligned}&\mathrm{Var}(T_\mathrm{A}) = \frac{{\theta ^*}^2+(\lambda \theta ^*+1)\mathbb {E}(U^2)-\left[ \mathbb {E}(U)\right] ^2}{(\lambda \theta ^*+1)^2} \end{aligned}$$
(33)

where we have used the relations \(\mathbb {E}(T) = \mathbb {E}\left[ \mathbb {E}(T\,|\,\varTheta ) \right] \) and \(\mathrm{Var}(T) = \mathbb {E}\left[ \mathrm{Var}(T\,|\,\varTheta ) \right] + \mathrm{Var}\left[ \mathbb {E}(T\,|\,\varTheta ) \right] \).
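As a numerical sanity check of (32), one can simulate Model A directly: its cdf and Laplace transform imply that \(T_\mathrm{A}=W\) when \(W\le \theta \) and \(T_\mathrm{A}=\theta +U\) otherwise, and the \(1/\theta ^*\) terms in (31)–(33) correspond to \(\varTheta \) being exponentially distributed with mean \(\theta ^*\). A short Monte Carlo sketch, with purely illustrative parameter values and an exponential \(U\):

```python
import random

random.seed(1)
lam, theta_star, mean_U = 2.0, 0.5, 0.3  # illustrative values; U ~ Exp with mean 0.3
n = 200_000

total = 0.0
for _ in range(n):
    theta = random.expovariate(1.0 / theta_star)  # Theta ~ Exp with mean theta*
    w = random.expovariate(lam)                   # spontaneous spike time W ~ Exp(lambda)
    # Model A: T = W if the spike comes before the latency elapses, else theta + U
    total += w if w <= theta else theta + random.expovariate(1.0 / mean_U)

empirical = total / n
theoretical = (mean_U + theta_star) / (lam * theta_star + 1)  # Eq. (32): 0.4 here
print(empirical, theoretical)
```

With these values, (32) gives \((0.3+0.5)/(2\cdot 0.5+1)=0.4\), and the empirical mean agrees to within Monte Carlo error.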

Model B From the independence of \(W\) and \(U\) and based on (3) we get the cdf of \(T_\mathrm{B}\),

$$\begin{aligned} F_{T_\mathrm{B}|\,\theta }(t) = {\left\{ \begin{array}{ll} 1-\mathrm{e}^{-\lambda t} &{} t \in [0,\theta ] \\ 1-\mathrm{e}^{-\lambda \theta }+\int _\theta ^t \lambda \mathrm{e}^{-\lambda y}F_U(t-y)\,\mathrm{d}y &{} t \in (\theta , \infty ). \end{array}\right. } \end{aligned}$$
(34)

The Laplace transform of the pdf of \(T_\mathrm{B}\) is

$$\begin{aligned} \widehat{f_{T_\mathrm{B}|\,\theta }}(s) = \frac{\lambda }{\lambda +s} \left[ 1-\mathrm{e}^{-(\lambda +s)\theta } \left( 1-\widehat{f_U}(s) \right) \right] . \end{aligned}$$
(35)

Using the Laplace transform \(\widehat{f_{T_\mathrm{B}|\,\theta }}(s)\) yields the mean and variance of \(T_\mathrm{B}\),

$$\begin{aligned} \mathbb {E}(T_\mathrm{B}\,|\,\theta ) = \frac{1}{\lambda } + \mathrm{e}^{-\lambda \theta }\mathbb {E}(U) \end{aligned}$$
(36)
$$\begin{aligned} \mathrm{Var}(T_\mathrm{B}\,|\,\theta )&= \mathrm{e}^{-\lambda \theta }\left[ \mathbb {E}(U^2) + 2\theta \mathbb {E}(U) \right] \nonumber \\&\quad - \mathrm{e}^{-2\lambda \theta } \left[ \mathbb {E}(U) \right] ^2 + \frac{1}{\lambda ^2}. \end{aligned}$$
(37)

The unconditional expressions are

$$\begin{aligned}&\widehat{f_{T_\mathrm{B}}}(s) = \frac{\lambda }{s+\lambda +1/\theta ^*} \left[ 1 + \frac{\widehat{f_U}(s)}{\theta ^*(s+\lambda )} \right] \end{aligned}$$
(38)
$$\begin{aligned}&\mathbb {E}(T_\mathrm{B}) = \frac{1}{\lambda } + \frac{\mathbb {E}(U)}{\lambda \theta ^*+1} \end{aligned}$$
(39)
$$\begin{aligned}&\mathrm{Var}(T_\mathrm{B}) = \frac{\mathbb {E}(U^2)}{\lambda \theta ^*+1} + \frac{\mathbb {E}(U)[2\theta ^*-\mathbb {E}(U)]}{(\lambda \theta ^*+1)^2} + \frac{1}{\lambda ^2}. \end{aligned}$$
(40)
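Equation (39) can be checked the same way: the cdf (34) implies that \(T_\mathrm{B}=W\) when \(W\le \theta \) and \(T_\mathrm{B}=W+U\) otherwise, again with \(\varTheta \) exponential with mean \(\theta ^*\). A sketch with illustrative values:

```python
import random

random.seed(2)
lam, theta_star, mean_U = 2.0, 0.5, 0.3  # illustrative values; U ~ Exp with mean 0.3
n = 200_000

total = 0.0
for _ in range(n):
    theta = random.expovariate(1.0 / theta_star)  # Theta ~ Exp with mean theta*
    w = random.expovariate(lam)
    # Model B: the delay U is added to W itself when W > Theta
    total += w if w <= theta else w + random.expovariate(1.0 / mean_U)

empirical = total / n
theoretical = 1.0 / lam + mean_U / (lam * theta_star + 1)  # Eq. (39): 0.65 here
print(empirical, theoretical)
```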

Appendix 2: Laplace transform and moments in Model 1–5

Model 1 The Laplace transform of (10) is

$$\begin{aligned} \widehat{f_{T_{M1}}}(s) = \frac{\lambda }{s+\lambda }\left( 1-\mathrm{e}^{-(\lambda +s)\theta ^*} \right) + \frac{\kappa }{s+\kappa } \mathrm{e}^{-(\lambda +s)\theta ^*}. \end{aligned}$$
(41)

From (29) and (30) with \(\mathbb {E}(U) = 1/\kappa \) and \(\mathbb {E}(U^2)=2/\kappa ^2\) we get

$$\begin{aligned} \mathbb {E}(T_{M1}) = \frac{1-\mathrm{e}^{-\lambda \theta ^*}}{\lambda } + \frac{\mathrm{e}^{-\lambda \theta ^*}}{\kappa } \end{aligned}$$
(42)
$$\begin{aligned} \mathrm{Var}(T_{M1})&= \frac{1}{\lambda ^2} + \frac{2 (\lambda - \kappa ) (1 + \kappa \theta ^*)}{\lambda \kappa ^2} \mathrm{e}^{-\lambda \theta ^*}\nonumber \\&\quad - \left( \frac{\lambda - \kappa }{\lambda \kappa } \mathrm{e}^{-\lambda \theta ^*} \right) ^2. \end{aligned}$$
(43)
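Model 1 is Model A with a constant latency \(\theta ^*\) and \(U\sim \mathrm{Exp}(\kappa )\), so (42) and (43) can be verified by simulating that mechanism directly (parameter values below are purely illustrative):

```python
import math, random

random.seed(3)
lam, kappa, theta_star = 2.0, 5.0, 0.5  # illustrative values
n = 200_000

samples = []
for _ in range(n):
    w = random.expovariate(lam)
    # Model 1: constant latency theta*, exponential delay U ~ Exp(kappa)
    samples.append(w if w <= theta_star else theta_star + random.expovariate(kappa))

emp_mean = sum(samples) / n
emp_var = sum((s - emp_mean) ** 2 for s in samples) / n

q = math.exp(-lam * theta_star)
th_mean = (1 - q) / lam + q / kappa                                  # Eq. (42)
th_var = (1 / lam ** 2
          + 2 * (lam - kappa) * (1 + kappa * theta_star) / (lam * kappa ** 2) * q
          - ((lam - kappa) / (lam * kappa) * q) ** 2)                # Eq. (43)
print(emp_mean, th_mean, emp_var, th_var)
```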

Model 2 The Laplace transform of (11) is

$$\begin{aligned} \widehat{f_{T_{M2}}}(s) = \frac{\lambda }{s+\lambda } + \frac{\lambda \mathrm{e}^{-(\lambda +s)\theta ^*}}{s+\lambda }\left[ \left( \frac{\lambda }{s+\lambda } \right) ^k-1 \right] . \end{aligned}$$
(44)

From (36) and (37), with \(\mathbb {E}(U)=k/\lambda \) and \(\mathbb {E}(U^2)=k(k+1)/\lambda ^2\), it follows that

$$\begin{aligned}&\mathbb {E}(T_{M2}) = \frac{1}{\lambda } + \frac{k}{\lambda }\mathrm{e}^{-\lambda \theta ^*} \end{aligned}$$
(45)
$$\begin{aligned}&\mathrm{Var}(T_{M2}) = \frac{1}{\lambda ^2} + \frac{k(2\lambda \theta ^*\!+k+\!1)}{\lambda ^2}\mathrm{e}^{-\lambda \theta ^*} \! - \frac{k^2}{\lambda ^2}\mathrm{e}^{-2\lambda \theta ^*}. \end{aligned}$$
(46)

Model 3 The Laplace transform, mean and variance of (12) are

$$\begin{aligned} \widehat{f_{T_{M3}}}(s)&= \frac{\lambda }{s+\lambda + 1/\theta ^*} \nonumber \\&\quad +\frac{\kappa }{(\lambda -\kappa )\theta ^* +1}\left[ \frac{1}{s+\kappa } - \frac{1}{s+\lambda +1/\theta ^*} \right] \end{aligned}$$
(47)
$$\begin{aligned}&\mathbb {E}(T_{M3}) = \frac{\kappa \theta ^*+1}{\kappa (\lambda \theta ^*+1)} \end{aligned}$$
(48)
$$\begin{aligned}&\mathrm{Var}(T_{M3}) = \frac{(\kappa \theta ^*)^2 + 2\lambda \theta ^* + 1}{\kappa ^2(\lambda \theta ^*+1)^2}. \end{aligned}$$
(49)

Model 4 From (38)–(40), the Laplace transform, mean and variance are

$$\begin{aligned}&\widehat{f_{T_{M4}}}(s) = \frac{\lambda }{s+\lambda +1/\theta ^*}\left[ 1 + \frac{1}{\theta ^*(s+\lambda )}\left( \frac{\lambda }{s+\lambda }\right) ^k\right] \end{aligned}$$
(50)
$$\begin{aligned}&\mathbb {E}(T_{M4}) = \frac{\lambda \theta ^* + k+1}{\lambda (\lambda \theta ^* + 1)} \end{aligned}$$
(51)
$$\begin{aligned}&\mathrm{Var}(T_{M4}) = \frac{1}{\lambda ^2}\left\{ 1+\frac{k\left[ (k+3)\lambda \theta ^* + 1 \right] }{(\lambda \theta ^*+1)^2} \right\} \end{aligned}$$
(52)
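Model 4 combines the Model B mechanism with a gamma-distributed \(U\) (\(\mathbb {E}(U)=k/\lambda \)) and an exponential latency \(\varTheta \) with mean \(\theta ^*\), so (51) can be checked by simulation (illustrative values):

```python
import random

random.seed(4)
lam, theta_star, k = 2.0, 0.5, 3  # illustrative values
n = 200_000

total = 0.0
for _ in range(n):
    theta = random.expovariate(1.0 / theta_star)  # Theta ~ Exp with mean theta*
    w = random.expovariate(lam)
    u = random.gammavariate(k, 1.0 / lam)         # U ~ Gamma(k, rate lambda)
    total += w if w <= theta else w + u           # Model B structure
total /= n

theoretical = (lam * theta_star + k + 1) / (lam * (lam * theta_star + 1))  # Eq. (51)
print(total, theoretical)
```

With these values (51) gives \(5/4=1.25\).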

Model 5 Using (13) with \(\widehat{f_E}(s)=\left( \lambda /(s+\lambda )\right) ^k\), the Laplace transform of \(f_X(t)\) is

$$\begin{aligned} \widehat{f_{X_{M5}}}(s) = \frac{\left[ \lambda (s+\lambda )\right] ^{k}}{\left[ (s+\lambda )\left( s+\lambda +\frac{1}{\theta ^*}\right) \right] ^{k} + \left[ \lambda (s+\lambda )\right] ^{k} - \left[ \lambda \!\left( s+\lambda +\frac{1}{\theta ^*}\right) \right] ^{k}}. \end{aligned}$$
(53)

The mean of \(X_{M5}\), calculated from (14), is

$$\begin{aligned} \mathbb {E}(X_{M5}) = \frac{k}{\lambda }\left( \frac{\lambda \theta ^* + 1}{\lambda \theta ^*} \right) ^{k} . \end{aligned}$$
(54)

The Laplace transform calculated from (17) is

$$\begin{aligned}&\widehat{f_{T_{M5}}}(s)\nonumber \\&\quad = \frac{-\lambda }{k(s+1/\theta ^*)}\left\{ \left( \frac{\lambda }{s+\lambda +1/\theta ^*}\right) ^k - 1 + \left( \frac{\lambda }{s+\lambda +1/\theta ^*}\right) ^k \right. \nonumber \\&\qquad \left. \times \frac{\left[ s \left( \left( \frac{\lambda }{s+\lambda }\right) ^k - \left( \frac{\lambda }{s+\lambda +1/\theta ^*}\right) ^k\right) + \frac{1}{\theta ^*}\left( \left( \frac{\lambda }{s+\lambda }\right) ^k-1\right) \right] }{s \left[ 1 - \left( \frac{\lambda }{s+\lambda }\right) ^k + \left( \frac{\lambda }{s+\lambda +1/\theta ^*}\right) ^k\right] } \right\} . \end{aligned}$$
(55)

Again, from \(\widehat{f_{T_{M5}}}(s)\) we can determine the mean of \(T_{M5}\),

$$\begin{aligned} \mathbb {E}(T_{M5}) = \left( \frac{k}{\lambda } - \theta ^* \right) \left( \frac{\lambda \theta ^* + 1}{\lambda \theta ^*} \right) ^k + \frac{k+1}{2\lambda } + \theta ^*. \end{aligned}$$
(56)

In general, it is difficult to find the inverse transforms of \(\widehat{f_{X_{M5}}}(s)\) and \(\widehat{f_{T_{M5}}}(s)\). However, for \(k=1\), i.e. when the excitatory pulses form a Poisson process, it is possible. For a Poisson process \(f^+_E(t)\) is equal to \(f_E(t)\), and therefore the distribution of \(T_{M5}\) is the same as the distribution of \(X_{M5}\), i.e. \(f_{X_{M5}}(t) = f_{T_{M5}}(t)\). The pdf is therefore

$$\begin{aligned} f_{T_{M5}}(t)&= \frac{1}{2}\frac{\lambda }{4\lambda \theta ^* + 1} \mathrm{e}^{-\frac{t}{2}(2\lambda +1/\theta ^*)} \nonumber \\&\quad \times \left[ \left( 1+4\lambda \theta ^*+ \sqrt{1+4\lambda \theta ^*} \right) \mathrm{e}^{-\frac{t}{2\theta ^*} \sqrt{1+4\lambda \theta ^*}} \right. \nonumber \\&\quad + \left. \left( 1+4\lambda \theta ^*- \sqrt{1+4\lambda \theta ^*} \right) \mathrm{e}^{\frac{t}{2\theta ^*}\sqrt{1+4\lambda \theta ^*}} \right] . \end{aligned}$$
(57)
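The pdf (57) can be checked numerically: it should integrate to one, and its mean should reproduce (56) with \(k=1\). A quadrature sketch with illustrative values:

```python
import math

lam, theta_star = 2.0, 0.5  # illustrative values; Eq. (57) assumes k = 1
c = 1 + 4 * lam * theta_star
r = math.sqrt(c)
a = lam + 1 / (2 * theta_star)  # the two exponents in (57), combined:
b = r / (2 * theta_star)        # they are -(a + b) t and -(a - b) t

def pdf(t):
    # Eq. (57), with each pair of exponentials merged for numerical stability
    return lam / (2 * c) * ((c + r) * math.exp(-(a + b) * t)
                            + (c - r) * math.exp(-(a - b) * t))

# trapezoidal quadrature on [0, 40]; the tail decays at rate a - b > 0
h, steps = 1e-3, 40_000
mass = mean = 0.0
for i in range(steps + 1):
    t = i * h
    f = pdf(t)
    w = 0.5 if i in (0, steps) else 1.0
    mass += w * f * h
    mean += w * t * f * h

k = 1
th_mean = ((k / lam - theta_star) * ((lam * theta_star + 1) / (lam * theta_star)) ** k
           + (k + 1) / (2 * lam) + theta_star)  # Eq. (56)
print(mass, mean, th_mean)
```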

Appendix 3: Moment estimators

Models with constant latency (Model 1 and 2) By plugging (42), (43), (45) and (46) into (21), we get the moment equations for the models. In both cases, the resulting system of equations has no analytical solution with respect to \(\theta ^*\). However, the parameter \(\kappa \) or \(k\) can be isolated from the first equation and inserted into the second. The estimates of \(\theta ^*\) are then calculated numerically as solutions to these equations:

$$\begin{aligned} \text{ Model } \text{1: } s^2&= \frac{1}{\lambda ^2}- \left( \bar{t} - \frac{1}{\lambda }\right) ^2 \nonumber \\&\quad +\,2\left( \bar{t} - \frac{1}{\lambda }\right) \left[ \left( \bar{t} - \frac{1}{\lambda }\right) \mathrm{e}^{\lambda \theta ^*} + \frac{1}{\lambda } + \theta ^*\right] \end{aligned}$$
(58)
$$\begin{aligned} \text{ Model } \text{2: } s^2&= \frac{1}{\lambda ^2}\Big [ (2\lambda \theta ^* + 1)(\lambda \bar{t} - 1) \nonumber \\&\quad +\, (\lambda \bar{t} - 1)^2\mathrm{e}^{\lambda \theta ^*} - \lambda \bar{t}\left( \lambda \bar{t} - 2\right) \Big ] . \end{aligned}$$
(59)
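Solving (58) numerically is a one-dimensional root search. As a self-consistency sketch, one can feed (58) the exact moments of Model 1 from (42)–(43) (in place of sample moments \(\bar{t}\) and \(s^2\)) and check that it returns the true \(\theta ^*\); the bracket for the bisection is an assumption and must isolate the relevant root, since (58) can have more than one solution:

```python
import math

lam, kappa, theta_true = 2.0, 5.0, 0.5  # illustrative values
q = math.exp(-lam * theta_true)

# exact first two moments of Model 1, Eqs. (42)-(43), used as t-bar and s^2
tbar = (1 - q) / lam + q / kappa
s2 = (1 / lam ** 2
      + 2 * (lam - kappa) * (1 + kappa * theta_true) / (lam * kappa ** 2) * q
      - ((lam - kappa) / (lam * kappa) * q) ** 2)

def g(theta):
    # moment equation (58), rearranged to g(theta) = 0
    d = tbar - 1 / lam
    rhs = (1 / lam ** 2 - d ** 2
           + 2 * d * (d * math.exp(lam * theta) + 1 / lam + theta))
    return rhs - s2

# plain bisection; the bracket [0.1, 0.6] is chosen to contain the root at 0.5
lo, hi = 0.1, 0.6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
theta_hat = 0.5 * (lo + hi)
print(theta_hat)  # recovers 0.5
```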

Models with random latency (Model 3 and 4) The moment equations (21) for Model 3 and 4 have analytical solutions. The estimators of \(\theta ^*\) are

$$\begin{aligned} \hat{\theta }^*_{M,M3}&= \left( \frac{2\bar{t}(\lambda \bar{t}-1)}{2(s^2-\bar{t}^2)} - \frac{3}{2}\lambda \right. \nonumber \\&\quad \left. +\frac{\sqrt{\lambda ^2(\bar{t}^2+s^2)^2+4\lambda \bar{t}(\bar{t}-3s^2)-4(\bar{t}^2-2s^2)}}{2(s^2-\bar{t}^2)}\right) ^{-1} \end{aligned}$$
(60)
$$\begin{aligned} \hat{\theta }^*_{M,M4}&= \left( \frac{\lambda ^2\bar{t}^2 - 1}{2(\lambda s^2 - \bar{t})} -\frac{\lambda }{2} \right. \nonumber \\&\quad \left. +\frac{\sqrt{\lambda ^4 (s^2 + \bar{t}^2)^2 - 2\lambda ^3\bar{t}(5s^2 + \bar{t}) + \lambda ^2(6s^2 + 7\bar{t}^2) - 6\lambda \bar{t} + 1}}{2(\lambda s^2 - \bar{t})}\right) ^{-1} \end{aligned}$$
(61)

Model with selective interaction (Model 5) Since \(\theta ^*\) is the only unknown parameter in Model 5, we only use the first moment in (21). The estimate is denoted by \(\hat{\theta }^*_{M,T}\). Because the distribution of \(X\) is known, we can also use observations of \(X\) for the estimation. The moment equation \(\mathbb {E}(X) = \bar{x}\) is employed, where \(\bar{x}\) is the average of observations \(x_1,\ldots ,x_n\) of \(X\). This moment estimator is denoted by \(\hat{\theta }^*_{M,X}\). Another alternative is to use observations \(t_i\) and \(x_i\) together. Let the time from the stimulus onset to the second observed spike after it be a random variable denoted by \(Z\), i.e. \(Z = T + X\). The mean of \(Z\) is \(\mathbb {E}(Z) = \mathbb {E}(T) + \mathbb {E}(X)\) and the moment equation is thus \(\mathbb {E}(T) + \mathbb {E}(X) = \bar{z}\), where \(\bar{z}\) is the average of \(t_i+x_i,\,i=1,\ldots ,n\). The corresponding moment estimator is denoted by \(\hat{\theta }^*_{M,Z}\).

Solving \(\mathbb {E}(X)=\bar{x}\), with \(\mathbb {E}(X)\) given by (54), yields

$$\begin{aligned} \hat{\theta }^*_{M,X} = \frac{1}{\lambda } \left[ \left( \frac{\lambda }{k}\bar{x} \right) ^{1/k} - 1 \right] ^{-1}. \end{aligned}$$
(62)
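The estimator (62) inverts (54) exactly: feeding it the exact mean \(\mathbb {E}(X)\) in place of \(\bar{x}\) returns \(\theta ^*\). A minimal sketch with illustrative values:

```python
lam, theta_star, k = 2.0, 0.5, 3  # illustrative values

# exact mean of X from Eq. (54), used in place of the sample average x-bar
xbar = (k / lam) * ((lam * theta_star + 1) / (lam * theta_star)) ** k

# moment estimator, Eq. (62)
theta_hat = (1 / lam) / ((lam * xbar / k) ** (1 / k) - 1)
print(theta_hat)  # recovers 0.5
```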

No closed forms exist for \(\hat{\theta }^*_{M,T}\) and \(\hat{\theta }^*_{M,Z}\); they are found as numerical solutions to the equations

$$\begin{aligned}&\left( \frac{k}{\lambda } - \hat{\theta }^*_{M,T} \right) \left( \frac{\lambda \hat{\theta }^*_{M,T} + 1}{\lambda \hat{\theta }^*_{M,T}} \right) ^k + \frac{k+1}{2\lambda } + \hat{\theta }^*_{M,T} = \bar{t} \\&\left( \frac{2k}{\lambda } - \hat{\theta }^*_{M,Z} \right) \left( \frac{\lambda \hat{\theta }^*_{M,Z} + 1}{\lambda \hat{\theta }^*_{M,Z}} \right) ^k + \frac{k+1}{2\lambda } + \hat{\theta }^*_{M,Z} = \bar{z}. \end{aligned}$$

If excitatory pulses form a Poisson process, i.e. \(k=1\), all three moment equations have analytical solutions, namely

$$\begin{aligned} \hat{\theta }^*_{M,T}&= \left[ \lambda \left( \lambda \bar{t} - 1 \right) \right] ^{-1} \\ \hat{\theta }^*_{M,X}&= \left[ \lambda \left( \lambda \bar{x} - 1 \right) \right] ^{-1} \\ \hat{\theta }^*_{M,Z}&= \left[ \lambda \left( \frac{1}{2}\lambda \bar{z}-1\right) \right] ^{-1}. \end{aligned}$$

Appendix 4: Log-likelihood functions

The pdfs of \(T_\mathrm{A}\) and \(T_\mathrm{B}\) are given in (4) and (5), and the log-likelihood functions of \(\theta ^*\) are

$$\begin{aligned} l^A(\theta ^* )&= n_{\theta ^*}\ln \lambda -\lambda \Big ( (n-n_{\theta ^*})\theta ^* + \sum _{t_i\le \theta ^*} t_i \Big )\nonumber \\&\quad + \sum _{t_i > \theta ^*} \ln f_U(t_i-\theta ^*) \end{aligned}$$
(63)

and

$$\begin{aligned} l^B(\theta ^* ) \!=\! n\ln \lambda \!-\! \lambda \sum _{t_i\le \theta ^*} t_i + \sum _{t_i>\theta ^*} \ln \int \limits _{\theta ^*}^{t_i} \mathrm{e}^{-\lambda y} f_U(t_i-y)\,\mathrm{d}y, \end{aligned}$$
(64)

where \(n_{\theta ^*}\) is the number of observations \(t_i \le \theta ^*\).

For Model 1, 2 and 3, the log-likelihood functions are

$$\begin{aligned} l^{M1}(\theta ^* ,\kappa )&= \sum _{t_i\le \theta ^*} (\ln \lambda - \lambda t_i)\nonumber \\&\quad + \sum _{t_i> \theta ^*} \left[ \ln \kappa - \lambda \theta ^* - \kappa (t_i-\theta ^*)\right] \end{aligned}$$
(65)
$$\begin{aligned} l^{M2}(\theta ^* ,k)&= n \ln \lambda - \lambda \sum _{i=1}^n t_i \nonumber \\&\quad + \sum _{t_i>\theta ^*} \left( k\ln [\lambda (t_i-\theta ^*)]-\ln \Gamma (k+1) \right) \end{aligned}$$
(66)
$$\begin{aligned} l^{M3}(\theta ^*,\kappa )&= -\frac{\lambda \theta ^*+1}{\theta ^*}\sum _{i=1}^n t_i \nonumber \\&+ \sum _{i=1}^n \ln \left[ \lambda + \frac{\kappa }{(\lambda -\kappa )\theta ^*+1}\left( \mathrm{e}^{(\lambda -\kappa +1/\theta ^*)t_i} - 1 \right) \right] . \end{aligned}$$
(67)
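Maximizing (65) illustrates the general procedure: for fixed \(\theta ^*\) the maximizing \(\kappa \) is available in closed form, \(\hat{\kappa }(\theta ^*)=m/\sum _{t_i>\theta ^*}(t_i-\theta ^*)\) with \(m\) the number of observations above \(\theta ^*\), so \(\theta ^*\) can be found by a one-dimensional search over the profiled likelihood. A sketch on simulated Model 1 data (parameter values illustrative; \(\lambda \) treated as known, as in the moment equations above):

```python
import math, random

random.seed(6)
lam, kappa_true, theta_true = 2.0, 5.0, 0.5  # illustrative values; lambda known
n = 10_000

t = []
for _ in range(n):
    w = random.expovariate(lam)
    t.append(w if w <= theta_true else theta_true + random.expovariate(kappa_true))

def profile_loglik(theta):
    # Eq. (65) with kappa profiled out via its closed-form maximiser
    below = [ti for ti in t if ti <= theta]
    excess = [ti - theta for ti in t if ti > theta]
    m = len(excess)
    if m == 0:
        return -math.inf
    kappa_hat = m / sum(excess)
    return (len(below) * math.log(lam) - lam * sum(below)
            + m * (math.log(kappa_hat) - lam * theta) - m)

grid = [0.01 * i for i in range(1, 151)]  # candidate theta* values in (0, 1.5]
theta_hat = max(grid, key=profile_loglik)
print(theta_hat)
```

The grid maximizer lands close to the true \(\theta ^*=0.5\); in practice a finer search or a standard optimizer would be used around the coarse maximum.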

The log-likelihood function for Model 5 when \(k=1\) is

$$\begin{aligned} l^{M5}(\theta ^*)&= -n\ln 2 + n\ln \lambda - n\ln \theta ^*\nonumber \\&\quad -\, n\ln \left[ 4\lambda +\frac{1}{\theta ^*}\right] - \frac{2\lambda \theta ^*+1}{2\theta ^*}\sum _{i=1}^n t_i \nonumber \\&\quad + \sum _{i=1}^n\ln \left[ \left( 1+4\lambda \theta ^* +\sqrt{1+4\lambda \theta ^*}\right) \mathrm{e}^{-\frac{t_i}{2\theta ^*} \sqrt{1+4\lambda \theta ^*}} \right. \nonumber \\&\quad \left. + \left( 1+4\lambda \theta ^* -\sqrt{1+4\lambda \theta ^*}\right) \mathrm{e}^{\frac{t_i}{2\theta ^*} \sqrt{1+4\lambda \theta ^*}}\right] . \end{aligned}$$
(68)


Cite this article

Levakova, M., Ditlevsen, S. & Lansky, P. Estimating latency from inhibitory input. Biol Cybern 108, 475–493 (2014). https://doi.org/10.1007/s00422-014-0614-6
