Spike-time reliability of layered neural oscillator networks

Journal of Computational Neuroscience

Abstract

We study the reliability of layered networks of coupled “type I” neural oscillators in response to fluctuating input signals. Reliability means that a signal elicits essentially identical responses upon repeated presentations, regardless of the network’s initial condition. We study reliability on two distinct scales: neuronal reliability, which concerns the repeatability of spike times of individual neurons embedded within a network, and pooled-response reliability, which concerns the repeatability of total synaptic outputs from a subpopulation of the neurons in a network. We find that neuronal reliability depends strongly both on the overall architecture of a network, such as whether it is arranged into one or two layers, and on the strengths of the synaptic connections. Specifically, for the type of single-neuron dynamics and coupling considered, single-layer networks are found to be very reliable, while two-layer networks lose their reliability with the introduction of even a small amount of feedback. As expected, pooled responses for large enough populations become more reliable, even when individual neurons are not. We also study the effects of noise on reliability, and find that noise that affects all neurons similarly has much greater impact on reliability than noise that affects each neuron differently. Qualitative explanations are proposed for the phenomena observed.

Notes

  1. “A priori” here refers to one’s best guess based on system parameters alone, without further knowledge of the dynamics. For example, as the system evolves, each neuron will, in time, acquire a mean frequency, which is likely to differ from its intrinsic frequency. But before studying the dynamics of the system—that is, a priori—there is no information on how the two will differ. So we take its a priori value to be equal to its intrinsic frequency.

  2. The d sync values in Table 1 are computed over 1.6×10⁴ time units. This accounts for the discrepancy with the median d sync value for ρ = 0.1 in Fig. 4, which was computed over 200 time units.

  3. Inspection of raster plots at N = 1000, for example, clearly shows substantial trial-to-trial variability, with an interesting effect: the variability of single neurons tends to wax and wane over time, and this waxing and waning is differently timed for different neurons.

  4. As explained in Appendix A, the distribution of states at t units of time after the initial presentation of the stimulus is given by one of the sample measures μ ω , which are singular and quite complicated. Estimating (6) will require knowledge of these measures.

  5. In this paper, we use the Itô interpretation throughout.

References

  • Arnold, L. (2003). Random dynamical systems. New York: Springer.

  • Averbeck, B., Latham, P. E., & Pouget, A. (2006). Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5), 358–366.

  • Aviel, Y., Mehring, C., Abeles, M., & Horn, D. (2003). On embedding synfire chains in a balanced network. Neural Computation, 15, 1321–1340.

  • Bair, W., Zohary, E., & Newsome, W. T. (2001). Correlated firing in macaque visual area MT: Time scales and relationship to behavior. Journal of Neuroscience, 21(5), 1676–1697.

  • Banerjee, A. (2006). On the sensitive dependence on initial conditions of the dynamics of networks of spiking neurons. Journal of Computational Neuroscience, 20, 321–348.

  • Banerjee, A., Seriès, P., & Pouget, A. (2008). Dynamical constraints on using precise spike timing to compute in recurrent cortical networks. Neural Computation, 20, 974–993.

  • Baxendale, P. H. (1992). Stability and equilibrium properties of stochastic flows of diffeomorphisms. In Progr. Probab. 27. Boston: Birkhauser.

  • Bazhenov, M., Rulkov, N., Fellous, J., & Timofeev, I. (2005). Role of network dynamics in shaping spike timing reliability. Physical Review E, 72, 041903.

  • Berry, M., Warland, D., & Meister, M. (1997). The structure and precision of retinal spike trains. PNAS, 94, 5411–5416.

  • Bertschinger, N., & Natschläger, T. (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16, 1413–1436.

  • Börgers, C., Epstein, C., & Kopell, N. (2005). Background gamma rhythmicity and attention in cortical local circuits: A computational study. PNAS, 102, 7002–7007.

  • Bruno, R. M., & Sakmann, B. (2006). Cortex is driven by weak but synchronously active thalamocortical synapses. Science, 312, 1622–1627.

  • Bryant, H. L., & Segundo, J. P. (1976). Spike initiation by transmembrane current: A white-noise analysis. Journal of Physiology, 260, 279–314.

  • de Ruyter van Steveninck, R., Lewen, G., Strong, S., Koberle, R., & Bialek, W. (1997). Reproducibility and variability in neural spike trains. Science, 275, 1805–1808.

  • Doiron, B., Chacron, M. J., Maler, L., Longtin, A., & Bastian, J. (2003). Inhibitory feedback required for network burst responses to communication but not to prey stimuli. Nature, 421, 539–543.

  • Douglas, E., & Martin, K. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419–451.

  • Eckmann, J.-P., & Ruelle, D. (1985). Ergodic theory of chaos and strange attractors. Reviews of Modern Physics, 57, 617–656.

  • Ermentrout, G. B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Computation, 8, 979–1001.

  • Ermentrout, G. B., & Kopell, N. (1984). Frequency plateaus in a chain of weakly coupled oscillators, I. SIAM Journal on Mathematical Analysis, 15, 215–237.

  • Faisal, A. A., Selen, L. P. J., & Wolpert, D. M. (2008). Noise in the nervous system. Nature Reviews Neuroscience, 9, 292–303.

  • Hodgkin, A. (1948). The local electric changes associated with repetitive action in a non-medullated axon. Journal of Physiology, 117, 500–544.

  • Hunter, J., Milton, J., Thomas, P., & Cowan, J. (1998). Resonance effect for neural spike time reliability. Journal of Neurophysiology, 80, 1427–1438.

  • Johnston, D., & Wu, S. (1997). Foundations of cellular neurophysiology. Cambridge: MIT.

  • Kandel, E., Schwartz, J., & Jessell, T. (1991). Principles of neural science, 4th edn. New York: McGraw-Hill.

  • Kara, P., Reinagel, P., & Reid, R. C. (2000). Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron, 27, 636–646.

  • Kifer, Y. (1986). Ergodic theory of random transformations. Boston: Birkhauser.

  • Koch, C. (1999). Biophysics of computation: Information processing in single neurons. Oxford: Oxford University Press.

  • Kunita, H. (1990). Stochastic flows and stochastic differential equations. Cambridge studies in advanced mathematics (Vol. 24). Cambridge: Cambridge University Press.

  • Lampl, I., Reichova, I., & Ferster, D. S. (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.

  • Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. (2000). Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology, 83, 808–827.

  • Le Jan, Y. (1987). Équilibre statistique pour les produits de difféomorphismes aléatoires indépendants. Annales de l’Institut Henri Poincaré Probabilités et Statistiques, 23(1), 111–120.

  • Ledrappier, F., & Young, L.-S. (1988). Entropy formula for random transformations. Probability Theory and Related Fields, 80, 217–240.

  • Lin, K. K., Shea-Brown, E., & Young, L.-S. (2009a). Reliability of coupled oscillators. Journal of Nonlinear Science (in press).

  • Lin, K. K., Shea-Brown, E., & Young, L.-S. (2009b). Reliability of layered neural oscillator networks. Communications in Mathematical Sciences (in press).

  • Lu, T., Liang, L., & Wang, X. (2001). Temporal and rate representations of time-varying signals in the auditory cortex of awake primates. Nature Neuroscience, 4, 1131–1138.

  • Maei, H. R., & Latham, P. E. (2005). Can randomly connected networks exhibit long memories? Preprint, Gatsby Computational Neuroscience Unit.

  • Mainen, Z., & Sejnowski, T. (1995). Reliability of spike timing in neocortical neurons. Science, 268, 1503–1506.

  • Mazurek, M., & Shadlen, M. (2002). Limits to the temporal fidelity of cortical spike rate signals. Nature Neuroscience, 5, 463–471.

  • Murphy, G., & Rieke, F. (2007). Network variability limits stimulus-evoked spike timing precision in retinal ganglion cells. Neuron, 52, 511–524.

  • Pakdaman, K., & Mestivier, D. (2001). External noise synchronizes forced oscillators. Physical Review E, 64, 030901–030904.

  • Perkel, D., & Bullock, T. (1968). Neural coding. Neurosciences Research Program Bulletin, 6, 221–344.

  • Pikovsky, A., Rosenblum, M., & Kurths, J. (2001). Synchronization: A universal concept in nonlinear sciences. Cambridge: Cambridge University Press.

  • Reyes, A. (2003). Synchrony-dependent propagation of firing rate in iteratively constructed networks in vitro. Nature Neuroscience, 6, 593–599.

  • Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1996). Spikes: Exploring the neural code. Cambridge: MIT.

  • Rinzel, J., & Ermentrout, G. B. (1998). Analysis of neural excitability and oscillations. In C. Koch, & I. Segev (Eds.), Methods in neuronal modeling (pp. 251–291). Cambridge: MIT.

  • Ritt, J. (2003). Evaluation of entrainment of a nonlinear neural oscillator to white noise. Physical Review E, 68, 041915–041921.

  • Seriès, P., Latham, P. E., & Pouget, A. (2004). Tuning curve sharpening for orientation selectivity: Coding efficiency and the impact of correlations. Nature Neuroscience, 7, 1129–1135.

  • Shadlen, M. N., & Newsome, W. T. (1998). The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. Journal of Neuroscience, 18, 3870–3896.

  • Shepherd, G. M. (2004). The synaptic organization of the brain. Oxford: Oxford University Press.

  • Teramae, J., & Fukai, T. (2007). Reliability of temporal coding on pulse-coupled networks of oscillators. arXiv:0708.0862v1 [nlin.AO].

  • Teramae, J., & Tanaka, D. (2004). Robustness of the noise-induced phase synchronization in a general class of limit cycle oscillators. Physical Review Letters, 93, 204103–204106.

  • Terman, D., Rubin, J., Yew, A., & Wilson, C. J. (2002). Activity patterns in a model for the subthalamopallidal network of the basal ganglia. Journal of Neuroscience, 22, 2963–2976.

  • van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274, 1724–1726.

  • van Vreeswijk, C., & Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.

  • Vogels, T., & Abbott, L. (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25, 10786–10795.

  • Winfree, A. (2001). The geometry of biological time. New York: Springer.

  • Zhou, C., & Kurths, J. (2003). Noise-induced synchronization and coherence resonance of a Hodgkin-Huxley model of thermally sensitive neurons. Chaos, 13, 401–409.

  • Zohary, E., Shadlen, M. N., & Newsome, W. T. (1994). Correlated neuronal discharge rate and its implication for psychophysical performance. Nature, 370, 140–143.

Acknowledgements

We thank David Cai, Anne-Marie Oswald, Alex Reyes, and John Rinzel for their helpful discussions of this material. We acknowledge a Career Award at the Scientific Interface from the Burroughs-Wellcome Fund (E.S.-B.), and a grant from the NSF (L.-S.Y.).

Author information

Corresponding author

Correspondence to Eric Shea-Brown.

Additional information

Action Editor: David Terman

Appendices

Appendix A Review of random dynamical systems theory

In this appendix, we review some relevant mathematical theory that justifies the use of Lyapunov exponents in determining the reliability of a system. “Reliability” here refers exclusively to “neuronal reliability”. As these results are very general and can potentially be used elsewhere, we will present them in a context considerably more general than the system defined by Eq. (1).

Consider a stochastic differential equation (SDE) of the form

$$ dx_t = a(x_t)\, dt + \sum\limits_{i=1}^k b_i(x_t)\circ dW^i_t \tag{7} $$

Here x t  ∈ M, where M is a compact Riemannian manifold of any dimension d ≥ 1, a(·) and the b i (·) are smooth functions on M, and \((W^1_t, \cdots, W^k_t)\) is a k-dimensional standard Brownian motion. In general, the equation is assumed to be of Stratonovich type, but when \(M\,=\,{\mathbb T}^N \equiv {\mathbb{S}}^1 \times {\mathbb{S}}^1 \times \cdots \times {\mathbb{S}}^1\), we have the choice between the Itô and Stratonovich integrals (see footnote 5). Equation (1) is a special case of this setting with \(M={\mathbb T}^N\).

A.1 Stochastic flows associated with SDEs (see e.g. Kunita 1990; Baxendale 1992)

In general, one fixes an initial x 0, and looks at the distribution of x t for t > 0. Under fairly general conditions, these distributions converge to the unique stationary measure μ, the density of which is given by the Fokker-Planck equation. For our purposes, however, this is not the most relevant viewpoint. Since reliability is about a system’s reaction to a single realization of Brownian motion at a time, and concerns the simultaneous evolution of all or large ensembles of initial conditions, of relevance to us are not the distributions of x t but flow-maps of the form \(F_{t_1,t_2;\omega}\). Here t 1 < t 2 are two points in time, ω is a sample Brownian path, and \(F_{t_1,t_2;\omega}(x_{t_1})=x_{t_2}\) where x t is the solution of (7) corresponding to ω. A well known theorem states that such stochastic flows of diffeomorphisms are well defined if the functions a(x) and b(x) in Eq. (7) are sufficiently smooth (see Kunita 1990). More precisely, the maps \(F_{t_1,t_2;\omega}\) are well defined for almost every ω, and they are invertible, smooth transformations with smooth inverses. Moreover, \(F_{t_1,t_2;\omega}\) and \(F_{t_3,t_4;\omega}\) are independent for t 1 < t 2 < t 3 < t 4. These results allow us to treat the evolution of systems described by (7) as compositions of random, i.i.d., smooth maps.

Since reliability questions involve one ω at a time, the stationary measure μ, which gives the steady-state distribution averaged over all ω, is not the object of direct interest. Of relevance are the sample measures {μ ω }, which are the conditional measures of μ given the past. More precisely, we think of ω as defined for all t ∈ ( − ∞ , ∞ ) and not just for t > 0. Then μ ω describes what one sees at t = 0 given that the system has experienced the input defined by ω for all t < 0. Two useful facts about these sample measures are

  (a) \((F_{-t,0;\omega})_*\,\mu \to \mu_\omega\) as t → ∞, where \((F_{-t,0;\omega})_*\,\mu\) is the measure obtained by transporting μ forward by \(F_{-t,0;\omega}\), and

  (b) the family \(\{\mu_\omega\}\) is invariant in the sense that \((F_{0,t;\omega})_*(\mu_\omega) = \mu_{\sigma_t(\omega)}\), where \(\sigma_t(\omega)\) is the time-shift of the sample path ω by t.

Thus in the context of a reliability study, if our initial distribution is given by a probability density ρ and we apply the stimulus corresponding to ω, then the distribution at time t is \((F_{0,t;\omega})_*\,\rho\). For t sufficiently large, one expects in most situations that \((F_{0,t;\omega})_*\,\rho\) is very close to \((F_{0,t;\omega})_*\,\mu\), which by (a) above is essentially given by \(\mu_{\sigma_t(\omega)}\). The time-shift by t of ω is necessary because, by definition, \(\mu_\omega\) is the conditional distribution of μ at time 0.
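To make the flow-map picture concrete, here is a minimal numerical sketch (ours, not part of the original paper): a single phase oscillator of the form (7) on the circle, with an assumed sinusoidal phase response function standing in for the network model of Eq. (1). An ensemble of initial conditions, playing the role of the density ρ, is driven by one frozen Brownian path, so the ensemble at time t approximates \((F_{0,t;\omega})_*\,\rho\) for that particular ω; collapse of the ensemble spread corresponds to reliability, persistent spread to unreliability.

```python
import numpy as np

# Minimal sketch: one phase oscillator on the circle,
#   dtheta = omega*dt + eps*Z(theta)*dW   (Ito, Euler scheme),
# with an ASSUMED phase response function Z; this is a stand-in for Eq. (7),
# not the network model of Eq. (1).
rng = np.random.default_rng(0)

omega, eps = 1.0, 2.5                                   # illustrative parameters
Z = lambda th: (1.0 - np.cos(2.0 * np.pi * th)) / (2.0 * np.pi)

dt, T = 1e-3, 50.0
nsteps = int(T / dt)

theta = rng.random(500)                                 # ensemble of initial conditions (rho)
dW = rng.normal(0.0, np.sqrt(dt), nsteps)               # ONE frozen Brownian path, shared by all

for k in range(nsteps):
    theta = (theta + omega * dt + eps * Z(theta) * dW[k]) % 1.0

# Circular spread of the ensemble: near 0 if all initial conditions have collapsed
# onto a single random trajectory (reliable), order 1 if they remain spread out.
spread = 1.0 - np.abs(np.mean(np.exp(2j * np.pi * theta)))
print(f"ensemble spread at t = {T}: {spread:.2e}")
```

For a single driven oscillator of this kind the spread typically collapses, consistent with the reliability of single driven oscillators discussed in the paper; changing the seed changes the final common trajectory but not the collapse.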

A.2 Lyapunov exponents of random dynamical systems (see e.g. Arnold 2003)

The fact that the evolution of systems described by (7) can be represented as compositions of random, i.i.d., smooth maps allows us to tap into a large part of dynamical systems theory, namely the theory of random dynamical systems (RDS). Many of the techniques for analyzing smooth deterministic systems have been extended to this random setting, including the notion of Lyapunov exponents. For the stochastic flows above, the largest Lyapunov exponent is defined to be

$$ {\lambda_{\rm max}}(x, \omega) = \lim_{t \to \infty} \frac{1}{t} \log \|DF_{0,t;\omega}(x)\|\ . $$

These numbers are known to be defined for μ-a.e. x ∈ M and a.e. ω. Moreover, they are nonrandom, i.e., they do not depend on ω, and when μ is ergodic, λ max does not depend on x either, i.e., λ max is equal to a single (fixed) number for almost every initial condition in the phase space and for almost every sample path.

Numerical calculation of Lyapunov exponents

Lyapunov exponents can be computed numerically by solving the variational equations associated with Eq. (7); the largest Lyapunov exponent λ max is given by the logarithmic growth rate of a typical tangent vector. This is what we have done in this paper, using the Euler method for SDEs to solve the variational equations.
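A minimal sketch of this computation (under assumed single-oscillator dynamics, not the authors' code): for a scalar phase equation dθ = ω dt + ε Z(θ) dW in the Itô sense, the variational equation for a tangent vector v is dv = ε Z′(θ) v dW, and λ max is estimated as the time-averaged logarithmic growth rate of v. Both equations are advanced together with the Euler method, renormalizing v after each step to avoid overflow or underflow.

```python
import numpy as np

# Sketch of the variational-equation method for a scalar phase oscillator
#   dtheta = omega*dt + eps*Z(theta)*dW   (Ito),
# an illustrative stand-in for Eq. (7).  The tangent vector obeys the
# linearized equation dv = eps*Z'(theta)*v*dW, and lambda_max is the average
# logarithmic growth rate of v along a typical trajectory.
rng = np.random.default_rng(1)

omega, eps = 1.0, 2.5                                   # illustrative parameters
Z  = lambda th: (1.0 - np.cos(2.0 * np.pi * th)) / (2.0 * np.pi)   # assumed PRC
dZ = lambda th: np.sin(2.0 * np.pi * th)                            # its derivative Z'

dt, T = 1e-3, 2000.0
theta, v, log_growth = rng.random(), 1.0, 0.0

for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    dv = eps * dZ(theta) * v * dW                       # tangent (variational) step at current theta
    theta = (theta + omega * dt + eps * Z(theta) * dW) % 1.0
    v += dv
    log_growth += np.log(abs(v))                        # |v| was renormalized to 1 before the step
    v /= abs(v)

print(f"estimated lambda_max = {log_growth / T:+.4f}")  # lambda_max < 0 indicates reliability (see A.3)
```

For a network such as Eq. (1) the same scheme applies with an N-dimensional state and an N-dimensional tangent vector, obtained by linearizing each coupled equation along the trajectory.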

A.3 Implications of the sign of λ max

The two results below are in fact valid in greater generality, but let us restrict ourselves to the SDE setting at the beginning of this subsection.

Theorem 1

Let μ be an ergodic stationary measure of the RDS defined by Eq. (7).

  (1) (Random sinks) (Le Jan 1987) If λ max < 0, then for a.e. ω, μ ω is supported on a finite set of points.

  (2) (Random strange attractors) (Ledrappier and Young 1988) If μ has a density and λ max > 0, then for a.e. ω, μ ω is a random SRB measure.

In Part (1) of the theorem, if in addition to λ max  < 0, mild conditions (on the relative motions of two points) are assumed, then almost surely μ ω is supported on a single point (Baxendale 1992). From the discussion above, μ ω being supported on a single point corresponds to the collapse of trajectories starting from almost all initial conditions to a single trajectory. In the context of Eq.(1), this is exactly what it means for the system to be neuronally reliable as explained in Section 3.1.

The conclusion of Part (2) requires clarification: In deterministic dynamical systems theory, SRB measures are natural invariant measures that describe the asymptotic dynamics of chaotic dissipative systems (in the same way that Liouville measures are the natural invariant measures for Hamiltonian systems). SRB measures are typically singular. They are concentrated on unstable manifolds, which are families of curves, surfaces etc. that wind around in a complicated way in the phase space (Eckmann and Ruelle 1985). Part (2) of Theorem 1 generalizes these ideas to random dynamical systems. Here, random (meaning ω-dependent) SRB measures live on random unstable manifolds, which are complicated families of curves, surfaces, etc. that evolve with time. In particular, in a system with random SRB measures, different initial conditions lead to very different outcomes at time t when acted on by the same stimulus; this is true for all t > 0, however large. In the context of a reliability study, therefore, it is natural to regard the distinctive geometry of random SRB measures as a signature of unreliability.

We do not claim here that, mathematically, the results in Theorem 1 apply to Eq. (1). To apply these results formally, conditions of ergodicity, existence of an invariant density, etc. have to be verified. Evidence, both analytic and numerical, points to an affirmative answer when the coupling constants a, a i , a ff , etc. are nonzero.

Appendix B A 2-D toy model of two-layer networks

We provide here more detail on how two-neuron models can be used to shed light on two-layer networks as suggested in Section 5.2. Specifically, we will explain how the shapes of the phase distributions P 12 and P 21 are predicted.

Consider a system composed of two neurons whose dynamics obey Eq. (1). For definiteness, we set ω 1 = ω 2 = 1, A ff = 2.8, and A fb = 0.8 to mimic the parameters in the two-layer networks considered in Section 5.2, with neurons 1 and 2 representing layers 1 and 2 in the two-layer system. The phase space of this system is the 2-torus, which we identify with the square [0,1]² with periodic boundary conditions; the coordinates are denoted by (θ 1, θ 2). In Fig. 12, we show a few trajectories of the undriven system, i.e., with ε = 0. Away from the edges, they are northeasterly with slope 1; near the edges, they are bent due to the coupling. We now turn on the stimulus, setting ε = 2.5 as in Section 5.2. Because only neuron 1 hears the stimulus, the stimulus perturbs trajectories only in the θ 1 direction. When the stimulus is turned on, trajectories will, for the most part, continue to go in roughly the same directions as those shown in Fig. 12, but they become “wriggly”, being driven randomly to the left and right by the white-noise stimulus.

Fig. 12 A few trajectories for a two-oscillator toy model with ε = 0. The trajectories are drawn on a “lift” of the 2-torus to the plane. The parameters are ω 1 = ω 2 = 1, A ff = 2.8, and A fb = 0.8.

We will refer to the top and bottom edges of the square (which are identified with each other) as Σ. Every time a trajectory crosses Σ, neuron 2 spikes, and the location in Σ tells us the phase of neuron 1 when this spiking occurs. We view Σ as a cross-section to the flow, and consider the induced return map Φ: Σ→Σ. In the case of two neurons with feedback, the distribution of trajectories of Φ on Σ tells us the phase distribution of neuron 1 when it receives a synaptic input from neuron 2. In our analogy with the two-layer system, this is the analog of P 21. Similarly, the distribution of returns to the left and right edges (also identified with each other; we call this Σ′) represents the phases of neuron 2 when it receives an input from neuron 1, i.e., the distribution analogous to P 12.
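For readers who want to generate pictures like Fig. 12 or estimate P 21 numerically, the following sketch simulates a hypothetical two-oscillator model. Since Eq. (1) is not restated in this appendix, the phase response function and the pulse-like coupling below (instantaneous kicks of sizes A ff and A fb delivered at spike times) are assumptions chosen only to mimic the geometry described in the text; only neuron 1 receives the white-noise stimulus.

```python
import numpy as np

# Illustrative two-oscillator toy model (ASSUMED dynamics, standing in for Eq. (1)):
# each neuron is a phase oscillator with the PRC Z below; when neuron 1 spikes
# (theta_1 crosses 1) it kicks neuron 2 by A_ff*Z(theta_2), and when neuron 2
# spikes it kicks neuron 1 by A_fb*Z(theta_1).  Only neuron 1 hears the stimulus.
rng = np.random.default_rng(2)

w1 = w2 = 1.0
A_ff, A_fb, eps = 2.8, 0.8, 2.5
Z = lambda th: (1.0 - np.cos(2.0 * np.pi * th)) / (2.0 * np.pi)   # assumed type I PRC

dt, T = 1e-3, 500.0
th1, th2 = rng.random(), rng.random()
p21_samples = []              # phase of neuron 1 at spikes of neuron 2 (crossings of Sigma)

for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))
    th1_new = th1 + w1 * dt + eps * Z(th1) * dW
    th2_new = th2 + w2 * dt
    if th1_new >= 1.0:                                  # neuron 1 spikes: feedforward kick
        th2_new += A_ff * Z(th2_new % 1.0)
    if th2_new >= 1.0:                                  # neuron 2 spikes: crossing of Sigma
        p21_samples.append(th1_new % 1.0)               # record phase of neuron 1 first ...
        th1_new += A_fb * Z(th1_new % 1.0)              # ... then apply the feedback kick
    th1, th2 = th1_new % 1.0, th2_new % 1.0

# A histogram of p21_samples approximates the phase distribution analogous to P_21.
hist, _ = np.histogram(p21_samples, bins=20, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```

Setting A fb = 0 gives the pure-feedforward case, and letting neuron 2 receive the same noise term as neuron 1 mimics the single-layer situation discussed in remark (2) at the end of this appendix, where perturbations act along the diagonal.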

To understand these distributions, let us view the return dynamics to Σ as the result of two “moves” in succession (this is not entirely accurate but will suffice for present purposes): The first is the return map for the flow with ε = 0, and the second is a “smearing”, obtained by, e.g., taking a convolution with a Gaussian, to simulate the perturbations experienced by the trajectory between returns to Σ.

The return dynamics of the undriven flow are very simple: From the geometry of the flowlines, one sees that starting from any point in Σ, there is a leftward displacement due to the fact that the upward kicks along the vertical edges are stronger than the rightward kicks along the horizontal edges (i.e., A ff > A fb). This leftward displacement is quite substantial away from θ 1 = 0, reaching a maximum at θ 1 ≈ 0.75. Because of the character of the phase response function, this displacement is very small (but strictly positive) near θ 1 ≈ 0. It is so weak there that with ε = 0, all trajectories spend most of their time near the diagonal, with only brief excursions in between. In other words, when ε = 0, the phase distributions on Σ peak sharply at θ 1 = 0.

With ε = 2.5, the “smearing” is nontrivial. Immediately, one sees that it causes the distribution to be more spread out. It is also easy to see that some concentration near θ 1 = 0 will be retained, only that the peak will be more rounded. We now explain why one should expect the peak to be shifted to the right: Suppose we start with a roughly constant distribution on an interval centered at θ 1 = 0. Since the return map is nearly the identity in this region, we may assume it does not change this distribution substantially. Next we take a convolution, which causes the distribution to have wider support. Now the part of the distribution that is pushed to the right of the original interval will get pushed back in when we apply the return map of the undriven flow again, due to the leftward displacement discussed earlier, whereas the part that is pushed to the left will be pushed further away from θ 1 = 0 by the ε = 0 dynamics. The result is an obvious asymmetry in the distribution, one that is reinforced in subsequent iterations. (The argument we have presented does not constitute a proof, but a proof is probably not out of reach.)
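This argument can be checked with a caricature of the two moves (again a sketch under assumptions of ours, not a computation from the paper): an undriven return map whose leftward displacement is tiny just above θ 1 = 0 and largest near θ 1 ≈ 0.75, followed by Gaussian smearing. Iterating a cloud of points yields a stationary histogram with more mass just to the right of θ 1 = 0 than just to the left, i.e., a rounded peak shifted to the right, as claimed.

```python
import numpy as np

# Caricature of the "two moves": (i) an ASSUMED undriven return map with a
# leftward displacement ~ 2*th^3*(1-th), tiny (but positive) just above 0 and
# largest near th = 0.75, then (ii) Gaussian "smearing" between returns.
rng = np.random.default_rng(3)

def undriven_return_map(th):
    return (th - 2.0 * th**3 * (1.0 - th)) % 1.0        # monotone circle map, illustrative shape

pts = rng.random(20000)                                 # a cloud of points on Sigma
for _ in range(200):
    pts = undriven_return_map(pts)
    pts = (pts + rng.normal(0.0, 0.05, pts.size)) % 1.0 # smearing with standard deviation 0.05

# Asymmetry of the empirical stationary distribution around theta_1 = 0:
right_of_zero = np.mean((pts >= 0.0) & (pts < 0.1))     # mass just to the right of 0
left_of_zero  = np.mean(pts >= 0.9)                     # mass just to the left of 0 (below 1)
print(f"mass in (0, 0.1): {right_of_zero:.3f}   mass in (0.9, 1): {left_of_zero:.3f}")
```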

Once we have convinced ourselves of the geometry of P 21, it is easy to see from the northeasterly direction of the flowlines that if P 21 peaks to the right of θ 1 = 0 on Σ, then P 12 must peak just below θ 2 = 0 on Σ′. This completes the explanation promised in Section 5.2.

We finish with a couple of remarks that may be illuminating:

  (1) The phenomenon described above occurs for both excitatory and inhibitory couplings: with A ff < 0 and |A fb| < |A ff|, the displacement of the return map Φ: Σ→Σ is to the right. But in the inhibitory situation, there are other forces shaping the phase distribution, making the picture there more complicated. (In case the reader wonders how to interpret our reasoning in the pure-feedforward case: statements about phase distributions on Σ remain valid, except that the spikes of neuron 2 do not affect neuron 1!)

  (2) We may also represent the single-layer system by two neurons. Here by far the biggest difference is that both neurons receive the stimulus in the same way, and that translates into perturbations that are in the direction of the diagonal. Such perturbations are not very effective in spreading out distributions, especially when the trajectories are concentrated near the diagonal. These observations provide a geometric understanding for the material in Section 4.2.

A detailed analysis of the two-neuron model with different emphasis is carried out in Lin et al. (2009a).

About this article

Cite this article

Lin, K.K., Shea-Brown, E. & Young, LS. Spike-time reliability of layered neural oscillator networks. J Comput Neurosci 27, 135–160 (2009). https://doi.org/10.1007/s10827-008-0133-3
