
A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony


Abstract

In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary differential equations. The class of integrate-and-fire networks we focus on is homogeneously structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of ‘multiple-firing-events’ involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the augmented 4-dimensional ODE system.


Notes

  1. For examples, see the MFE-like semi-synchrony in Fig. 1c of Ostojic (2014), as well as the slower rate-like fluctuations in Fig. 4 of Rangan and Young (2013a) and in Fig. 3a below.

  2. In fact, this approach is entirely general, and need not be built around MFEs. As described in Section 4.1 of Zhang et al. (2013), we could use a partitioned-ensemble-average to capture the fluctuations induced by any sharp network transition.

  3. This estimate of 90 % depends on the choice of units. A more fair estimate of the quality of the projection can be obtained by first normalizing the trajectory \(\vec {U}(t) \) so that each component has mean 0 and variance 1. Once this is done, 73 % of the variance is captured by the normalized version of \(\vec {u}_{1}\) and \(\vec {u}_{2}\), which are in this case \(\vec {u}_{1}=\left \{ 0.45,0.45,0.20,0.52\right \} \) and \(\vec {u}_{2}=\left \{ 0.28,-0.49,0.79,0.09\right \} \).

References

  • Abbott, LF, & van Vreeswijk, CA (1993). Asynchronous states in networks of pulse-coupled neurons. Physical Review E, 48, 1483–1488.

  • Anderson, J, Lampl, I, Reichova, I, Carandini, M, Ferster, D (2000). Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neuroscience, 3(6), 617–621.

  • Bak, P, Tang, C, Wiesenfeld, K (1987). Self-organized criticality: an explanation of 1/f noise. Physical Review Letters, 59(4), 381–384.

  • Battaglia, D, & Hansel, D (2011). Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex. PLoS Computational Biology, 7.

  • Bauer, HU, & Pawelzik, K (1993). Alternating oscillatory and stochastic dynamics in a model for a neuronal assembly. Physica D, 69, 380–393.

  • Beggs, JM, & Plenz, D (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167–11177.

  • Borgers, C, & Kopell, N (2003). Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity. Neural Computation, 15, 509–538.

  • Brunel, N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.

  • Brunel, N, & Hakim, V (1999). Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11, 1621–1671.

  • Buice, MA, & Chow, CC (2007). Correlations, fluctuations, and stability of a finite-size network of coupled oscillators. Physical Review E, 76, 031118.

  • Cai, D, Tao, L, Shelley, M, McLaughlin, D (2004). An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 101(20), 7757–7762.

  • Cardanobile, S, & Rotter, S (2010). Multiplicatively interacting point processes and applications to neural modeling. Journal of Computational Neuroscience, 28, 267–284.

  • Csicsvari, J, Hirase, H, Mamiya, A, Buzsaki, G (2000). Ensemble patterns of hippocampal CA3-CA1 neurons during sharp wave-associated population events. Neuron, 28, 585–594.

  • Dehghani, N, Hatsopoulos, NG, Haga, ZD, Parker, RA, Greger, B, Halgren, E, Cash, SS, Destexhe, A (2012). Avalanche analysis from multi-electrode ensemble recordings in cat, monkey and human cerebral cortex during wakefulness and sleep. arXiv:1203.0738v4 [q-bio.NC].

  • DeVille, L, & Zheng, Y (2014). Synchrony and periodicity in excitable neural networks with multiple subpopulations. SIAM Journal on Applied Dynamical Systems, accepted.

  • Gerstner, W, & van Hemmen, JL (1993). Coherence and incoherence in a globally coupled ensemble of pulse-emitting units. Physical Review Letters, 71, 312.

  • Grill-Spector, K, & Weiner, KS (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15, 536–548.

  • Hahn, G, Petermann, T, Havenith, MN, Yu, S, Singer, W, Plenz, D, Nikolic, D (2010). Neuronal avalanches in spontaneous activity in vivo. Journal of Neurophysiology, 104, 3313–3322.

  • Haldeman, C, & Beggs, JM (2005). Critical branching captures activity in living neural networks and maximizes the number of metastable states. Physical Review Letters, 94, 058101.

  • Hatsopoulos, NG, Ojakangas, CL, Paninski, L, Donoghue, JP (1998). Information about movement direction obtained from synchronous activity of motor cortical neurons. Proceedings of the National Academy of Sciences, 95, 15706–15711.

  • Helias, M, Deger, M, Rotter, S, Diesmann, M (2010). Instantaneous non-linear processing by pulse-coupled threshold units. PLoS Computational Biology. doi:10.1371/journal.pcbi.1000929.

  • Henrie, JA, & Shapley, R (2005). LFP power spectra in V1 cortex: the graded effect of stimulus contrast. Journal of Neurophysiology, 94, 479–490.

  • Herz, AVM, & Hopfield, JJ (1995). Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements. Physical Review Letters, 75(6), 1222–1225.

  • Hu, Y, Trousdale, J, Josic, K, Shea-Brown, E (2013). Motif statistics and spike correlations in neuronal networks. Journal of Statistical Mechanics, P03012, 1–51.

  • Hubel, D (1995). Eye, Brain, and Vision. W. H. Freeman. ISBN-10: 0716760096.

  • Jahnke, S, Memmesheimer, RM, Timme, M (2009). How chaotic is the balanced state? Frontiers in Computational Neuroscience. doi:10.3389/neuro.10.013.2009.

  • Knight, B (1972). The relationship between the firing rate of a single neuron and the level of activity in a population of neurons. Journal of General Physiology, 59, 734.

  • Kohn, A, & Smith, MA (2005). Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience, 25, 3661–3673.

  • Lampl, I, Reichova, I, Ferster, D (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.

  • Ledoux, E, & Brunel, N (2011). Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Frontiers in Computational Neuroscience, 5(25), 1–17.

  • Leinekugel, X, Khazipov, R, Cannon, R, Hirase, H, Ben-Ari, Y, Buzsaki, G (2002). Correlated bursts of activity in the neonatal hippocampus in vivo. Science, 296, 2049–2052.

  • Litwin-Kumar, A, & Doiron, B (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience. doi:10.1038/nn.3220.

  • Mark, S, & Tsodyks, M (2012). Population spikes in cortical networks during different functional states. Frontiers in Computational Neuroscience. doi:10.3389/fncom.2012.00043.

  • Mazzoni, A, Broccard, FD, Garcia-Perez, E, Bonifazi, P, Ruaro, ME, Torre, V (2007). On the dynamics of the spontaneous activity in neuronal networks. PLoS One, 2(5), e439.

  • Mirollo, RE, & Strogatz, SH (1990). Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics, 50(6), 1645–1662.

  • Murphy, B, & Miller, K (2009). Balanced amplification: a new mechanism of selective amplification of neural activity patterns. Neuron, 61, 635–648.

  • Murthy, VN, & Fetz, EE (1992). Coherent 25- to 35-Hz oscillations in the sensorimotor cortex of awake behaving monkeys. Proceedings of the National Academy of Sciences of the United States of America, 89, 5670–5674.

  • Newhall, K, Kovacic, G, Kramer, P, Zhou, D, Rangan, AV, Cai, D (2010). Dynamics of current-based, Poisson-driven, integrate-and-fire neuronal networks. Communications in Mathematical Sciences, 8(2), 541–600.

  • Nykamp, DQ, & Tranchina, D (2000). Fast neural network simulations with population density methods. Neurocomputing, 32, 487–492.

  • Omurtag, A, Kaplan, E, Knight, B, Sirovich, L (2000). A population approach to cortical dynamics with an application to orientation tuning. Network: Computation in Neural Systems, 11(4), 247–260.

  • Ostojic, S, & Brunel, N (2011). From spiking neuron models to linear-nonlinear models. PLoS Computational Biology, 7(1), e1001056.

  • Ostojic, S (2014). Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience, 17(4). doi:10.1038/nn.3658.

  • Petermann, T, Thiagarajan, TC, Lebedev, MA, Nicolelis, MAL, Chialvo, DR, Plenz, D (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences, 106(37), 15921–15926.

  • Plenz, D, Stewart, CV, Shew, W, Yang, H, Klaus, A, Bellay, T (2011). Multi-electrode array recordings of neuronal avalanches in organotypic cultures. Journal of Visualized Experiments, 54, 2949.

  • Plenz, D, & Thiagarajan, TC (2007). The organizing principles of neuronal avalanches: cell assemblies in the cortex? Trends in Neurosciences, 30(3), 101–110.

  • Poil, SS, Hardstone, R, Mansvelder, HD, Linkenkaer-Hansen, K (2012). Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. Journal of Neuroscience, 32, 9817–9823.

  • Rangan, AV (2009). Diagrammatic expansion of pulse-coupled network dynamics. Physical Review Letters, 102, 158101.

  • Rangan, AV, & Cai, D (2006). Maximum-entropy closures for kinetic theories of neuronal network dynamics. Physical Review Letters, 96, 178101.

  • Rangan, AV, & Young, LS (2013a). Dynamics of spiking neurons: between homogeneity and synchrony. Journal of Computational Neuroscience. doi:10.1007/s10827-012-0429-1.

  • Rangan, AV, & Young, LS (2013b). Emergent dynamics in a model of visual cortex. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0445-9.

  • Roxin, A, Brunel, N, Hansel, D, Mongillo, G, van Vreeswijk, C (2011). On the distribution of firing rates in networks of cortical neurons. Journal of Neuroscience, 31(45), 16217–16226.

  • Sakata, S, & Harris, KD (2009). Laminar structure of spontaneous and sensory-evoked population activity in auditory cortex. Neuron, 64(3), 404–418.

  • Samonds, JM, Zhou, Z, Bernard, MR, Bonds, AB (2005). Synchronous activity in cat visual cortex encodes collinear and cocircular contours. Journal of Neurophysiology, 95(4), 2602–2616.

  • Shew, WL, Yang, H, Yu, S, Roy, R, Plenz, D (2011). Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. Journal of Neuroscience, 31, 55–63.

  • Stern, EA, Kincaid, AE, Wilson, CJ (1997). Spontaneous subthreshold membrane potential fluctuations and action potential variability of rat corticostriatal and striatal neurons in vivo. Journal of Neurophysiology, 77, 1697–1715.

  • van Vreeswijk, C, & Sompolinsky, H (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1724–1726.

  • van Vreeswijk, C, & Sompolinsky, H (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10, 1321–1371.

  • Vogels, TP, & Abbott, LF (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25, 10786–10795.

  • Yoshida, T, & Katz, DB (2011). Control of pre-stimulus activity related to improved sensory coding within a discrimination task. Journal of Neuroscience, 31(11), 4101–4112.

  • Yu, S, Yang, H, Nakahara, H, Santos, GS, Nikolic, D, Plenz, D (2011). Higher-order interactions characterized in cortical activity. Journal of Neuroscience, 31, 17514–17526.

  • Zhang, JW, Newhall, K, Zhou, D, Rangan, AV (2010). Distribution of correlated spiking events in a population-based approach for integrate-and-fire networks. Journal of Computational Neuroscience (to appear).

  • Zhang, JW, Zhou, D, Cai, D, Rangan, AV (2013). A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0488-y.

  • Zhao, LQ, Beverlin, B, Netoff, T, Nykamp, DQ (2011). Synchronization from second order network connectivity statistics. Frontiers in Computational Neuroscience, 5(28).


Acknowledgments

Dr. Rangan was supported by NSF DMS-1162548.

Conflict of interest

The authors declare that they have no conflict of interest.

Author information


Corresponding author

Correspondence to A. V. Rangan.

Additional information

Action Editor: Nicolas Brunel

Appendices

Appendix A: Spike resolution

As mentioned in Section 3.1, the system of ODEs given by Eq. 2 is not actually well-posed. We need to clarify the dynamics that ensue whenever multiple neurons cross threshold simultaneously (e.g., as a consequence of an excitatory synaptic event). To ensure that our system is well-posed, we consider our system to be a specific limit of the following well-posed system:

$$\begin{array}{@{}rcl@{}} \frac{d}{dt}{V_{j}^{Q}} &=&-\frac{1}{\tau_{V}}\left( {V_{j}^{Q}}-V^{L}\right) \\[-2pt] &&+\sum\limits_{k}S^{QY}\delta \left( t-T_{j,k}^{Y}\right) +g_{j}^{Q,E}(t) +g_{j}^{Q,I}(t) , \\[-2pt] g_{j}^{Q,E} &=&\sum\limits_{j^{\prime }\text{ of type }E}\sum\limits_{k}S^{QE}\alpha_{E}\left( t-T_{j^{\prime },k}^{E}\right) , \\[-2pt] g_{j}^{Q,I} &=&\sum\limits_{j^{\prime }\text{ of type }I}\sum\limits_{k}S^{QI}\alpha_{I}\left( t-T_{j^{\prime },k}^{I}\right) , \end{array} $$
(29)

where the \(g_{j}^{Q,R}\) model synaptic currents, and

$$\alpha_{Q}(t) =\boldsymbol{1}_{[ 0,\infty )}(t) \cdot \tau_{Q}^{-1}\exp \left( -t/\tau_{Q}\right) ; $$

that is to say, the α E and α I are alpha-functions with infinitely fast rise-times and decay times τ E and τ I , respectively. For this current-based system we also assume that, after any one neuron fires, it is held at V L for a time τ ref, referred to as a refractory period.

The well-posed system of ODEs that we use in this work, alluded to by Eq. (2), is actually the system given by Eq. (29) in the limit τ I , τ E , \(\tau _{\text {{ref}}}\rightarrow 0\), with τ I ≪ τ E ≪ τ ref. This is similar to the resolution rule used in Zhang et al. (2010) and Zhang et al. (2013), as well as in the Appendix of Rangan and Young (2013a). An example of this resolution rule is described in Appendix A.1 below.

We remark that our choice of resolution rule will affect the dynamics of the network ‘within’ each MFE (see Appendix A.1 for more details). For example, if we had instead taken the limit \(\tau _{E}\ll \tau _{I}\ll \tau _{\text {{ref}}}\rightarrow 0\), then the inhibitory neurons would not participate during any MFE, and the dynamics of each MFE could be determined by examining only the excitatory population.

1.1 A.1 An example of spike-resolution

In this section we give an example of the spike-resolution associated with our IF-network (described in Section 3.1 and Appendix A ). This example includes the ‘infinitesimal-time’ evolution that occurs ‘within’ an MFE.

For the sake of exposition, consider a situation involving N=6 neurons, the first N E =3 of which are excitatory and the latter N I =3 of which are inhibitory. Assume that the N=6 neurons are connected in an all-to-all fashion, with \(S^{QQ^{\prime }}\) given by the array:

$$S^{EE}=0.20, S^{EI}=-0.25, S^{IE}=0.275, S^{II}=-0.20 $$

and let us assume that, as a result of feedforward Poisson input, the excitatory neuron indexed by j=1 and Q=E has reached and crossed threshold at time t spk. Now, at this time t spk, we freeze the macroscopic time t of the original system and create an ‘infinitesimal-time’ system which evolves according to an infinitesimal time τ. At t spk the state-variables of the original system are given by

$${V_{j}^{Q}}\left( t_{\text{spk}}^{-}\right) \text{ for }j=1,{\ldots} ,N_{Q}\text{ with }{V_{1}^{E}}\left( t_{\text{spk}}^{-}\right) \geq V^{T}. $$

This infinitesimal-time system will use the infinitesimal time τ to describe the effects of the excitatory conductances in Eq. (29), while taking the limit that τ I ≪ τ E ≪ τ V . The state-variables for this infinitesimal-time system are the 6 voltages of the neurons in the network, \({v_{j}^{Q}}(\tau ) \), each of which describes the membrane potential of the j th neuron of type Q∈{E,I}, which start out at

$${v_{j}^{Q}}(0) ={V_{j}^{Q}}(t_{\text{spk}}^{-}) \text{ respectively, with }{v_{1}^{E}}(0^{-}) \geq V^{T}, $$

as well as the excitatory conductances \(g_{j}^{Q,E}(\tau ) \), which are all initialized to 0 at τ=0. We do not explicitly track the inhibitory conductances \(g_{j}^{Q,I}(\tau )\), since we are taking the limit τ I ≪ τ E .

The dynamics of the infinitesimal-time-system are given by:

$$\begin{array}{@{}rcl@{}} \frac{d}{d\tau }{v_{j}^{Q}}(\tau) &=&g_{j}^{Q,E}(\tau) +g_{j}^{Q,I}(\tau) \\ g_{j}^{Q,E}(\tau) &=&\sum\limits_{l\text{ of type }E}S^{Q,E}\alpha^{E}\left( \tau -{T_{l}^{E}}\right) \\ g_{j}^{Q,I}(\tau) &=&\sum\limits_{l\text{ of type I}}S^{Q,I}\delta \left( \tau -{T_{l}^{I}}\right) , \end{array} $$

where α E is an alpha-function with infinitely fast rise-time and decay time τ E . For this infinitesimal-time system we fix τ E to be finite – without loss of generality we set τ E ≡1, yielding \(\alpha ^{E}(\tau ) =\exp (-\tau )\) for τ>0 and α E(τ)≡0 for τ<0. The spike-times \({T_{l}^{Q}}\) record the infinitesimal-time after t spk at which neuron l of type Q fires. Since the refractory period for this infinitesimal-time system is \(\tau _{\text {{ref}}}=\infty \), each neuron can fire only once. Moreover, as \({v_{1}^{E}}(0^{-})\) is already at threshold, we immediately set \({T_{1}^{E}}=0\), and clamp \({v_{1}^{E}}(0^{+})=V^{R}\); because \(\tau _{\text {{ref}}}=\infty \), \( {v_{1}^{E}}(\tau ) \) will not change as τ increases. In addition, because \({v_{1}^{E}}\) fires at τ=0, we update \( g_{j}^{Q,E}\left (0\right ) \) to be S QE for each other neuron j,Q. We then evolve this system until \(\tau \rightarrow \infty \).

During this evolution we resolve the trajectory to machine precision by using analytical formulae for the \({v_{j}^{E}}(\tau ) \) and \( g_{j}^{Q,E}(\tau ) \). As an example, imagine a given neuron with voltage \({v_{j}^{Q}}(\tau ) \) and conductance \(g_{j}^{Q,E}(\tau ) \) at time τ. If this neuron receives no further synaptic input, then the trajectory of that neuron will be

$$\begin{array}{@{}rcl@{}} g_{j}^{Q,E}\left( \tau^{\prime }\right) &=&g_{j}^{Q,E}(\tau) \exp \left( -\left( \tau^{\prime }-\tau \right) \right), \\ {v_{j}^{Q}}\left( \tau^{\prime }\right) &=&{v_{j}^{Q}}(\tau) +g_{j}^{Q,E}(\tau) \left[ 1-\exp \left( -\left( \tau^{\prime}-\tau \right) \right) \right] , \end{array} $$

for infinitesimal-times \(\tau ^{\prime }>\tau \). The voltage \( {v_{j}^{Q}}(\tau ) \) will evolve to a steady state of

$$\lim_{\tau^{\prime} \rightarrow \infty }{v_{j}^{Q}}(\tau^{\prime}) ={v_{j}^{Q}}(\tau) +g_{j}^{Q,E}(\tau) ; $$

if this is higher than V T, then the neuron will fire at time

$${T_{j}^{Q}}=\tau -\log \left[ \frac{{v_{j}^{Q}}(\tau) +g_{j}^{Q,E}(\tau) -V^{T}}{g_{j}^{Q,E}(\tau) } \right] . $$

Using this formula, we can compute the limiting values \(\lim _{\tau^{\prime} \rightarrow \infty }{v_{j}^{Q}}(\tau^{\prime} ) \) and putative future firing-times \({T_{j}^{Q}}\) for each neuron j,Q. We can then evolve the system up to the infinitesimal time \(\tau =\min _{j,Q}{T_{j}^{Q}}\). At this point neuron j,Q fires, and we clamp \({v_{j}^{Q}}([{T_{j}^{Q}}]^{+})\) to V R. If this firing neuron is excitatory (i.e., Q=E), then we update the excitatory conductance of each other neuron at time \({T_{j}^{E}}\):

$$\begin{array}{@{}rcl@{}} g_{j^{\prime }}^{Q^{\prime },E}\left( \left[ {T_{j}^{E}}\right]^{+}\right) &=&g_{j^{\prime }}^{Q^{\prime },E}\left( \left[ {T_{j}^{E}}\right]^{-}\right)\\ &&+S^{Q^{\prime }E}, \end{array} $$

and recalculate the limiting values and firing-times for each other neuron. If, on the other hand, the firing-neuron is inhibitory, then we update each other voltage:

$$v_{j^{\prime}}^{Q^{\prime }}\left( \left[ {T_{j}^{I}}\right]^{+}\right) =v_{j^{\prime}}^{Q^{\prime}}\left( \left[ {T_{j}^{I}}\right]^{-}\right) +S^{Q^{\prime }I}, $$

and again recalculate the limiting values and firing-times for each other neuron. In this way we can evolve the entire infinitesimal-time system to machine precision – one spike at a time – until each neuron has either fired or reached its equilibrium value.
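The procedure above is straightforward to implement. The following sketch (Python/NumPy; not part of the original paper) resolves a single MFE for the 6-neuron example of this appendix, using the closed-form conductance decay, the limiting voltage v+g, and the putative firing-time formula given above. The normalization V R = 0, V T = 1 and the initial voltages are hypothetical placeholders chosen only for illustration.

```python
import numpy as np

# Hypothetical normalization: reset V_R = 0 and threshold V_T = 1 (the text does not
# attach numerical values to V^R and V^T in this example).
V_R, V_T = 0.0, 1.0

# Coupling strengths S^{QQ'} from the 6-neuron example above, keyed as (postsynaptic, presynaptic).
S = {('E', 'E'): 0.20, ('E', 'I'): -0.25, ('I', 'E'): 0.275, ('I', 'I'): -0.20}

def resolve_mfe(v0, types, trigger):
    """Resolve one MFE in infinitesimal time tau.
    v0      : membrane potentials V_j(t_spk^-)
    types   : 'E'/'I' label of each neuron
    trigger : index of the neuron that has just crossed threshold
    Returns (voltages as tau -> infinity, indices of fired neurons in firing order)."""
    n = len(v0)
    v = np.array(v0, dtype=float)
    g = np.zeros(n)                  # excitatory conductances g_j^{Q,E}, initialized to 0
    active = np.ones(n, dtype=bool)  # True while a neuron has not yet fired (tau_ref = infinity)
    fired = []

    def deliver_spike(j):
        """Clamp neuron j to V_R and apply its synaptic effect to every still-active neuron."""
        fired.append(j)
        v[j], active[j] = V_R, False
        for k in range(n):
            if not active[k]:
                continue
            if types[j] == 'E':
                g[k] += S[(types[k], 'E')]   # excitatory spike: conductance kick, decays with tau_E
            else:
                v[k] += S[(types[k], 'I')]   # inhibitory spike: instantaneous voltage jump (tau_I << tau_E)

    deliver_spike(trigger)               # the triggering neuron fires at tau = 0

    while True:
        # Putative firing time of each active neuron, finite only if its limiting voltage
        # v + g exceeds threshold: dt = -log((v + g - V_T) / g).
        dt = np.full(n, np.inf)
        for k in range(n):
            if active[k] and g[k] > 0.0 and v[k] + g[k] > V_T:
                dt[k] = max(0.0, -np.log((v[k] + g[k] - V_T) / g[k]))
        j = int(np.argmin(dt))
        if not np.isfinite(dt[j]):
            break                        # nobody else reaches threshold as tau -> infinity
        decay = np.exp(-dt[j])           # evolve every active neuron up to the next firing time
        v[active] += g[active] * (1.0 - decay)
        g[active] *= decay
        deliver_spike(j)

    v[active] += g[active]               # survivors relax to their limiting voltages v + g
    return v, fired

# Illustrative (made-up) pre-MFE voltages: neurons 0-2 excitatory, 3-5 inhibitory, neuron 0 at threshold.
types = ['E', 'E', 'E', 'I', 'I', 'I']
v0 = [1.0, 0.93, 0.55, 0.90, 0.82, 0.40]
v_final, fired = resolve_mfe(v0, types, trigger=0)
print("firing order:", fired, " final voltages:", np.round(v_final, 3))
```

With these made-up initial voltages the sketch fires neurons 0, 1, 3 and 4 in that order (the triggering excitatory neuron, a second excitatory neuron, then two inhibitory neurons), while one excitatory and one inhibitory neuron relax below threshold, the same qualitative sequence described for Fig. 20 below.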

As an example, we illustrate in Fig. 20 a trajectory for the 6-neuron network described above. On the top row we show a 7 ms trace of activity for the 6 neurons in the network. The neurons are driven by feedforward excitatory Poisson input. The inhibitory neurons are plotted in blue and excitatory neurons 2 and 3 are plotted in red. The first excitatory neuron, which fires as a result of the feedforward input at time t spk=5 ms, is plotted in green. When neuron 1 fires we stop evolving t, and initialize the infinitesimal-time system. The trajectory of the infinitesimal-time system is shown in the bottom of this figure. At infinitesimal-time τ=0 the first excitatory neuron (green) fires, and resets to V R. When this firing event occurs, the excitatory conductances of all the other neurons are increased. As τ increases another excitatory neuron fires, then an inhibitory neuron, and then finally a second inhibitory neuron. Eventually, as \(\tau \rightarrow \infty \), the remaining 2 neurons – one excitatory and the other inhibitory – relax to their steady-state voltages. The barrage of synaptic activity at t spk=5 ms does not cause them to fire.

Fig. 20

Example of infinitesimal-time evolution for an all-to-all network of excitatory and inhibitory neurons. On the top we show the evolution of a simple network. Whenever a neuron fires we turn to an ‘infinitesimal-time’ system, as depicted in the grey inset and bottom. This infinitesimal-time system is used to resolve the cascade of neuronal firing-events (each firing-event indicated by a dashed vertical line). See Appendix for details

After we determine which neurons fire at t spk= 5 ms, as well as which neurons do not fire, we take the state-variables at \(\tau =\infty \) and return to the original system. The voltages of the original system are set to

$${V_{j}^{Q}}\left( t_{\text{spk}}^{+}\right) :={v_{j}^{Q}}\left( \infty \right) \text{,} $$

and the firing-events which occurred during the infinitesimal-time evolution are all collapsed onto the same time t spk. In this example we would say that an MFE with magnitudes L E =2 and L I =2 occurred at t=5 ms. While all these firing-events are deemed to have occurred ‘instantaneously’ (i.e., all at t spk=5 ms), the order of these firing-events can be disambiguated by appealing to the infinitesimal-time system.

Appendix B: Details of the maximum-entropy approximation

In this Appendix we describe some of the details related to the maximum-entropy approximation introduced in Section 7. As in Section 7, we will drop the superscript Q for clarity.

1.1 B.1 Implementing the constraints

The constraints G that we impose on ρ within the maximum-entropy formulation are as follows. First, we expect the given full-moments χ k to be consistent with ρ:

$$G\left( \chi^{k}\right) =\int\nolimits_{-\infty }^{V^{T}}v^{k}\rho dv-\chi^{k}=0. $$

Second, we expect that ρ itself should be continuous across each of the W j (i.e., the boundary between \(\mathcal {I}_{j-1}\) and \(\mathcal {I}_{j}\)):

$$G\left( \rho ;W_{j}\right) =\rho_{j+1}\left( W_{j}^{+}\right) -\rho_{j}\left( W_{j}^{-}\right) =0, $$

where by ρ j we mean simply ρ restricted to interval \(\mathcal { I}_{j}\). Third, we expect that J should jump by m τ V at W 1, and be continuous across the other W j :

$$\begin{array}{@{}rcl@{}} G\left( J;W_{j}\right) &=&J_{j+1}\left( W_{j}^{+}\right) -J_{j}\left( W_{j}^{-}\right)\\ &=&0\text{ for }j>1\text{, and }m\tau_{V}\text{ when }j=1 , \end{array} $$

where \(m=J\left (V^{T}\right ) \) is the flux of ρ over threshold.

The constraints \(G\left (\rho ;W_{j}\right ) \) are automatically satisfied because of our choice of ρ eq – see Eq. (13) above, and Eqs. 32, 33 below. Our remaining goal will be to express the constraints G(J;W j ) in integral form so that they will be compatible with the G(χ k). To this end we define functions g +(v) and g (v), supported in (0,2ε) and (−2ε,0) respectively, as follows:

$$\begin{array}{@{}rcl@{}} g_{+}(v) &=&\frac{1}{\varepsilon }\left( 1-\left\vert \frac{ v-\varepsilon }{\varepsilon }\right\vert \right) \text{ for }v\in \left( 0,2\varepsilon \right) \text{, and }\\ g_{-}(v)&=&\frac{1}{ \varepsilon }\left( 1-\left\vert \frac{v+\varepsilon }{\varepsilon } \right\vert \right) \text{ for }v\in \left( -2\varepsilon ,0\right), \end{array} $$

so that each g ±(v) integrates to 1, and

$$\begin{array}{@{}rcl@{}} \partial_{v}g_{+}(v) &=&\frac{1}{\varepsilon^{2}}\text{ in } \left[ 0,\varepsilon \right] \text{, and }\frac{-1}{\varepsilon^{2}} \text{ in }\left[ \varepsilon ,2\varepsilon \right] \text{, and }\\ \partial_{v}g_{-}(v) &=&\frac{1}{\varepsilon^{2}}\text{ in }\left[ -2\varepsilon ,-\varepsilon \right] \text{, and }\frac{-1}{\varepsilon^{2}} \text{ in }\left[ -\varepsilon ,0\right] . \end{array} $$

These g ± are continuous approximations to the delta-function which are supported either to the right or left of the origin. Using g ±, we can evaluate ρ as follows:

$$\begin{array}{@{}rcl@{}} \rho_{j+1}\left( W_{j}^{+}\right) &=&\int\nolimits_{W_{j}}^{W_{j+1}}g_{+}\left( v-W_{j}\right) \rho_{j+1}(v) dv\text{, and }\\\rho_{j}\left( W_{j}^{-}\right) &=&\int\nolimits_{W_{j-1}}^{W_{j}}g_{-}\left( v-W_{j}\right) \rho_{j}(v) dv, \end{array} $$
(30)

and we can evaluate J as follows:

$$\begin{array}{@{}rcl@{}} J_{j+1}\left( W_{j}^{+}\right) &=&\int\nolimits_{W_{j}}^{W_{j+1}}\left[\vphantom{\frac{\sigma_{j+1}^{2}}{2}}-g_{+}\left( v-W_{j}\right) \left( v-\mu_{j+1}\right)\right.\\ &&+\left.\partial_{v}g_{+}\left( v-W_{j}\right) \frac{\sigma_{j+1}^{2}}{2}\right] \rho_{j+1}(v) dv \\ J_{j}\left( W_{j}^{-}\right) &=&\int\nolimits_{W_{j-1}}^{W_{j}}\left[\vphantom{\frac{{\sigma_{j}^{2}}}{2}} -g_{-}\left( v-W_{j}\right) \left(v-\mu_{j}\right)\right.\\ &&+\left.\partial_{v}g_{-}\left( v-W_{j}\right) \frac{{\sigma_{j}^{2}}}{2}\right] \rho_{j}(v) dv , \end{array} $$
(31)

where we have integrated by parts and taken advantage of the limited support of g ± to ignore boundary terms.

This re-expression allows us to write G(J;W j ) in integral form. Note that these constraints affect only a boundary-layer of diameter 2ε around each interior boundary W j . In the limit \( \varepsilon \rightarrow 0\) this boundary layer becomes infinitesimal, and does not affect the G(χ k) constraints.
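As a concrete check of this construction, the short sketch below (NumPy; not from the paper) builds the tent functions g ± for a few values of ε and verifies that the g +-weighted average in Eq. (30) converges to the one-sided boundary value ρ(W j +). The density ρ and the boundaries W j , W j+1 are arbitrary placeholders; g − behaves analogously on the other side.

```python
import numpy as np

def g_plus(x, eps):
    """Tent function of unit mass supported on (0, 2*eps)."""
    return np.where((x > 0) & (x < 2 * eps), (1.0 - np.abs((x - eps) / eps)) / eps, 0.0)

def g_minus(x, eps):
    """Tent function of unit mass supported on (-2*eps, 0)."""
    return np.where((x > -2 * eps) & (x < 0), (1.0 - np.abs((x + eps) / eps)) / eps, 0.0)

# Check of Eq. (30): the g_+-weighted average of rho over (W_j, W_{j+1}) approaches rho(W_j^+).
rho = lambda v: np.exp(-v)       # placeholder smooth density on the interval
W_j, W_j1 = 0.3, 0.8             # placeholder interior boundaries
v = np.linspace(W_j, W_j1, 200001)
dv = v[1] - v[0]
for eps in (0.1, 0.01, 0.001):
    approx = np.sum(g_plus(v - W_j, eps) * rho(v)) * dv
    print(f"eps = {eps}: boundary estimate {approx:.6f}  vs  rho(W_j) = {rho(W_j):.6f}")
```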

After maximizing H subject to these constraints, we can express the maximum-entropy distribution ρ me as follows:

$$\begin{array}{@{}rcl@{}} \rho_{\text{me}} &=&\bar{\rho}(v) \cdot {\prod}_{j}\exp \lambda \left( J;W_{j}\right) \left[\vphantom{\frac{{\sigma_{j}^{2}}}{2}} -g_{+}\left( v-W_{j}\right) \left( v-\mu_{j+1}\right)\right.\\ &&+\partial_{v}g_{+}\left( v-W_{j}\right) \frac{\sigma_{j+1}^{2}}{2} \\ &&\left. +g_{-}\left( v-W_{j}\right) \left( v-\mu_{j}\right) -\partial_{v}g_{-}\left( v-W_{j}\right) \frac{{\sigma_{j}^{2}}}{2}\right] , \end{array} $$
(32)

where the function

$$ \bar{\rho}(v) \text{ is given by }\bar{\rho}_{j}(v) =\rho_{\text{eq}}\exp \left[ \sum\limits_{k=0}^{M}\lambda \left( \chi^{k}\right) v^{k}\right] , $$
(33)

with M=2 referring to the maximum moment matched within the maximum-entropy framework.

The function \(\bar {\rho }(v) \) is what one would obtain as a maximum-entropy distribution given only the G(χ k) constraints. The λ(χ k) are Lagrange multipliers chosen so that each G(χ k) is satisfied. Note that \(\bar {\rho }(v)\) will not necessarily meet the G(J;W j ) constraints. The Lagrange multipliers λ(J;W j ) allow us to modify the behavior of ρ me at each interior boundary layer, thus satisfying the G(J;W j ) constraints.
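A minimal sketch of how the λ(χ k) can be obtained in practice is given below (NumPy; not part of the paper). It fits \(\bar{\rho}(v)=\rho_{\text{eq}}(v)\exp(\sum_k \lambda(\chi^k) v^k)\) with M=2 to prescribed moments by Newton iteration, using the fact (see Eq. (35) in Appendix B.2) that the Jacobian of the moment map is the Hankel matrix of moments χ k+j. The equilibrium profile ρ eq, the quadrature grid, and the target moments are all hypothetical placeholders.

```python
import numpy as np

M = 2
V_T = 1.0
v = np.linspace(-2.0, V_T, 4001)            # quadrature grid; (-infinity, V_T] truncated at -2 for the demo
dv = v[1] - v[0]
rho_eq = np.exp(-0.5 * (v - 0.3) ** 2)      # placeholder equilibrium-like profile (unnormalized)

def moments(lam):
    """chi^k = int v^k rho_eq(v) exp(sum_j lam_j v^j) dv, for k = 0, ..., 2M."""
    rho_bar = rho_eq * np.exp(np.polyval(lam[::-1], v))   # polyval expects the highest power first
    return np.array([np.sum(v ** k * rho_bar) * dv for k in range(2 * M + 1)])

chi_target = np.array([1.0, 0.2, 0.3])      # placeholder constraints on chi^0, chi^1, chi^2

lam = np.zeros(M + 1)
for _ in range(100):                         # Newton iteration for the Lagrange multipliers
    chi = moments(lam)
    residual = chi[: M + 1] - chi_target
    if np.max(np.abs(residual)) < 1e-12:
        break
    # Jacobian of the moment map: d(chi^k)/d(lam_j) = chi^{k+j}, a Hankel matrix of moments.
    jac = np.array([[chi[k + j] for j in range(M + 1)] for k in range(M + 1)])
    lam -= np.linalg.solve(jac, residual)

print("lambda(chi^k) =", lam, " fitted moments =", moments(lam)[: M + 1])
```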

In the limit \(\varepsilon \rightarrow 0\), the evaluation of J at the interior boundaries (i.e., Eq. (31)) reduces to

$$\begin{array}{@{}rcl@{}} J_{j+1}\left( W_{j}^{+}\right) &=&\bar{\rho}_{j+1}\left( W_{j}^{+}\right) \left[ 2\lambda \left( J;W_{j}\right) \left[ \frac{\sigma_{j+1}^{2}}{2} \right]^{2}\right.\\ &&-\left.\left( W_{j}-\mu_{j+1}\right) \right] -\partial_{v}\bar{\rho}_{j+1}\left( W_{j}^{+}\right) \frac{\sigma_{j+1}^{2}}{2} \\ && \\ J_{j}\left( W_{j}^{-}\right) &=&\bar{\rho}_{j}\left( W_{j}^{-}\right) \left[ -2\lambda \left( J;W_{j}\right) \left[ \frac{{\sigma_{j}^{2}}}{2}\right]^{2}\right.\\ &&-\left.\left( W_{j}-\mu_{j}\right) \right] -\partial_{v}\bar{\rho}_{j}\left( W_{j}^{-}\right) \frac{{\sigma_{j}^{2}}}{2}, \end{array} $$
(34)

where we have absorbed a factor of \(\varepsilon^{3}\) into \(\lambda \left (J;W_{j}\right )\). The jump in the flux at W j is given by:

$$\begin{array}{@{}rcl@{}} &&J_{j+1}\left( W_{j}^{+}\right) -J_{j}\left( W_{j}^{-}\right)\\ &&=2\lambda \left( J;W_{j}\right) \left( \left[ \frac{\sigma_{j+1}^{2}}{2}\right]^{2} \bar{\rho}_{j+1}\left( W_{j}^{+}\right) +\left[ \frac{{\sigma_{j}^{2}}}{2} \right]^{2}\bar{\rho}_{j}\left( W_{j}^{-}\right) \right) \\ &&-\left[ \bar{\rho}_{j+1}\left( W_{j}^{+}\right) \left( W_{j}-\mu_{j+1}\right) -\bar{\rho}_{j}\left( W_{j}^{-}\right) \left( W_{j}-\mu_{j}\right) \right] \\ &&-\left[ \partial_{v}\bar{\rho}_{j+1}\left( W_{j}^{+}\right) \frac{\sigma _{j+1}^{2}}{2}-\partial_{v}\bar{\rho}_{j}\left( W_{j}^{-}\right) \frac{ {\sigma_{j}^{2}}}{2}\right] , \end{array} $$

which can be controlled by varying λ(J;W j ).

These formulae can be used to determine ρ me and evaluate the J and ρ terms in Eq. (21).
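Since the jump depends linearly on λ(J;W j ), enforcing the flux constraint amounts to solving one scalar linear equation per interior boundary. The sketch below (not from the paper) does this for a single boundary; all numerical inputs (the one-sided values of \(\bar{\rho}\) and \(\partial_v\bar{\rho}\), the drift and diffusion parameters, and the required jump) are hypothetical placeholders.

```python
import numpy as np

def flux_multiplier(rho_p, drho_p, rho_m, drho_m, mu_p, mu_m, sig2_p, sig2_m, W, jump):
    """Solve J_{j+1}(W^+) - J_j(W^-) = jump (a linear equation) for lambda(J; W_j),
    using the jump formula above.  '_p'/'_m' denote one-sided values at W_j^+ / W_j^-."""
    coef = 2.0 * ((0.5 * sig2_p) ** 2 * rho_p + (0.5 * sig2_m) ** 2 * rho_m)   # multiplies lambda
    rest = -(rho_p * (W - mu_p) - rho_m * (W - mu_m)) \
           - (drho_p * 0.5 * sig2_p - drho_m * 0.5 * sig2_m)                    # lambda-independent part
    return (jump - rest) / coef

# Flux continuity (jump = 0) at an interior boundary, with placeholder one-sided data:
lam_J = flux_multiplier(rho_p=0.9, drho_p=-0.2, rho_m=1.1, drho_m=0.3,
                        mu_p=0.4, mu_m=0.2, sig2_p=0.05, sig2_m=0.04, W=0.5, jump=0.0)
print("lambda(J; W_j) =", lam_J)
```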

1.2 B.2 Calculating \(\frac {d}{dt}m^{Q}\)

In our final reduction (see step 8 of algorithm 2 in Section 8) it will be useful to calculate the time-derivative of m directly from χ k and λ(χ k). To do this, we first note that the time-derivative of λ(χ k) is linked to that of χ k. Since

$$\chi^{k}(t) ={\int}_{-\infty }^{V^{T}}v^{k}\rho_{\text{me}}(v) dv={\int}_{-\infty }^{V^{T}}v^{k}\rho_{\text{eq}}(v) \cdot \exp \left( \sum\limits_{k=0}^{M}\lambda \left( \chi^{k}\right) v^{k}\right) dv, $$

we have that

$$ \partial_{\lambda \left( \chi^{j}\right) }\chi^{k}(t) ={\int}_{-\infty }^{V^{T}}v^{k+j}\rho_{\text{eq}}(v) \cdot \exp \left( \sum\limits_{k=0}^{M}\lambda \left( \chi^{k}\right) v^{k}\right) dv=\chi^{k+j}(t) , $$
(35)

where M=2 is the maximum moment used in Eq. (33). This implies that

$$ \frac{d}{dt}\chi^{k}(t) =\sum\limits_{j=0}^{M}\chi^{k+j}\cdot \frac{d }{dt}\lambda \left( \chi^{j}\right) . $$
(36)

Thus, if we have access to the moments k=0,…,2M, then we can relate \( \frac {d}{dt}\lambda (\chi ^{k})\) to \(\frac {d}{dt}\chi ^{k}\).

Once we have this relationship between \(\frac {d}{dt}\lambda (\chi ^{k})\) and \(\frac {d}{dt}\chi ^{k}\), we can assume that m is computed implicitly via ρ me as the λ(χ k) change, and we can find \(\frac {d}{dt}m\) easily:

$$m=\exp \left( \sum\limits_{k=0}^{M}\lambda \left( \chi^{k}\right) \right) \cdot m_{ \text{eq}}\text{, implying that} $$
$$ \frac{d}{dt}m=\left[ \sum\limits_{k=0}^{M}\frac{d}{dt}\lambda \left( \chi^{k}\right) \right] \cdot \exp \left( \sum\limits_{k=0}^{M}\lambda \left( \chi^{k}\right) \right) \cdot m_{\text{eq}}=\left[ \sum\limits_{k=0}^{M} \frac{d}{dt}\lambda \left( \chi^{k}\right) \right] \cdot m. $$
(37)

An immediate corollary is:

$$\frac{d}{dt}\log (m) =\sum\limits_{k=0}^{M}\frac{d}{dt}\lambda \left( \chi^{k}\right) . $$

We remark that this derivative is not the ‘true’ derivative of m that would be obtained if we were to evolve ρ me in an unconstrained fashion under the Fokker-Planck Eqs. (10), (14). Rather, the quantity given in Eq. (37) represents the rate of change of m when constrained by the maximum-entropy formulation. This is the appropriate form of \(\frac {d}{dt}m\) to use in our reduction, since we need the values χ k, \(\lambda \left (\chi ^{k}\right ) \) and m to be consistent with ρ me.
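The computation in Eqs. (36)–(37) reduces to one small linear solve per population. The sketch below (NumPy; not from the paper) illustrates the step with placeholder values for the moments, their time-derivatives, and the flux m.

```python
import numpy as np

M = 2
chi = np.array([1.0, 0.20, 0.30, 0.10, 0.15])   # chi^0 .. chi^{2M} (hypothetical values)
dchi_dt = np.array([0.0, 0.05, 0.02])           # d(chi^k)/dt for k = 0..M (hypothetical values)
m = 0.8                                         # current flux over threshold (hypothetical value)

# Eq. (36): d(chi^k)/dt = sum_j chi^{k+j} d(lambda_j)/dt  ->  Hankel linear system for d(lambda)/dt
hankel = np.array([[chi[k + j] for j in range(M + 1)] for k in range(M + 1)])
dlam_dt = np.linalg.solve(hankel, dchi_dt)

# Eq. (37): dm/dt = m * sum_k d(lambda_k)/dt, i.e. d/dt log(m) = sum_k d(lambda_k)/dt
dm_dt = m * np.sum(dlam_dt)
print("d(lambda)/dt =", dlam_dt, " dm/dt =", dm_dt)
```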

1.3 B.3 A remark on conditioning

The moment reduction of Section 6 can, in principle, support an arbitrary number of moments M. However, within this paper we limit ourselves to M=2 moments. The reason for this is that, within the maximum-entropy framework described above, we are required to solve for the M+1 Lagrange multipliers λ(χ k) for k=0,…,M to enforce the M+1 consistency conditions G(χ k); as M increases this problem becomes very poorly conditioned. This poor conditioning can be deduced from Eq. 35, which gives the derivative of χ k with respect to λ(χ k). The (M+1)×(M+1) symmetric matrix ∂ λ χ relating these two is given by

$$\partial_{\lambda }\chi =\left[ \begin{array}{cccc} \chi^{0} & \chi^{1} & {\cdots} & \chi^{M} \\ \chi^{1} & \chi^{2} & {\cdots} & \chi^{M+1} \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ \chi^{M} & \chi^{M+1} & {\cdots} & \chi^{2M} \end{array} \right] . $$

As M increases, the condition number κ of this matrix grows exponentially. Quantitatively speaking: when ρ eq is a simple uniform distribution, κ is well approximated by 24^M. If ρ eq is a more naturally shaped distribution of the kind given in Eqs. (13), (18), then κ grows even more quickly (e.g., κ∼32^M). For M=2 we find that, for our work, the condition number of ∂ λ χ is usually in the low thousands (e.g., 1000–3000). If we were to take M to be higher, then the condition number would become prohibitively large and we would no longer be able to reliably relate the moments χ k to the Lagrange multipliers λ(χ k).
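This growth can be illustrated directly. If ρ eq is taken to be uniform on [0,1] (a choice made here only for illustration; the interval is not specified in the text) and λ=0, the entries of ∂ λ χ are χ k+j = 1/(k+j+1), i.e. the (M+1)×(M+1) Hilbert matrix, whose condition number grows roughly geometrically in M:

```python
import numpy as np

def uniform_moment(k):
    """k-th moment of the uniform density on [0, 1] (illustration only)."""
    return 1.0 / (k + 1)

# Condition number of the Hankel/Hilbert matrix d(chi)/d(lambda) from Eq. (35) as M grows.
for M in range(1, 6):
    d_chi = np.array([[uniform_moment(k + j) for j in range(M + 1)] for k in range(M + 1)])
    print(f"M = {M}:  cond(d_chi / d_lambda) = {np.linalg.cond(d_chi):.3e}")
```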


Cite this article

Zhang, J.W., Rangan, A.V. A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony. J Comput Neurosci 38, 355–404 (2015). https://doi.org/10.1007/s10827-014-0543-3
