Abstract
In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks to an augmented 4-dimensional system of ordinary differential equations. The class of integrate-and-fire networks we focus on is homogeneously structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of ‘multiple-firing-events’ involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes, ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the augmented 4-dimensional ODE system.



















Notes
This estimate of 90 % depends on the choice of units. A fairer estimate of the quality of the projection can be obtained by first normalizing the trajectory \(\vec {U}(t) \) so that each component has mean 0 and variance 1. Once this is done, 73 % of the variance is captured by the normalized versions of \(\vec {u}_{1}\) and \(\vec {u}_{2}\), which in this case are \(\vec {u}_{1}=\left \{ 0.45,0.45,0.20,0.52\right \} \) and \(\vec {u}_{2}=\left \{ 0.28,-0.49,0.79,0.09\right \} \).
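For concreteness, a short Python sketch of this normalized-projection bookkeeping (the trajectory below is random placeholder data standing in for the actual \(\vec {U}(t)\) from the reduced system; only the calculation itself is the point):

```python
import numpy as np

# U: T-by-4 array of samples of the trajectory U(t); random data stands in
# here for the actual reduced-ODE trajectory.
rng = np.random.default_rng(0)
U = rng.standard_normal((1000, 4))

# Normalize each component to mean 0 and variance 1.
Z = (U - U.mean(axis=0)) / U.std(axis=0)

# Orthonormalize the two projection directions quoted in the note.
u1 = np.array([0.45, 0.45, 0.20, 0.52])
u2 = np.array([0.28, -0.49, 0.79, 0.09])
Q, _ = np.linalg.qr(np.column_stack([u1, u2]))

# Fraction of total variance captured by projecting onto span{u1, u2}.
proj = Z @ Q
captured = proj.var(axis=0).sum() / Z.var(axis=0).sum()
print(f"variance captured: {captured:.2%}")
```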
References
Abbott, LF, & van Vreeswijk, CA (1993). Asynchronous states in networks of pulse-coupled neurons. Physical Review E, 48, 1483–1488.
Anderson, J, Lampl, I, Reichova, I, Carandini, M, Ferster, D (2000). Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neuroscience, 3(6), 617–621.
Bak, P, Tang, C, Wiesenfeld, K (1987). Self-organized criticality: an explanation of 1/f noise. Physical Review Letters, 59(4), 381–384.
Battaglia, D, & Hansel, D (2011). Synchronous chaos and broad band gamma rhythm in a minimal multi-layer model of primary visual cortex. PLoS Computational Biology, 7.
Bauer, HU, & Pawelzik, K (1993). Alternating oscillatory and stochastic dynamics in a model for a neuronal assembly. Physica D, 69, 380–393.
Beggs, JM, & Plenz, D (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167–11177.
Borgers, C, & Kopell, N (2003). Synchronization in networks of excitatory and inhibitory neurons with sparse, random connectivity. Neural Computation, 15, 509–538.
Brunel, N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 8, 183–208.
Brunel, N, & Hakim, V (1999). Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates. Neural Computation, 11, 1621–1671.
Buice, MA, & Chow, CC (2007). Correlations, fluctuations, and stability of a finite-size network of coupled oscillators. Physical Review E, 76, 031118.1–031118.25.
Cai, D, Tao, L, Shelley, M, McLaughlin, D (2004). An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 101(20), 7757–7762.
Cardanobile, S, & Rotter, S (2010). Multiplicatively interacting point processes and applications to neural modeling. Journal of Computational Neuroscience, 28, 267–284.
Csicsvari, J, Hirase, H, Mamiya, A, Buzsaki, G (2000). Ensemble patterns of hippocampal CA3–CA1 neurons during sharp wave-associated population events. Neuron, 28, 585–594.
Dehghani, N, Hatsopoulos, NG, Haga, NG, Parker, RA, Greger, B, Halgren, E, Cash, SS, Destexhe, A (2012). Avalanche analysis from multi-electrode ensemble recordings in cat, monkey and human cerebral cortex during wakefulness and sleep. arXiv:1203.0738v4 [q-bio.NC].
DeVille, L, & Zheng, Y (2014). Synchrony and Periodicity in Excitable Neural Networks with Multiple Subpopulations. SIAM Journal on Applied Dynamical Systems, accepted.
Gerstner, W, & van Hemmen, JL (1993). Coherence and incoherence in a globally coupled ensemble of pulse-emitting units. Physical Review Letters, 71, 312.
Grill-Spector, K, & Weiner, KS (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15, 536–548.
Hahn, G, Petermann, T, Havenith, MN, Yu, S, Singer, W, Plenz, D, Nikolic, D (2010). Neuronal avalanches in spontaneous activity in vivo. Journal of Neurophysiology, 104, 3313–3322.
Haldeman, C, & Beggs, JM (2005). Critical Branching Captures Activity in Living Neural Networks and Maximizes the Number of Metastable States. Physical Review Letters, 94, 058101.
Hatsopoulos, NG, Ojakangas, CL, Paninski, L, Donoghue, JP (1998). Information about movement direction obtained from synchronous activity of motor cortical neurons. Proceedings of the National Academy of Sciences, 95, 15706–15711.
Helias, M, Deger, M, Rotter, S, Diesmann, M (2010). Instantaneous Non-Linear Processing by Pulse-Coupled Threshold Units. PLoS Computational Biology, 6, e1000929. doi:10.1371/journal.pcbi.1000929.
Henrie, JA, & Shapley, R (2005). LFP Power Spectra in V1 Cortex: The Graded Effect of Stimulus Contrast. Journal of Neurophysiology, 94, 479–490.
Herz, AVM, & Hopfield, JJ (1995). Earthquake cycles and neural reverberations: collective oscillations in systems with pulse-coupled threshold elements. Physical Review Letters, 75(6), 1222–1225.
Hu, Y, Trousdale, J, Josic, K, Shea-Brown, E (2013). Motif statistics and spike correlations in neuronal networks. Journal of Statistical Mechanics, P03012, 1–51.
Hubel, D (1995). Eye, Brain, and Vision. W. H. Freeman, ISBN-10: 0716760096.
Jahnke, S, Memmesheimer, RM, Timme, M (2009). How chaotic is the balanced state? Frontiers in Computational Neuroscience. doi:10.3389/neuro.10.013.2009.
Knight, B (1972). The Relationship between the Firing Rate of a Single Neuron and the Level of Activity in a Population of Neurons. Journal of General Physiology, 59, 734.
Kohn, A, & Smith, MA (2005). Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience, 25, 3661–3673.
Lampl, I, Reichova, I, Ferster, D (1999). Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron, 22, 361–374.
Ledoux, E, & Brunel, N (2011). Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs. Frontiers in Computational Neuroscience, 5(25), 1–17.
Leinekugel, X, Khazipov, R, Cannon, R, Hirase, H, Ben-Ari, Y, Buzsaki, G (2002). Correlated bursts of activity in the neonatal hippocampus in vivo. Science, 296, 2049–2052.
Litwin-Kumar, A, & Doiron, B (2012). Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience. doi:10.1038/nn.3220.
Mark, S, & Tsodyks, M (2012). Population spikes in cortical networks during different functional states. Frontiers in Computational Neuroscience. doi:10.3389/fncom.2012.00043.
Mazzoni, A, Broccard, FD, Garcia-Perez, E, Bonifazi, P, Ruaro, ME, Torre, V (2007). On the dynamics of the spontaneous activity in neuronal networks. PLoS ONE, 2(5), e439.
Mirollo, RE, & Strogatz, SH (1990). Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics, 50(6), 1645–1662.
Murphy, B, & Miller, K (2009). Balanced Amplification: A New Mechanism of Selective Amplification of Neural Activity Patterns. Neuron, 61, 635–648.
Murthy, VN, & Fetz, EE (1992). Coherent 25- to 35-Hz oscillations in the sensorimotor cortex of awake behaving monkeys. Proceedings of the National Academy of Sciences of the United States of America, 89, 5670–5674.
Newhall, K, Kovacic, G, Kramer, P, Zhou, D, Rangan, AV, Cai, D (2010). Dynamics of current-based, Poisson-driven, integrate-and-fire neuronal networks. Communications in Mathematical Sciences, 8(2), 541–600.
Nykamp, DQ, & Tranchina, D (2000). Fast neural network simulations with population density methods. Neurocomputing, 32, 487–492.
Omurtag, A, Kaplan, E, Knight, B, Sirovich, L (2000). A population approach to cortical dynamics with an application to orientation tuning. Network: Computation in Neural Systems, 11(4), 247–260.
Ostojic, S, & Brunel, N (2011). From spiking neuron models to linear-nonlinear models. PLoS Computational Biology, 7(1), e1001056.
Ostojic, S (2014). Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature Neuroscience, 17(4). doi:10.1038/nn.3658.
Petermann, T, Thiagarajan, TC, Lebedev, MA, Nicolelis, MAL, Chialvo, DR, Plenz, D (2009). Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proceedings of the National Academy of Sciences, 106(37), 15921–15926.
Plenz, D, Stewart, CV, Shew, W, Yang, H, Klaus, A, Bellay, T (2011). Multi-electrode array recordings of neuronal avalanches in organotypic cultures. Journal of Visualized Experiments, 54, 2949.
Plenz, D, & Thiagarajan, TC (2007). The organizing principles of neuronal avalanches: cell assemblies in the cortex? Trends in Neurosciences, 30(3), 101–110.
Poil, SS, Hardstone, R, Mansvelder, HD, Linkenkaer-Hansen, K (2012). Critical-state dynamics of avalanches and oscillations jointly emerge from balanced excitation/inhibition in neuronal networks. Journal of Neuroscience, 33, 9817–9823.
Rangan, AV (2009). Diagrammatic expansion of pulse-coupled network dynamics. Physical Review Letters, 102, 158101.
Rangan, AV, & Cai, D (2006). Maximum-Entropy Closures for Kinetic Theories of Neuronal Network Dynamics. Physical Review Letters, 96, 178101.
Rangan, AV, & Young, LS (2013a). Dynamics of spiking neurons: between homogeneity and synchrony. Journal of Computational Neuroscience. doi:10.1007/s10827-012-0429-1.
Rangan, AV, & Young, LS (2013b). Emergent dynamics in a model of visual cortex. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0445-9.
Roxin, A, Brunel, N, Hansel, D, Mongillo, G, van Vreeswijk, C (2011). On the distribution of firing rates in networks of cortical neurons. Journal of Neuroscience, 31(45), 16217–16226.
Samonds, JM, Zhou, Z, Bernard, MR, Bonds, AB (2005). Synchronous activity in cat visual cortex encodes collinear and cocircular contours. Journal of Neurophysiology, 95(4), 2602–2616.
Sakata, S, & Harris, KD (2009). Laminar structure of spontaneous and sensory-evoked population activity in auditory cortex. Neuron, 64(3), 404–418.
Shew, WL, Yang, H, Yu, S, Roy, R, Plenz, D (2011). Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. Journal of Neuroscience, 31, 55–63.
Stern, EA, Kincaid, AE, Wilson, CJ (1997). Spontaneous Subthreshold Membrane Potential Fluctuations and Action Potential Variability of Rat Corticostriatal and Striatal Neurons In Vivo. Journal of Neurophysiology, 77, 1697–1715.
van Vreeswijk, C, & Sompolinsky, H (1996). Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory Activity. Science, 274(5293), 1724–1726.
van Vreeswijk, C, & Sompolinsky, H (1998). Chaotic Balanced State in a Model of Cortical Circuits. Neural Computation, 10, 1321–1371.
Vogels, TP, & Abbott, LF (2005). Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25, 10786–10795.
Yoshida, T, & Katz, DB (2011). Control of pre-stimulus activity related to improved sensory coding within a discrimination task. Journal of Neuroscience, 31(11), 4101–4112.
Yu, S, Yang, H, Nakahara, H, Santos, GS, Nikolic, D, Plenz, D (2011). Higher-order interactions characterized in cortical activity. Journal of Neuroscience, 31, 17514–17526.
Zhang, JW, Newhall, K, Zhou, D, Rangan, AV (2010). Distribution of correlated spiking events in a population-based approach for integrate-and-fire networks. Journal of Computational Neuroscience, to appear.
Zhang, JW, Zhou, D, Cai, D, Rangan, AV (2013). A coarse-grained framework for spiking neuronal networks: between homogeneity and synchrony. Journal of Computational Neuroscience. doi:10.1007/s10827-013-0488-y.
Zhao, LQ, Beverlin, B, Netoff, T, Nykamp, DQ (2011). Synchronization from second order network connectivity statistics. Frontiers in Computational Neuroscience, 5(28).
Acknowledgments
Dr. Rangan was supported by NSF DMS-1162548.
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Action Editor: Nicolas Brunel
Appendices
Appendix A: Spike resolution
As mentioned in Section 3.1, the system of ODEs given by Eq. 2 is not actually well-posed. We need to clarify the dynamics that ensue whenever multiple neurons cross threshold simultaneously (e.g., as a consequence of an excitatory synaptic event). To ensure that our system is well-posed, we consider our system to be a specific limit of the following well-posed system:
where the \(g_{j}^{Q,R}\) model synaptic currents, and
that is to say, the α E and α I are alpha-functions with infinitely fast rise-times and decay times τ E and τ I , respectively. For this current-based system we also assume that, after any one neuron fires, it is held at V L for a time τ ref, referred to as a refractory period.
The well-posed system of ODEs that we use in this work, alluded to by Eq. (2), is actually the system given by Eq. (29) in the limit τ I , τ E , \(\tau _{\text {{ref}}}\rightarrow 0\), with τ I ≪τ E ≪τ ref. This is similar to the resolution rule used in Refs. Zhang et al. (2010) and Zhang et al. (2013), as well as in the Appendix of Rangan and Young (2013a). An example of this resolution rule is described in Appendix A.1 below.
We remark that our choice of resolution rule will affect the dynamics of the network ‘within’ each MFE (see Appendix A.1 for more details). For example, if we had instead taken the limit \(\tau _{E}\ll \tau _{I}\ll \tau _{\text {{ref}}}\rightarrow 0\), then the inhibitory neurons would not participate during any MFE, and the dynamics of each MFE could be determined by examining only the excitatory population.
1.1 A.1 An example of spike-resolution
In this section we give an example of the spike-resolution associated with our IF-network (described in Section 3.1 and Appendix A). This example includes the ‘infinitesimal-time’ evolution that occurs ‘within’ an MFE.
For the sake of exposition, consider a situation involving N=6 neurons, the first N E =3 of which are excitatory and the latter N I =3 of which are inhibitory. Assume that the N=6 neurons are connected in an all-to-all fashion, with \(S^{QQ^{\prime }}\) given by the array:
and let us assume that, as a result of feedforward Poisson input, the excitatory neuron indexed by j=1 and Q=E has reached and crossed threshold at time t spk. Now, at this time t spk, we freeze the macroscopic time t of the original system and create an ‘infinitesimal-time’ system which evolves according to an infinitesimal time τ. At t spk the state-variables of the original system are given by
This infinitesimal-time system will use the infinitesimal time τ to describe the effects of the excitatory conductances in Eq. (29) , while taking the limit that τ I ≪τ E ≪τ V . The state-variables for this infinitesimal-time system are the 6 voltages of the neurons in the network, \({v_{j}^{Q}}(\tau ) \), each of which describes the membrane potential of the j th neuron of type Q∈{E,I}, which start out at
as well as the excitatory conductances \(g_{j}^{Q,E}(\tau ) \) which are all initialized to 0 at τ=0. We do not explicitly track the inhibitory conductances \(g_{j}^{Q,I}(\tau )\), since we are taking the limit τ I ≪τ E .
The dynamics of the infinitesimal-time-system are given by:
where α E is an alpha-function with infinitely fast rise-time and decay time τ E . For this infinitesimal-time system we fix τ E to be finite – without loss of generality we set τ E ≡1, yielding \(\alpha ^{E}(\tau ) =\exp (-\tau )\) for τ>0 and α E(τ)≡0 for τ<0. The spike-times \({T_{l}^{Q}}\) record the infinitesimal-time after t spk at which neuron l of type Q fires. Since the refractory period for this infinitesimal-time system is \(\tau _{\text {{ref}}}=\infty \), each neuron can fire only once. Moreover, as \({v_{1}^{E}}(0^{-})\) is already at threshold, we immediately set \({T_{1}^{E}}=0\), and clamp \({v_{1}^{E}}(0^{+})=V^{R}\); because \(\tau _{\text {{ref}}}=\infty \), \( {v_{1}^{E}}(\tau ) \) will not change as τ increases. In addition, because \({v_{1}^{E}}\) fires at τ=0, we update \( g_{j}^{Q,E}\left (0\right ) \) to be S QE for each other neuron j,Q. We then evolve this system until \(\tau \rightarrow \infty \).
During this evolution we resolve the trajectory to machine precision by using analytical formulae for the \({v_{j}^{E}}(\tau ) \) and \( g_{j}^{Q,E}(\tau ) \). As an example, imagine a given neuron with voltage \({v_{j}^{Q}}(\tau ) \) and conductance \(g_{j}^{Q,E}(\tau ) \) at time τ. If this neuron receives no further synaptic input, then the trajectory of that neuron will be
for infinitesimal-times \(\tau ^{\prime }>\tau \). The voltage \( {v_{j}^{Q}}(\tau ) \) will evolve to a steady state of
if this is higher than V T, then the neuron will fire at time
Using this formula, we can compute the limiting values \(\lim _{\tau^{\prime} \rightarrow \infty }{v_{j}^{Q}}(\tau^{\prime} ) \) and putative future firing-times \({T_{j}^{Q}}\) for each neuron j,Q. We can then evolve the system up to the infinitesimal time \(\tau =\min _{j,Q}{T_{j}^{Q}}\). At this point the corresponding neuron j,Q fires, and we clamp \({v_{j}^{Q}}([{T_{j}^{Q}}]^{+})\) to V R. If this firing neuron is excitatory (i.e. Q=E), then we update the excitatory-conductance of each other neuron at time \({T_{j}^{E}}\):
and recalculate the limiting values and firing-times for each other neuron. If, on the other hand, the firing-neuron is inhibitory, then we update each other voltage:
and again recalculate the limiting values and firing-times for each other neuron. In this way we can evolve the entire infinitesimal-time system to machine precision – one spike at a time – until each neuron has either fired or reached its equilibrium value.
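To make the procedure concrete, here is a minimal Python sketch of this one-spike-at-a-time resolution (not the authors' code). It assumes a current-based form of the infinitesimal-time dynamics, \(d{v_{j}^{Q}}/d\tau =g_{j}^{Q,E}(\tau )\) with \(g_{j}^{Q,E}\) decaying like \(e^{-\tau }\), so that the steady state is \(v+g\) and the putative firing time solves \(v+g(1-e^{-(T-\tau )})=V^{T}\); it also treats inhibitory spikes as instantaneous voltage decrements of size \(S^{QI}\). The threshold, reset and coupling values are placeholders, not the parameters used in the paper.

```python
import numpy as np

# Minimal sketch: resolve one MFE by evolving the infinitesimal-time system
# one spike at a time, assuming dv/dtau = g(tau) with g decaying
# exponentially (tau_E = 1) and inhibitory spikes acting as instantaneous
# voltage decrements.
VT, VR = 1.0, 0.0                                  # assumed threshold / reset
S = {('E', 'E'): 0.5, ('E', 'I'): 0.6,             # placeholder S^{QQ'} values,
     ('I', 'E'): 0.5, ('I', 'I'): 0.6}             # indexed as S[(post, pre)]

def resolve_mfe(v0, types, trigger):
    """v0: voltages frozen at t_spk; types: list of 'E'/'I'; trigger: index
    of the neuron that has just crossed threshold."""
    v = np.array(v0, dtype=float)
    g = np.zeros_like(v)                           # excitatory conductances
    fired = [trigger]
    v[trigger] = VR                                # reset; refractory forever
    g += np.array([S[(q, 'E')] for q in types])    # kick from the trigger spike
    g[fired] = 0.0
    while True:
        # putative firing times, measured from the current event
        times = {j: -np.log(1.0 - (VT - v[j]) / g[j])
                 for j in range(len(v))
                 if j not in fired and v[j] + g[j] > VT}
        if not times:
            break                                  # no one else reaches threshold
        j, dt = min(times.items(), key=lambda kv: kv[1])
        decay = np.exp(-dt)
        v += g * (1.0 - decay)                     # analytic update of all voltages
        g *= decay
        fired.append(j)
        kick = np.array([S[(q, types[j])] for q in types])
        if types[j] == 'E':
            g += kick                              # excitatory spike: conductance jump
        else:
            v -= kick                              # inhibitory spike: voltage decrement
        g[fired] = 0.0                             # fired neurons stay clamped at VR
        v[fired] = VR
    not_fired = [j for j in range(len(v)) if j not in fired]
    v[not_fired] += g[not_fired]                   # survivors relax to their steady state
    return fired, v
```

For the six-neuron example above one would call resolve_mfe with types = ['E','E','E','I','I','I'], the voltages frozen at t spk, and trigger = 0; the returned list gives the within-MFE firing order and the final voltages handed back to the macroscopic system.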
As an example, we illustrate in Fig. 20 a trajectory for the 6-neuron network described above. On the top row we show a 7 ms trace of activity for the 6 neurons in the network, which are driven by feedforward excitatory Poisson input. The inhibitory neurons are plotted in blue and excitatory neurons 2 and 3 are plotted in red. The first excitatory neuron, which fires as a result of the feedforward input at time t spk=5 ms, is plotted in green. When neuron 1 fires we stop evolving t, and initialize the infinitesimal-time system. The trajectory of the infinitesimal-time system is shown in the bottom of this figure. At infinitesimal-time τ=0 the first excitatory neuron (green) fires, and resets to V R. When this firing event occurs, the excitatory conductances of all the other neurons are increased. As τ increases another excitatory neuron fires, then an inhibitory neuron, and then finally a second inhibitory neuron. Eventually, as \(\tau \rightarrow \infty \), the remaining 2 neurons – one excitatory and the other inhibitory – relax to their steady-state voltages; the barrage of synaptic activity at t spk=5 ms does not cause them to fire.
Fig. 20 Example of infinitesimal-time evolution for an all-to-all network of excitatory and inhibitory neurons. On the top we show the evolution of a simple network. Whenever a neuron fires we turn to an ‘infinitesimal-time’ system, as depicted in the grey inset and bottom. This infinitesimal-time system is used to resolve the cascade of neuronal firing-events (each firing-event indicated by a dashed vertical line). See Appendix A.1 for details
After we determine which neurons fire at t spk= 5 ms, as well as which neurons do not fire, we take the state-variables at \(\tau =\infty \) and return to the original system. The voltages of the original system are set to
and the firing-events which occurred during the infinitesimal-time evolution are all collapsed onto the same time t spk. In this example we would say that an MFE with magnitudes L E =2 and L I =2 occurred at t=5 ms. While all these firing-events are deemed to have occurred ‘instantaneously’ (i.e., all at t spk=5 ms), the order of these firing-events can be disambiguated by appealing to the infinitesimal-time system.
Appendix B: Details of the maximum-entropy approximation
In this Appendix we describe some of the details related to the maximum-entropy approximation introduced in Section 7. As in Section 7, we will drop the superscript Q for clarity.
1.1 B.1 Implementing the constraints
The constraints G that we impose on ρ within the maximum-entropy formulation are as follows. First, we expect the given full-moments χ k to be consistent with ρ:
Second, we expect that ρ itself should be continuous across each of the W j (i.e., the boundary between \(\mathcal {I}_{j-1}\) and \(\mathcal {I}_{j}\)):
where by ρ j we mean simply ρ restricted to interval \(\mathcal { I}_{j}\). Third, we expect that J should jump by m τ V at W 1, and be continuous across the other W j :
where \(m=J\left (V^{T}\right ) \) is the flux of ρ over threshold.
The constraints \(G\left (\rho ;W_{j}\right ) \) are automatically satisfied because of our choice of ρ eq – see Eq. (13) above, and Eqs. 32, 33 below. Our remaining goal will be to express the constraints G(J;W j ) in integral form so that they will be compatible with the G(χ k). To this end we define functions g +(v) and g −(v), supported in (0,2ε) and (−2ε,0) respectively, as follows:
so that each g ±(v) integrates to 1, and
These g ± are continuous approximations to the delta-function which are supported either to the right or left of the origin. Using g ±, we can evaluate ρ as follows:
and we can evaluate J as follows:
where we have integrated by parts and taken advantage of the limited support of g ± to ignore boundary terms.
This re-expression allows us to write G(J;W j ) in integral form. Note that these constraints affect only a boundary-layer of diameter 2ε around each interior boundary W j . In the limit \( \varepsilon \rightarrow 0\) this boundary layer becomes infinitesimal, and does not affect the G(χ k) constraints.
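The exact form of \(g_{\pm }\) is not reproduced here; one concrete pair with the stated support and unit mass, consistent with the factor of \(\varepsilon ^{3}\) absorbed into \(\lambda \left (J;W_{j}\right )\) below, would be the quadratic bumps (our illustrative choice, not necessarily the authors')
\[
g_{+}(v)=\frac{3}{4\varepsilon ^{3}}\,v\,(2\varepsilon -v)\ \text{ for }v\in (0,2\varepsilon ),\qquad g_{-}(v)=g_{+}(-v)\ \text{ for }v\in (-2\varepsilon ,0),
\]
with \(g_{\pm }\equiv 0\) elsewhere; then \(\int g_{\pm }=1\) and \(\int g_{\pm }(v-W_{j})\,\rho (v)\,dv\rightarrow \rho (W_{j}^{\pm })\) as \(\varepsilon \rightarrow 0\).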
After maximizing H subject to these constraints, we can express the maximum-entropy distribution ρ me as follows:
where the function
with M=2 referring to the maximum moment matched within the maximum-entropy framework.
The function \(\bar {\rho }(v) \) is what one would obtain as a maximum-entropy distribution given only the G(χ k) constraints. The λ(χ k) are Lagrange multipliers chosen so that each G(χ k) is satisfied. Note that \(\bar {\rho }(v)\) will not necessarily meet the G(J;W j ) constraints. The Lagrange multipliers λ(J;W j ) allow us to modify the behavior of ρ me at each interior boundary layer, thus satisfying the G(J;W j ) constraints.
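As an illustration of how the λ(χ k) can be found in practice, the following Python sketch solves the M+1 moment conditions G(χ k) by Newton iteration; it ignores the boundary-flux multipliers λ(J;W j ) and takes ρ eq uniform on [0,1], both simplifications being ours. The Jacobian that appears is the matrix \(\partial _{\lambda }\chi \) discussed in Appendix B.3.

```python
import numpy as np

def maxent_lambdas(chi, n_grid=2000, tol=1e-10, max_iter=100):
    """Solve for lambda_0..lambda_M such that
    rho(v) = rho_eq(v) * exp(sum_k lambda_k v^k) matches the target moments
    chi_k = int v^k rho(v) dv, with rho_eq uniform on [0,1]."""
    M = len(chi) - 1
    v = np.linspace(0.0, 1.0, n_grid)
    dv = v[1] - v[0]
    V = np.vander(v, M + 1, increasing=True)      # columns v^0 .. v^M
    lam = np.zeros(M + 1)
    for _ in range(max_iter):
        rho = np.exp(V @ lam)                     # rho_eq = 1 on [0,1]
        moments = V.T @ rho * dv                  # current chi_j
        residual = moments - chi
        if np.max(np.abs(residual)) < tol:
            break
        # Jacobian d(chi_j)/d(lambda_k) = int v^(j+k) rho dv
        J = (V.T * rho) @ V * dv
        lam -= np.linalg.solve(J, residual)
    return lam

# Example: match mass 1, mean 0.4, second moment 0.2 (hypothetical targets).
print(maxent_lambdas(np.array([1.0, 0.4, 0.2])))
```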
In the limit \(\varepsilon \rightarrow 0\), the evaluation of J at the interior boundaries (i.e., Eq. (31)) reduces to
where we have absorbed a factor of ε 3 into \(\lambda \left (J;W_{j}\right )\). The jump in the flux at W j is given by:
which can be controlled by varying λ(J;W j ).
These formulae can be used to determine ρ me and evaluate the J and ρ terms in Eq. (21).
1.2 B.2 Calculating \(\frac {d}{dt}m^{Q}\)
In our final reduction (see step 8 of algorithm 2 in Section 8) it will be useful to calculate the time-derivative of m directly from χ k and λ(χ k). To do this, we first note that the time-derivative of λ(χ k) is linked to that of χ k. Since
we have that
where M=2 is the maximum moment used in Eq. (33). This implies that
Thus, if we have access to the moments k=0,…,2M, then we can relate \( \frac {d}{dt}\lambda (\chi ^{k})\) to \(\frac {d}{dt}\chi ^{k}\).
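One way to spell out this relation (our notation; the boundary-layer multipliers λ(J;W j ) are omitted) is via the chain rule
\[
\frac{d}{dt}\chi ^{j}=\sum _{k=0}^{M}\frac{\partial \chi ^{j}}{\partial \lambda (\chi ^{k})}\,\frac{d}{dt}\lambda (\chi ^{k}),\qquad \frac{\partial \chi ^{j}}{\partial \lambda (\chi ^{k})}=\int v^{j+k}\,\rho ^{me}(v)\,dv,
\]
so that \(\frac{d}{dt}\lambda (\chi ^{k})=\sum _{j}\left [ \partial _{\lambda }\chi \right ]^{-1}_{kj}\frac{d}{dt}\chi ^{j}\). The entries of \(\partial _{\lambda }\chi \) involve moments of \(\rho ^{me}\) of order up to 2M, which is why access to the moments k=0,…,2M is required.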
Once we have this relationship between \(\frac {d}{dt}\lambda (\chi ^{k})\) and \(\frac {d}{dt}\chi ^{k}\), we can assume that m is computed implicitly via ρ me as the λ(χ k) change, and we can find \(\frac {d}{dt}m\) easily:
An immediate corollary is:
We remark that this derivative is not the ‘true’ derivative of m that would be obtained if we were to evolve ρ me in an unconstrained fashion under the Fokker-Planck Eqs. (10), (14). Rather, the quantity given in Eq. (37) represents the rate of change of m when constrained by the maximum-entropy formulation. This is the appropriate form of \(\frac {d}{dt}m\) to use in our reduction, since we need the values χ k, \(\lambda \left (\chi ^{k}\right ) \) and m to be consistent with ρ me.
1.3 B.3 A remark on conditioning
The moment reduction of Section 6 can, in principle, support an arbitrary number of moments M. However, within this paper we limit ourselves to M=2 moments. The reason for this is that, within the maximum-entropy framework described above, we are required to solve for the M+1 Lagrange multipliers λ(χ k) for k=0,…,M to enforce the M+1 consistency conditions G(χ k); as M increases this problem becomes very poorly conditioned. This poor conditioning can be deduced from Eq. (35), which highlights the derivative of χ k with respect to λ(χ k). The (M+1)×(M+1) symmetric matrix ∂ λ χ relating these two is given by
As M increases, the condition number κ of this matrix grows exponentially. Quantitatively speaking: when ρ eq is a simple uniform distribution then κ is well approximated by \(24^{M}\). If ρ eq is a more naturally shaped distribution of the kind given in Eqs. (13), (18), then κ grows even more quickly (e.g., \(\kappa \sim 32^{M}\)). For M=2 we find that, for our work, the condition number of ∂ λ χ is usually in the low thousands (e.g., 1000–3000). If we were to take M to be higher, then the condition number would skyrocket and we would not be able to easily relate the moments χ k to the Lagrange multipliers λ(χ k).
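A quick numerical illustration of this growth (our own check, not from the paper): at λ=0 and with ρ eq uniform on a unit interval, the entries of \(\partial _{\lambda }\chi \) reduce to \(\int _{0}^{1}v^{j+k}\,dv=1/(j+k+1)\), a Hilbert-type matrix whose condition number can be computed directly:

```python
import numpy as np

# Condition number of the (M+1)x(M+1) matrix d(chi^j)/d(lambda_k)
# = int v^(j+k) rho_eq(v) dv for rho_eq uniform on [0,1], evaluated at
# lambda = 0: entries are 1/(j+k+1), a Hilbert-type matrix.
for M in range(1, 7):
    A = np.array([[1.0 / (j + k + 1) for k in range(M + 1)]
                  for j in range(M + 1)])
    print(f"M = {M}: cond(d_lambda chi) ~ {np.linalg.cond(A):.3g}")
```

For M=2 this gives a condition number of roughly 5×10², the same order of magnitude as the \(24^{M}\) estimate above; by M=5 it already exceeds 10⁷.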
Cite this article
Zhang, J.W., Rangan, A.V. A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony. J Comput Neurosci 38, 355–404 (2015). https://doi.org/10.1007/s10827-014-0543-3