
Dynamics of recurrent neural networks with delayed unreliable synapses: metastable clustering

  • Published in: Journal of Computational Neuroscience

Abstract

The influence of unreliable synapses on the dynamic properties of a neural network is investigated for a homogeneous integrate-and-fire network with delayed inhibitory synapses. Numerical and analytical calculations show that the network relaxes to a state with dynamic clusters of identical size which permanently exchange neurons. We present analytical results for the number of clusters and their distribution of firing times which are determined by the synaptic properties. The number of possible configurations increases exponentially with network size. In addition to states with a maximal number of clusters, metastable ones with a smaller number of clusters survive for an exponentially large time scale. An externally excited cluster survives for some time, too, thus clusters may encode information.



Acknowledgements

It is a pleasure for us to acknowledge stimulating and fruitful discussions with Ido Kanter and Moshe Abeles.

Author information


Corresponding author

Correspondence to Johannes Friedrich.

Additional information

Action Editor: Nicolas Brunel

Appendices

Appendix A: Distribution of firing times

Although the inputs triggered by the spike volley of all neurons do not arrive at exactly the same time, approximating them as simultaneous is accurate, leaving us with

$$ \frac{dV_i}{dt} = 1 - V_i(t) + \delta(t-t'-\tau) \sum_{j=1}^K h_j \frac{g}{K} $$
(17)

where K is the number of afferent synapses of each neuron. The random variable J is defined as the normalized sum of the inputs \( \sum_{j=1}^K h_j \frac{g}{K}\). Its values, denoted \(\tilde{J}\), are binomially distributed, which for large K can be approximated by a Gaussian distributed total pulse strength J with mean and variance

$$ \langle J \rangle = g p, \quad \sigma^2_J=\frac{g^2}{K}\,p(1-p). $$
(18)

The pulse has the shape of a δ-function that is fixed in time, but its amplitude varies. It can be shown that the error due to the approximation is much smaller than this variation, which justifies it.
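As a quick numerical sanity check of Eq. (18) (a sketch with assumed illustrative values for K, g and p, not parameters taken from the paper; g < 0 encodes inhibition), one can sample the normalized input sum directly:

```python
import numpy as np

rng = np.random.default_rng(0)
K, g, p = 500, -0.2, 0.5  # illustrative values; g < 0 for inhibitory synapses

# J = (g/K) * sum_j h_j with h_j ~ Bernoulli(p): the normalized input sum of Eq. (17)
J = (g / K) * rng.binomial(K, p, size=200_000)

print(J.mean())   # close to <J> = g*p
print(J.var())    # close to sigma_J^2 = g^2 p (1-p) / K
```

The empirical mean and variance of the sampled pulse strengths match the binomial moments in Eq. (18) to within sampling error.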

Now we consider how a neuron with potential \(V(0^-)=V_0\), which at time t = 0 receives a pulse of strength \(\tilde{J}\), evolves over the period T.

$$ \begin{array}{rcl} V(0^-) &=& V_0 \\ V(0^+) &=& V_0+\tilde{J} \\ V(t_\theta^-) &=& 1-(1-V(0^+))e^{-t_\theta}\overset{!}{=}\theta \\ V(t_\theta^+) &=& 0 \\ V(T) &=& 1-e^{-(T-t_\theta)}=1-e^{-T}\frac{1-V_0-\tilde{J}}{\vartheta} \end{array} $$

According to the last line, V(T) is linear in \(\tilde{J}\). Since \(\tilde{J}\) is a value of the Gaussian distributed random variable J, the values V(T) are Gaussian distributed for given \(V_0\) as well. Thus the transition probability density for the potential to be V(T) after time T, given that it was \(V_0\) at time t = 0, is

$$ \mathcal{P}_{V,0\rightarrow V,T}\left(V(T),V_0\right) = \frac{1}{\sqrt{2\pi s^2}}e^{-\frac{(V(T)-m(V_0))^2}{2s^2}}$$
(19)
$${\rm with} \quad m(V_0)= 1-e^{-T}\frac{1-V_0-g p}{\vartheta} $$
(20)
$${\rm and} \quad s = \frac{e^{-T}}{\vartheta}g \sqrt{\frac{p(1-p)}{K}}. $$
(21)

Integrating over the initial distribution of \(V_0\), denoted \(\mathcal{P}_V(V_0)\), yields the distribution of potentials at time T, which must be identical to the initial one since T is the period. This leads to the following linear homogeneous Fredholm integral equation of the second kind, where m and s are defined in Eqs. (20) and (21).

$$ \mathcal{P}_V(V) = \int_{-\infty}^{\infty} dV' \, \mathcal{P}_V(V') \, \frac{1}{\sqrt{2\pi s^2}}e^{-\frac{(V-m(V'))^2}{2s^2}} $$
(22)

The Fourier-transformed equation

$$\mathcal{P}(V)\rightarrow \Tilde{\mathcal{P}}(k)=\int_{-\infty}^{\infty}e^{- i k V}\mathcal{P}(V)\,dV$$

reads

$$ \Tilde{\mathcal{P}}(a k) = \Tilde{\mathcal{P}}(k) \exp\left({- i k a c} {-\frac{a^2k^2s^2}{2}}\right) $$
(23)

with \(a := \vartheta e^{T}\) and \(c := 1-e^{-T}\frac{1-g p}{\vartheta}\). It is solvable using the ansatz \(\Tilde{\mathcal{P}}(k) = \exp\left(\sum_i \alpha_ik^i\right)\) and comparison of coefficients. The result, up to a free complex constant, is

$$ \Tilde{\mathcal{P}}(k) = const \times \exp{\left(-\frac{i a c}{a-1}k-\frac{a^2 s^2}{2(a^2-1)}k^2\right)}. $$
(24)

Transforming back and choosing the constant such that \(\mathcal{P}_V\) obeys the required properties of a pdf, namely \(\mathcal{P}_V \in \mathbb{R}\) and \(\int \mathcal{P}_V(V)\,dV=1\), yields a Gaussian distribution of \(V_0\) with mean \(\mu = \frac{a c}{a-1}\) and variance \(\sigma^2 = \frac{a^2}{a^2-1}s^2\). The potential evolves from its reset value 0 over a time τ until the input arrives, thus \(\mu = 1 - e^{-\tau}\). Eliminating T yields the result in Eq. (10). Note that τ has to be smaller than \(T_0\), because \(0 < \mu < \theta\), which is biologically well founded.
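The stationary Gaussian can be verified numerically: over one period the potential follows the linear stochastic map \(V \mapsto c + V/a + s\,\xi\) with standard normal ξ, cf. Eqs. (19)–(21). A minimal sketch, with assumed illustrative values of a, c and s rather than parameters derived from the network:

```python
import numpy as np

rng = np.random.default_rng(1)
a, c, s = 2.0, 0.3, 0.05  # illustrative values of vartheta*e^T, the offset, and noise scale

# Iterate V -> c + V/a + s*xi over many periods for an ensemble of neurons
V = np.zeros(100_000)
for _ in range(50):                       # 50 periods are ample for relaxation
    V = c + V / a + s * rng.standard_normal(V.size)

mu_pred = a * c / (a - 1)                 # predicted mean of the stationary Gaussian
var_pred = a**2 * s**2 / (a**2 - 1)       # predicted variance
print(V.mean(), mu_pred)
print(V.var(), var_pred)
```

The empirical mean and variance agree with \(\mu = \frac{ac}{a-1}\) and \(\sigma^2 = \frac{a^2}{a^2-1}s^2\) to within sampling error.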

The transformation of the obtained pdf of the potential into a pdf of the spiking times follows from \(\mathcal{P}_V \,dV=\mathcal{P}_{t_\theta}\, dt_{\theta}\) and the definition of the time to threshold \(t_\theta:=\ln\frac{1-V}{\vartheta}\), resulting in Eq. (11).

Appendix B: Identical clusters

B.1 Cluster size

We study the pdf for \(n_c\) clusters in the phase description. For simplicity we do not remove the self-coupling. Analogous to the study of the synchronous state, we consider the evolution of the phase over one period, cf. Table 1. We present the procedure for two clusters; it works analogously for higher \(n_c\).

Table 1 Evolution of the phases in a state of two, not necessarily identical, clusters

The inhibitory pulse of strength \(\tilde{J_i}\), triggered by the firing of cluster i, results in a phase jump Δϕ that depends on the current phase. Using the relationship in Eq. (3) we get:

$$ \begin{array}{rcl} V(\phi)+\tilde{J}&=&V(\phi+\Delta\phi)\\ &\Rightarrow& \Delta\phi(\phi,\tilde{J}) = \log_\vartheta(\vartheta^\phi-\tilde{J})-\phi \end{array} $$
(25)
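The phase-jump relation of Eq. (25) translates into a few lines of code. In this sketch ϑ, ϕ and \(\tilde{J}\) carry assumed illustrative values; the identity \(V(\phi)+\tilde{J}=V(\phi+\Delta\phi)\) then holds by construction:

```python
import math

theta_v = 0.7   # assumed value of vartheta (0 < vartheta < 1)

def V(phi):
    """Potential as a function of phase: V(phi) = 1 - vartheta^phi."""
    return 1.0 - theta_v**phi

def delta_phi(phi, J):
    """Phase jump of Eq. (25): Delta_phi = log_vartheta(vartheta^phi - J) - phi."""
    return math.log(theta_v**phi - J, theta_v) - phi

phi, J = 0.5, -0.1               # illustrative phase and (inhibitory, negative) pulse
dphi = delta_phi(phi, J)
print(dphi)                       # negative: an inhibitory pulse sets the phase back
print(V(phi) + J, V(phi + dphi))  # both sides of Eq. (25) agree
```

For an inhibitory pulse (\(\tilde{J}<0\)) the jump is negative, i.e. the phase is set back, as expected.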

The Δϕ are determined by the present phase and the pulse strength.

$$ \begin{array}{rcl} \Delta\phi_{1a} &=& \Delta\phi(\phi_1,\tilde{J_1}) \\[6pt] \Delta\phi_{2a} &=& \Delta\phi(\phi_2,\tilde{J_1}) \\[6pt] \Delta\phi_{1b} &=& \Delta\phi(\phi_1+\Delta\phi_{1a}+\delta\phi_a,\tilde{J_2}) \\[6pt] \Delta\phi_{2b} &=& \Delta\phi(\phi_2+\Delta\phi_{2a}+\delta\phi_a-1,\tilde{J_2}) \end{array} $$
(26)

Between these jumps the phase increases linearly by δϕ. We again use the approximation that the inputs due to the spike volley of one cluster arrive at the same time. For the derivation of the δϕ, the dynamics of whole clusters rather than of single neurons is relevant. Therefore, in a first step the calculation is carried out using mean values only, i.e. the deterministic case is considered. Each cluster i is described by one phase \(\phi_i\), and the pulse strengths \(\tilde{J_i}\) are fixed to \(\mu_{J_i}\) instead of being drawn from a Gaussian distribution. The mean values of the pulse strengths are

$$ \begin{array}{rcl} \mu_{J_1} &=& \frac{N_1 g p}{N} = x g p \\ \mu_{J_2} &=& \frac{N_2 g p}{N} = (1-x) g p. \end{array} $$
(27)

where \(N_i\) is the number of neurons in cluster i and \(x=\frac{N_1}{N}\) is the relative size of cluster 1.

In the time interval \([0^+;t_{\theta_2}+\tau^-]\), which corresponds to δϕ a , the phase of cluster 2 evolves from ϕ 2 + Δϕ 2a towards 1 and then from 0 to ϕ 1. Here the parameter τ comes into play, because

$$ \phi_1=\frac{\tau}{T_0}=-\frac{\tau}{\ln\vartheta}. $$
(28)

This yields the following equation for δϕ a :

$$ \delta\phi_a = \phi_1+1-(\phi_2+\Delta\phi_{2a}) $$
(29)

After the time T, being the period, both phases must have their initial value:

$$ \phi_1+\Delta\phi_{1a} +\delta\phi_a+\Delta\phi_{1b}+\delta\phi_b = \phi_1+1 $$
(30)
$$ \phi_2+\Delta\phi_{2a} +\delta\phi_a+\Delta\phi_{2b}+\delta\phi_b = \phi_2+1 $$
(31)
$$ \qquad\qquad \Rightarrow\quad \Delta\phi_{1a} +\Delta\phi_{1b} = \Delta\phi_{2a} +\Delta\phi_{2b} $$
(32)

Reinserting the \(\Delta\phi_{ij}\) into the last equation leads, after some algebra, to another equation for \(\delta\phi_a\):

$$ \delta\phi_a = \log_{\vartheta}\left(\left(\frac{\theta}{1-\vartheta^{\phi_2-\phi_1}}-1\right)\frac{\mu_{J_2}}{\mu_{J_1}}\right) $$
(33)

Setting Eq. (29) equal to Eq. (33) and substituting

$$ \begin{array}{rcl} a &:=& \vartheta^{\phi_1} \\ b &:=& \vartheta^{\phi_2} \end{array} $$

results in a quadratic equation for b. One of its two solutions is ruled out by the requirement that b be positive.

$$ \frac{\mu_{J_1}}{\mu_{J_2}}\vartheta(a-b) = \mu_{J_1}\vartheta-\mu_{J_1}\frac{b}{a}-b \vartheta+\frac{b^2}{a} $$
(34)
$$ \begin{array}{rcl} b &=& \frac{1}{2}\Bigg(a \vartheta+\mu_{J_1}-\frac{\mu_{J_1}}{\mu_{J_2}}a \vartheta\\ &&{\kern12pt} + \sqrt{\left(a \vartheta+\mu_{J_1}-\frac{\mu_{J_1}}{\mu_{J_2}}a \vartheta\right)^2-4\left(\mu_{J_1} a \vartheta-\frac{\mu_{J_1}}{\mu_{J_2}}a^2\vartheta\right)}\Bigg) \end{array} $$
(35)

Reinserting the definitions of a and b and using Eq. (28), we finally obtain an equation for \(\phi_2\) as a function of the given parameters τ, θ, \(\mu_{J_1}\) and \(\mu_{J_2}\). The \(\Delta\phi_{ij}\) can be calculated from their definitions in Eq. (26), and \(\delta\phi_a\) and \(\delta\phi_b\) using Eqs. (29) and (30) respectively, resulting in

$$ \delta\phi_a=\delta\phi(x) \quad{\rm and}\quad \delta\phi_b=\delta\phi(1-x) $$
(36)

with δϕ(x) as defined below in Eq. (37).

$$\delta\phi(x):=\log_\vartheta\left(2\vartheta e^{-\tau}\right)-\log_\vartheta\left(\frac{e^{-\tau } (1-2x) \vartheta }{1-x} -x gp + \frac{\sqrt{gp^2 (1-x)^2 x^2+e^{-2 \tau } \left(2 \left(2-e^{\tau } gp\right) (1-x) x \vartheta +(\vartheta -2 x \vartheta )^2\right) }}{1-x}\right) $$
(37)

\(\delta\phi_a\) and \(\delta\phi_b\) turn out to be equal for \(x=\frac{1}{2}\), i.e. clusters of the same size. If two clusters of the same size emerge, the interval between them in the spike train is \(\frac{T}{2}\): the other cluster fires exactly in the middle of the period.

Now we turn to the actual distribution of the phases. Each cluster i is described by a phase distribution \(\mathcal{P}_{\phi_i}(\phi)\). We derive a set of \(n_c\) integral equations, one for each cluster. Firing of cluster i results in a pulse of strength \(\tilde{J}\), drawn from a Gaussian distribution \(\mathcal{P}_{J_i}(\tilde{J})\) with mean as defined in Eq. (27) and variance

$$ \begin{array}{rcl} \sigma_{J_1}^2 &=& N_1 \left(\frac{g}{N}\right)^2 p(1-p) = \frac{x g^2}{N} p(1-p) \\ \\ \sigma_{J_2}^2 &=& N_2 \left(\frac{g}{N}\right)^2 p(1-p) = \frac{(1-x) g^2}{N} p(1-p). \end{array} $$

A pulse of strength \(\tilde{J}\) shifts the phase by \(\Delta\phi\) according to Eq. (25), from \(\phi_0\) to \(\phi=\phi_0+\Delta\phi(\phi_0,\tilde{J})=\log_\vartheta(\vartheta^{\phi_0}-\tilde{J})\). Solving this for \(\tilde{J}\) yields

$$ \tilde{J}=\vartheta^{\phi_0}-\vartheta^\phi. $$

The probability that a pulse triggered by cluster i shifts the phase from ϕ 0 to ϕ is thus

$$ \begin{array}{rcl} \mathcal{P}_{\Delta\phi,J_i}(\phi,\phi_0)&=& \mathcal{P}_{J_i}(\tilde{J})\frac{d\tilde{J}}{d\phi}\bigg|_{\tilde{J} =\vartheta^{\phi_0}-\vartheta^\phi}\\[6pt] &=& \mathcal{P}_{J_i}(\vartheta^{\phi_0}-\vartheta^{\phi})\vartheta^{\phi}(-\ln\vartheta) \end{array} $$
(38)
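As a sanity check of the change of variables in Eq. (38) (with assumed illustrative values for ϑ, \(\phi_0\) and the Gaussian parameters of \(\mathcal{P}_{J_i}\)), the transformed density should integrate to one over the reachable phases:

```python
import numpy as np

theta_v = 0.7                 # assumed vartheta
phi0 = 0.5                    # initial phase
mu_J, sigma_J = -0.1, 0.01    # assumed Gaussian pulse-strength parameters

def gauss(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def p_shift(phi):
    """Density of the shifted phase, Eq. (38)."""
    J = theta_v**phi0 - theta_v**phi
    return gauss(J, mu_J, sigma_J) * theta_v**phi * (-np.log(theta_v))

# Riemann sum over a range wide enough to cover the support of the shifted phase
phi = np.linspace(-1.0, 2.0, 200_001)
total = np.sum(p_shift(phi)) * (phi[1] - phi[0])
print(total)   # close to 1
```

The Jacobian factor \(\vartheta^{\phi}(-\ln\vartheta)\) is exactly what makes the probability mass carried over from \(\mathcal{P}_{J_i}\) sum to unity.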

The self-consistent equation for cluster 1 in a 2-cluster state reads

$$ \begin{array}{rcl} \mathcal{P}_{\phi_1}(\phi) &=& \int d\phi' \int d\phi'' \mathcal{P}_{\phi_1}(\phi'') \mathcal{P}_{\Delta\phi,J_1}(\phi'-\delta\phi_a,\phi'')\\[4pt] &&\times \mathcal{P}_{\Delta\phi,J_2}(\phi+1-\delta\phi_b,\phi') \end{array} $$
(39)

Each term can be understood by considering Table 1. We start with an initial value ϕ′′ drawn from the distribution \(\mathcal{P}_{\phi_1}(\phi'')\). The pulse due to firing of cluster 1 shifts the phase from ϕ′′ to ϕ′ − δϕ a , corresponding to \(\mathcal{P}_{\Delta\phi,J_1}(\phi'-\delta\phi_a,\phi'')\). Then the phase evolves linearly to ϕ′ until the pulse due to firing of cluster 2 shifts the phase from ϕ′ to ϕ + 1 − δϕ b , corresponding to \(\mathcal{P}_{{\mathit {\Delta}}\phi,J_2}(\phi+1-\delta\phi_b,\phi')\). The phase then evolves linearly, is reset, and finally evolves to ϕ. The integration runs over all initial and intermediate values ϕ′′ and ϕ′, respectively. The integral equation for \(\mathcal{P}_{\phi_2}\) is obtained by exchanging \(x \leftrightarrow (1-x)\) and has to hold simultaneously.

The equations can be solved numerically by discretization into \(n_{bins}\) bins. The integrals are replaced by sums and the pdfs by vectors:

$$ \int_0^1 d\phi \rightarrow \sum_{i=1}^{n_{bins}}\frac{1}{n_{bins}}, \quad \mathcal{P}_{\phi_i}(\phi) \rightarrow {\bf P_i} $$

\(n_{bins}\) is the number of bins and characterizes the quality of the discretization. In the limit \(n_{bins} \rightarrow \infty\) the original integral equation is recovered. \(n_{bins}\) is also the dimension of the vectors \({\mathbf{P}_{i}}\); \(P_{i,k}\) denotes the kth component of such a vector. For example, for cluster 1 of the 2-cluster state:

$$ \begin{array}{rcl} P_{1,k} &=& \sum_{i=1}^{n_{bins}}\sum_{j=1}^{n_{bins}} P_{1,j} \overbrace{ \mathcal{P}_{{\mathit {\Delta}}\Phi,J_1}\left(\frac{i}{n_{bins}}-\delta\phi_a,\frac{j}{n_{bins}}\right) \frac{1}{n_{bins}}}^{A_{ij}}\\ &&\times \underbrace{\mathcal{P}_{{\mathit {\Delta}}\Phi,J_2}\left(\frac{k}{n_{bins}}+1-\delta\phi_b,\frac{i}{n_{bins}}\right) \frac{1}{n_{bins}}}_{B_{ki}} \end{array} $$
(40)

With the introduced A ij and B ki this can be written as a matrix equation

$$ \bf{P_1} = \underbrace{\underline{B}\ \underline{A}}_{\underline{M_1}}\ \bf{P_1}. $$
(41)

In general one can transform the \(n_c\) integral equations into \(n_c\) matrix equations of the form \({\bf P_i} = \underline{M_i}\,{\bf P_i}\). We end up with eigenvalue equations for the \(\underline{M_i}\) with eigenvalue 1. A solution of the integral equations exists only if all \(\underline{M_i}\) have an eigenvalue equal to 1; the related eigenvector is the discretized pdf. The theory of transfer operators tells us that the relevant eigenvalue corresponding to the stationary state is the leading one, \(\lambda_1\). By construction the matrices \(\underline{M_i}\) have only positive entries, \((\underline{M_i})_{ij}>0\), so the Perron-Frobenius theorem applies and the largest eigenvalue is simple. The corresponding eigenvector is unique, i.e. it is the stationary distribution if \(\lambda_1 = 1\). We find that a solution exists only for identical clusters; e.g. for \(n_c = 2\) the largest eigenvalue of each matrix equals 1 only if \(x=\frac{1}{2}\). Thus cluster states are stationary only for clusters of the same size.

B.2 Stability

The matrix equations for these states with identical clusters read in general:

$$ {\bf P}=\underline{B}\ \underbrace{\underline{A}\ \underline{A}\cdots\underline{A}}_{n_c-1\ {\rm factors}}\ {\bf P} = \underline{M}\,{\bf P} $$
(42)

with \(\underline{M}:=\underline{B}\ \underline{A}\cdots\underline{A}\) and \(\underline{A}\), \(\underline{B}\) defined as:

$$ A_{ij} = \mathcal{P}_{J}\left(\vartheta^{\frac{j}{n_{bins}}}-\vartheta^{\frac{i}{n_{bins}}-\delta\phi}\right) \frac{-\ln(\vartheta)}{n_{bins}} \vartheta^{\frac{i}{n_{bins}}-\delta\phi} $$
(43)
$$ B_{ij} = \mathcal{P}_{J}\left(\vartheta^{\frac{j}{n_{bins}}}-\vartheta^{\frac{i}{n_{bins}}+1-\delta\phi}\right)\frac{-\ln(\vartheta)}{n_{bins}} \vartheta^{\frac{i}{n_{bins}}+1-\delta\phi} $$
(44)

where \(\mathcal{P}_{J}(J)\) is a Gaussian distribution with mean and variance

$$ \mu_J=\frac{g p}{n_c} {\rm , }\quad \sigma_J^2=\frac{g^2p(1-p)}{n_c N}. $$
(45)

δϕ denotes the normalized time elapsed between consecutive firings of clusters. We define \({\mathit {\Phi}}(\phi)\) as the map taking the phase ϕ at one firing time to the phase at the next firing time. After each firing an input arrives with delay τ, thus with Eq. (25)

$$ {\mathit {\Phi}}(\phi): \phi\mapsto\phi+\delta\phi+{\mathit {\Delta}}\phi\left(\phi+\frac{\tau}{T_0},\mu_J\right) $$

Since \(n_c\) firings occur during one period, δϕ is the solution of the nested equation

$$ \begin{array}{rcl} {\mathit {\Phi}}\left({\mathit {\Phi}}\left({\mathit {\Phi}}\cdots\left({\mathit {\Phi}}(0)\right)\right)\right)&=&1 \\ {\rm with}\quad {\mathit {\Phi}}(\phi)&:=& \delta\phi+\log_\vartheta\left(\vartheta^\phi-\mu_J e^\tau\right) \end{array} $$
(46)

where \({\mathit {\Phi}}(\phi)\) is nested \(n_c\) times.
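Equation (46) can be solved for δϕ by bisection, since the nested map is monotonically increasing in δϕ. A sketch under assumed parameter values (ϑ, g, p, τ and \(n_c\) are illustrative; g < 0 encodes inhibition, which keeps the argument of the logarithm positive):

```python
import math

theta_v, g, p, tau, n_c = 0.7, -0.2, 0.5, 0.1, 3  # illustrative parameters
mu_J = g * p / n_c                                 # mean pulse strength, cf. Eq. (45)

def Phi(phi, dphi):
    """One step of the firing-time map, Eq. (46)."""
    return dphi + math.log(theta_v**phi - mu_J * math.exp(tau), theta_v)

def nested(dphi):
    """Apply Phi n_c times starting from phase 0."""
    phi = 0.0
    for _ in range(n_c):
        phi = Phi(phi, dphi)
    return phi

# Bisection on dphi: nested(0) < 1 < nested(1), and nested is increasing in dphi
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if nested(mid) < 1.0 else (lo, mid)
dphi = 0.5 * (lo + hi)
print(dphi)   # normalized time between consecutive cluster firings
```

For the symmetric state the resulting δϕ lies close to \(1/n_c\), as expected for \(n_c\) equally spaced cluster firings per period.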

A perturbation \(\delta {\bf P}_0\) evolves over one period according to \(\delta {\bf P}_{n+1} = \underline{M}\,\delta {\bf P}_n\). Powers of \(\underline{M}\) converge to a (right) stochastic matrix

$$ \underline{M}^\infty = \left(\bf{P},\bf{P},\bf{P},\cdots,\bf{P}\right) $$

thus any perturbation converges, up to a constant factor, to \({\bf P}\) as n → ∞.
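This convergence can be illustrated with a small positive matrix rescaled to leading eigenvalue 1 (a generic stand-in for \(\underline{M}\), not the matrix of Eq. (42)); by the Perron-Frobenius theorem its powers approach a rank-one matrix whose columns are all proportional to the stationary vector P:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
M = rng.random((n, n)) + 0.1           # strictly positive matrix: Perron-Frobenius applies
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
M /= eigvals[k].real                    # rescale so the leading eigenvalue is exactly 1
P = np.abs(eigvecs[:, k].real)
P /= P.sum()                            # stationary vector, normalized like a pdf

Minf = np.linalg.matrix_power(M, 100)   # high power of M
cols = Minf / Minf.sum(axis=0)          # normalize each column
print(np.max(np.abs(cols - P[:, None])))  # every column has converged to P
```

The subleading eigenvalues decay as \(\lambda_i^n\), so after a few tens of periods only the component along P survives, which is exactly the relaxation mechanism described above.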

The total variance \(Var_{tot}\) of a matrix is defined as the sum of the variances of its rows:

$$ Var_{tot}\left(\underline{M}\right) := \sum_{i=1}^{n_{bins}}\left( \sum_{j=1}^{n_{bins}}\frac{\left((\underline{M})_{ij}\right)^2}{n_{bins}} - \left(\sum_{j=1}^{n_{bins}}\frac{(\underline{M})_{ij}}{n_{bins}}\right)^2 \right) $$
(47)
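Equation (47) translates directly into a few lines of code (a sketch; `M` stands for any square matrix, not specifically the transfer matrix of Eq. (42)):

```python
import numpy as np

def var_tot(M):
    """Total variance of Eq. (47): the sum of the per-row variances of the entries."""
    M = np.asarray(M, dtype=float)
    return float(np.sum(M.var(axis=1)))   # np.var uses the population variance, as in Eq. (47)

# Example: rows (1, 1) and (1, 3) have variances 0 and 1, so the total is 1
print(var_tot([[1.0, 1.0], [1.0, 3.0]]))
```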

Appendix C: Number of clusters

Numerical simulations show that clusters of identical size emerge, and in Appendix B we showed this explicitly for two clusters. This simplifies the algebraic treatment due to symmetries. In Eq. (46) we already defined the function \({\mathit {\Phi}}(\phi)\) that maps the phase from one firing event to the next, which occurs a normalized time δϕ later. Due to the mentioned symmetry the overall distribution of phases is the same at all spiking times, only the labels change:

$$ {\mathit {\Phi}}(\phi_{i}(0))=\phi_{i+1}(0) \quad{\rm for } i \neq n_c $$
(48)
$$ {\mathit {\Phi}}(\phi_{n_c}(0))=1 $$
(49)

Into the last equation we insert the upper bound \(\tau_{max}\) of Eq. (13), \(\phi_{n_c}(0)=1+\frac{\tau_{max}}{\ln(\vartheta)}\), and solve for δϕ and \({\mathit {\Phi}}(\phi)\) respectively.

$$ \delta\phi = 1 - \frac{\tau_{max}+\ln(\vartheta-\mu_J)}{\ln(\vartheta)} $$
(50)
$$ {\mathit {\Phi}}(\phi) = \log_\vartheta\left(\frac{\vartheta}{\vartheta-\mu_J}\left(\frac{\vartheta^\phi e^{-\tau}-\mu_J}{\vartheta-\mu_J}\right)\right) $$
(51)

Inserting this into Eq. (12) and changing variables \(\phi \rightarrow \vartheta^\phi\) results in the final Eq. (14).


About this article

Cite this article

Friedrich, J., Kinzel, W. Dynamics of recurrent neural networks with delayed unreliable synapses: metastable clustering. J Comput Neurosci 27, 65–80 (2009). https://doi.org/10.1007/s10827-008-0127-1
