
Spiking networks as efficient distributed controllers

  • Original Article
  • Biological Cybernetics

Abstract

In the brain, networks of neurons produce activity that is decoded into perceptions and actions. How the dynamics of neural networks support this decoding is a major scientific question. That is, while we understand the basic mechanisms by which neurons produce activity in the form of spikes, whether these dynamics reflect an overlying functional objective is not understood. In this paper, we examine neuronal dynamics from a first-principles control-theoretic viewpoint. Specifically, we postulate an objective wherein neuronal spiking activity is decoded into a control signal that subsequently drives a linear system. Then, using a recently proposed principle from theoretical neuroscience, we optimize the production of spikes so that the linear system in question achieves reference tracking. It turns out that such optimization leads to a recurrent network architecture wherein each neuron possesses integrative dynamics. The network amounts to an efficient, distributed event-based controller in which each neuron (node) produces a spike only if doing so improves tracking performance. Moreover, the dynamics provide inherent robustness, so that if some neurons fail, others compensate by increasing their activity so that the tracking objective is still met.


Figs. 1–7 (available in the full article)


References

  1. Todorov E, Jordan MI (2002) Optimal feedback control as a theory of motor coordination. Nat Neurosci 5(11):1226–1235

  2. Werbos PJ (1989) Neural networks for control and system identification. In: Proceedings of the 28th IEEE conference on decision and control. IEEE, pp 260–265

  3. Narendra KS, Parthasarathy K (1990) Identification and control of dynamical systems using neural networks. IEEE Trans Neural Netw 1(1):4–27

  4. Eliasmith C, Anderson CH (2004) Neural engineering: computation, representation, and dynamics in neurobiological systems. MIT Press, Cambridge

  5. Hunt KJ, Sbarbaro D, Zbikowski R, Gawthrop PJ (1992) Neural networks for control systems—a survey. Automatica 28(6):1083–1112

  6. Miller WT, Werbos PJ, Sutton RS (1995) Neural networks for control. MIT Press, Cambridge

  7. Lewis FW, Jagannathan S, Yesildirak A (1998) Neural network control of robot manipulators and non-linear systems. CRC Press, Boca Raton

  8. Lukoševičius M, Jaeger H (2009) Reservoir computing approaches to recurrent neural network training. Comput Sci Rev 3(3):127–149

  9. Bialek W, Rieke F, de Ruyter van Steveninck RR, Warland D (1991) Reading a neural code. Science 252(5014):1854–1857

  10. Abbott LF, DePasquale B, Memmesheimer R-M (2016) Building functional networks of spiking model neurons. Nat Neurosci 19(3):350–355

  11. Rao RPN, Ballard DH (1999) Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci 2(1):79

  12. Bastos AM, Usrey WM, Adams RA, Mangun GR, Fries P, Friston KJ (2012) Canonical microcircuits for predictive coding. Neuron 76(4):695–711

  13. Boerlin M, Denève S (2011) Spike-based population coding and working memory. PLoS Comput Biol 7(2):e1001080

  14. Boerlin M, Machens CK, Denève S (2013) Predictive coding of dynamical variables in balanced spiking networks. PLoS Comput Biol 9(11):e1003258

  15. Huang F, Riehl J, Ching S (2017) Optimizing the dynamics of spiking networks for decoding and control. In: American control conference (ACC). IEEE, pp 2792–2798

  16. Abbott LF (1999) Lapicque’s introduction of the integrate-and-fire model neuron (1907). Brain Res Bull 50(5–6):303–304

  17. Bekolay T, Bergstra J, Hunsberger E, DeWolf T, Stewart TC, Rasmussen D, Choo X, Voelker A, Eliasmith C (2014) Nengo: a Python tool for building large-scale functional brain models. Front Neuroinform 7:48

  18. Waegeman T, Schrauwen B et al (2012) Feedback control by online learning an inverse model. IEEE Trans Neural Netw Learn Syst 23(10):1637–1648

  19. Olfati-Saber R, Fax JA, Murray RM (2007) Consensus and cooperation in networked multi-agent systems. Proc IEEE 95(1):215–233

  20. Seuret A, Prieur C, Tarbouriech S, Zaccarian L (2016) LQ-based event-triggered controller co-design for saturated linear systems. Automatica 74:47–54

  21. Dayan P, Abbott LF (2001) Theoretical neuroscience, vol 10. MIT Press, Cambridge

  22. Kalman RE (1963) Mathematical description of linear dynamical systems. J Soc Ind Appl Math Ser A Control 1(2):152–192

  23. Johnson EC, Jones DL, Ratnam R (2016) A minimum-error, energy-constrained neural code is an instantaneous-rate code. J Comput Neurosci 40(2):193–206

Author information

Corresponding author

Correspondence to Fuqiang Huang.

Additional information

Communicated by Rodolphe Sepulchre.

ShiNung Ching holds a Career Award at the Scientific Interface from the Burroughs-Wellcome Fund. This work was partially supported by AFOSR 15RT0189, NSF ECCS 1509342 and NSF CMMI 1537015, from the US Air Force Office of Scientific Research and the US National Science Foundation, respectively.

This article belongs to the Special Issue on Control Theory in Biology and Medicine, which derived from a workshop at the Mathematical Biosciences Institute, Ohio State University, Columbus, OH, USA.

A Derivation of the spiking rule (6)–(12)

The methodology for deriving the dynamics of the spiking network follows the schema originally developed in [14]. Our derivation deviates in that we use the feedback error directly, consistent with our consideration of a control objective rather than a prediction objective.

We begin by quantifying the effect of any added spike on the overall cost. Suppose that the kth neuron fires no spikes after time \(t_{s}\); the firing rate then decays as

$$\begin{aligned} \tilde{r}(t) = {\text {e}}^{-\lambda _{d}(t-t_{s})}r(t_{s}), \end{aligned}$$

where \(\tilde{r}(t)\) denotes the firing rate at time t assuming no spikes fired since time \(t_{s}\).

If the kth neuron spikes at time \(t_{s}\), then a delta function \(\delta (t-t_{s})\) is added to \(o_{k}(t)\) resulting in

$$\begin{aligned} r(t) ={}&{\text {e}}^{-\lambda _{d}(t-t_{s})}r(t_{s}) + \int _{t_{s}}^{t}{\text {e}}^{-\lambda _{d}(t-\tau )}\lambda _{d} o(\tau )\mathrm {d}\tau \\ ={}&\tilde{r}(t) + \int _{t_{s}}^{t}{\text {e}}^{-\lambda _{d}(t-\tau )}\lambda _{d} \bar{e}_{k} \delta (\tau -t_{s})\mathrm {d}\tau \\ ={}&\tilde{r}(t) + {\text {e}}^{-\lambda _{d}(t-t_{s})}\lambda _{d}\bar{e}_{k}. \end{aligned}$$
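This jump in the rate can be verified numerically (a sanity check of ours, not part of the paper's derivation): modeling the delta as a discrete pulse of height \(1/\Delta t\) in a single Euler step, the filtered rate should exceed its spike-free counterpart by \({\text {e}}^{-\lambda _{d}(t-t_{s})}\lambda _{d}\):

```python
import math

# Sanity check (ours): a spike at step `spike_step` should shift the
# leaky-filtered rate r' = -lambda_d r + lambda_d o by
# e^{-lambda_d (t - t_s)} * lambda_d relative to the spike-free trajectory.

def simulate_rate(lambda_d, dt, n_steps, spike_step=None):
    r, trace = 0.0, []
    for k in range(n_steps):
        o = (1.0 / dt) if k == spike_step else 0.0  # discrete delta
        r += dt * (-lambda_d * r + lambda_d * o)
        trace.append(r)
    return trace

lambda_d, dt, n, spike_step = 2.0, 1e-4, 5000, 1000
with_spike = simulate_rate(lambda_d, dt, n, spike_step)
no_spike = simulate_rate(lambda_d, dt, n)

observed = with_spike[-1] - no_spike[-1]
predicted = math.exp(-lambda_d * (n - 1 - spike_step) * dt) * lambda_d
print(abs(observed - predicted))  # small discretization error
```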

Define \(\tilde{u}(t)\) and \(\tilde{x}(t) = {\text {e}}^{A(t-t_{s})}x(t_{s}) + \int _{t_{s}}^{t}{\text {e}}^{A(t-\tau )}B\tilde{u}(\tau )\mathrm {d}\tau \) as the decoded output and the system state when no spike occurs after time \(t_{s}\). Then, from the relationship between \(u(t)\) and \(r(t)\), \(o(t)\), i.e., Eq. (3), we have

$$\begin{aligned} u(t) ={}&\tilde{u}(t) + \frac{1}{\lambda _{d}}\varGamma {\text {e}}^{-\lambda _{d}(t-t_{s})}\lambda _{d}\bar{e}_{k} + \varOmega _{k}o_{k}(t)\\ ={}&\tilde{u}(t) + {\text {e}}^{-\lambda _{d}(t-t_{s})}\varGamma _{k} + \varOmega _{k} o_{k}(t), \end{aligned}$$

where \(\varGamma _{k}\) is the kth column of \(\varGamma \) while \(\varOmega _{k}\) is the kth column of \(\varOmega \). Similarly, by Eq. (2), we obtain

$$\begin{aligned} x(t) ={}&\tilde{x}(t)+ \int _{t_{s}}^{t}{\text {e}}^{A(t-\tau )}B\left( {\text {e}}^{-\lambda _{d}(\tau -t_{s})}\varGamma _{k} + \varOmega _{k} o_{k}(\tau )\right) \mathrm {d}\tau \\ ={}&\tilde{x}(t) + {\text {e}}^{-\lambda _{d}(t-t_{s})}\left( \int _{t_{s}}^{t}{\text {e}}^{\lambda _{d}(t-\tau )}{\text {e}}^{A(t-\tau )} \mathrm {d}\tau \right) B\varGamma _{k}\\&+ \,\int _{t_{s}}^{t}{\text {e}}^{A(t-\tau )}B\varOmega _{k} o_{k}(\tau )\mathrm {d}\tau \\ ={}&\tilde{x}(t) + {\text {e}}^{-\lambda _{d}(t-t_{s})}\left( \int _{0}^{t-t_{s}}{\text {e}}^{\left( A+\lambda _{d}I\right) \zeta } \mathrm {d}\zeta \right) B\varGamma _{k} \\&+\, {\text {e}}^{A(t-t_{s})}B\varOmega _{k}. \end{aligned}$$

In summary, when the kth neuron fires a new spike at time \(t_{s}\), the firing rate, decoded output and system state undergo the sudden changes

$$\begin{aligned} r(t)&\rightarrow r(t) + h(t-t_{s})\lambda _{d}\bar{e}_{k} \nonumber \\ u(t)&\rightarrow u(t) + h(t-t_{s})\varGamma _{k} + \varOmega _{k} o_{k}(t) \nonumber \\ x(t)&\rightarrow x(t) + h(t-t_{s})H(t-t_{s})B\varGamma _{k} + {\text {e}}^{A(t-t_{s})}B\varOmega _{k}, \end{aligned}$$
(24)

where

$$\begin{aligned} h(t)&= {\text {e}}^{-\lambda _{d}t}\mathbf {1}(t) \\ H(t)&= \int _{0}^{t}{\text {e}}^{\left( A+\lambda _{d}I\right) \zeta } \mathrm {d}\zeta , \end{aligned}$$

where \(\mathbf {1}(t)\) denotes the Heaviside step function. For brevity, from this point onward we will use h and H to denote \(h(\tau -t_{s})\) and \(H(\tau -t_{s})\), respectively.
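The jump expressions in (24) admit a quick numerical check in the scalar case, where \(H(t)\) has the closed form \(({\text {e}}^{(a+\lambda _{d})t}-1)/(a+\lambda _{d})\) for \(a+\lambda _{d}\ne 0\). The following sketch (ours; parameter values are illustrative) integrates the spike-induced input directly and compares it against the closed form:

```python
import math

# Scalar-case check (our sketch, illustrative values) of the state jump in
# Eq. (24) with t_s = 0: the spike adds
#   h(T) H(T) b Gamma_k + e^{aT} b Omega_k
# to x(T), where h(T) = e^{-lambda_d T} and
# H(T) = (e^{(a+lambda_d)T} - 1) / (a + lambda_d).

a, b = -0.8, 1.2
lambda_d = 3.0
Gamma_k, Omega_k = 0.7, 0.4
T, dt = 0.5, 1e-4

# simulate x' = a x + b u with u(tau) = Gamma_k e^{-lambda_d tau} + Omega_k delta(tau)
x = b * Omega_k          # the delta in u lands on x instantaneously
tau = 0.0
for _ in range(int(T / dt)):
    u = Gamma_k * math.exp(-lambda_d * tau)
    x += dt * (a * x + b * u)
    tau += dt

h = math.exp(-lambda_d * T)
H = (math.exp((a + lambda_d) * T) - 1.0) / (a + lambda_d)
closed = h * H * b * Gamma_k + math.exp(a * T) * b * Omega_k
print(abs(x - closed))   # small (Euler discretization error)
```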

With the above equations, the spiking assumption (5) can be translated into

$$\begin{aligned} \int _{t_{o}}^{t_{s}+\epsilon }&\Vert \hat{x} - x - hHB\varGamma _{k} - {\text {e}}^{A(\tau -t_{s})}B\varOmega _{k}\Vert _{2}^{2}\\&+\, \nu \Vert r+h\lambda _{d}\bar{e}_{k}\Vert _{1} + \mu \Vert r + h\lambda _{d}\bar{e}_{k}\Vert _{2}^{2} \mathrm {d}\tau \\ < \int _{t_{o}}^{t_{s}+\epsilon }&\Vert \hat{x}-x\Vert _{2}^{2}+\nu \Vert r(\tau )\Vert _{1}+\mu \Vert r(\tau )\Vert _{2}^{2} \mathrm {d}\tau . \end{aligned}$$

Expanding the \(\ell _1\) and \(\ell _2\) norms, we get

$$\begin{aligned} \int _{t_{o}}^{t_{s}+\epsilon }&-\,2h\varGamma _{k}^\mathrm{T}B^\mathrm{T}H^\mathrm{T}\left( \hat{x}-x\right) + h^{2}\varGamma _{k}^\mathrm{T}B^\mathrm{T}H^\mathrm{T}HB\varGamma _{k}\\&-\, 2\varOmega _{k}^\mathrm{T}B^\mathrm{T}{\text {e}}^{A^\mathrm{T}(\tau -t_{s})}\left( \hat{x}-{x}\right) \\&+\, 2h\varGamma _{k}^\mathrm{T}B^\mathrm{T}H^\mathrm{T}{\text {e}}^{A(\tau -t_{s})}B\varOmega _{k}\\&+\, \varOmega _{k}^\mathrm{T}B^\mathrm{T}{\text {e}}^{A^\mathrm{T}(\tau -t_{s})}{\text {e}}^{A(\tau -t_{s})}B\varOmega _{k} \\&+\, \nu h\lambda _{d} + 2\mu h\lambda _{d}\bar{e}_{k}^\mathrm{T}r + \mu h^{2}\lambda _{d}^{2} \mathrm {d}\tau < 0. \end{aligned}$$

Note that \(h(\tau -t_{s}) = {\text {e}}^{-\lambda _{d}(\tau -t_{s})}\mathbf {1}(\tau -t_{s}) = 0\) for \(\tau < t_{s}\), and that the spike contributions involving \({\text {e}}^{A(\tau -t_{s})}\) likewise vanish for \(\tau < t_{s}\). Rearranging the inequality, we obtain

$$\begin{aligned} \int _{t_{s}}^{t_{s}+\epsilon }&2h\varGamma _{k}^\mathrm{T}B^\mathrm{T}H^\mathrm{T}\left( \hat{x}-{x}\right) + 2\varOmega _{k}^\mathrm{T}B^\mathrm{T}{\text {e}}^{A^\mathrm{T}(\tau -t_{s})}\left( \hat{x}-{x}\right) \\&-\, 2\mu h\lambda _{d}\bar{e}_{k}^\mathrm{T}r \mathrm {d}\tau \\ > \int _{t_{s}}^{t_{s}+\epsilon }&h^{2}\varGamma _{k}^\mathrm{T}B^\mathrm{T}H^\mathrm{T}HB\varGamma _{k} + 2h\varGamma _{k}^\mathrm{T}B^\mathrm{T}H^\mathrm{T}{\text {e}}^{A(\tau -t_{s})}B\varOmega _{k} \\&+\, \varOmega _{k}^\mathrm{T}B^\mathrm{T}{\text {e}}^{A^\mathrm{T}(\tau -t_{s})}{\text {e}}^{A(\tau -t_{s})}B\varOmega _{k}\\&+\, \nu h\lambda _{d} + \mu h^{2}\lambda _{d}^{2} \mathrm {d}\tau . \end{aligned}$$

By considering a short window \(\epsilon \ll 1/\lambda _{d}\) into the future, we can approximate the integrands as constants (using \(h(\tau -t_{s})\approx 1\), \(H(\tau -t_{s})\approx 0\) and \({\text {e}}^{A(\tau -t_{s})}\approx I\) for \(\tau -t_{s} \sim \epsilon \)), so that

$$\begin{aligned} \varOmega _{k}^\mathrm{T}B^\mathrm{T}\left( \hat{x}-{x}\right) - \mu \lambda _{d}\bar{e}_{k}^\mathrm{T}r > \frac{\varOmega _{k}^\mathrm{T}B^\mathrm{T}B\varOmega _{k} + \nu \lambda _{d} + \mu \lambda _{d}^{2}}{2}. \end{aligned}$$

Defining

$$\begin{aligned}&v_{k}(t) \equiv \varOmega _{k}^\mathrm{T}B^\mathrm{T}\left( \hat{x}-{x}\right) - \mu \lambda _{d}\bar{e}_{k}^\mathrm{T}r \\&\bar{v}_{k} \equiv \frac{\varOmega _{k}^\mathrm{T}B^\mathrm{T}B\varOmega _{k} + \nu \lambda _{d} + \mu \lambda _{d}^{2}}{2}, \end{aligned}$$

the spiking rule becomes

$$\begin{aligned} v_{k} > \bar{v}_{k}. \end{aligned}$$

This implies that whenever \(v_{k}(t)\) exceeds \(\bar{v}_{k}\), the kth neuron fires a spike, thereby decreasing the value of the cost function.
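In scalar form, the rule reduces to a simple threshold test on the tracking error. A minimal sketch (ours; all values illustrative):

```python
# Minimal sketch (illustrative values, scalar quantities) of the greedy
# spike test v_k > v_bar_k derived above.

def spike_condition(omega_k, b, x_hat, x, mu, nu, lambda_d, r_k):
    v_k = omega_k * b * (x_hat - x) - mu * lambda_d * r_k
    v_bar = (omega_k * b * b * omega_k + nu * lambda_d + mu * lambda_d ** 2) / 2.0
    return v_k > v_bar, v_k, v_bar

# large tracking error -> spiking improves the cost, so the neuron fires
fire, v, v_bar = spike_condition(omega_k=1.0, b=1.0, x_hat=2.0, x=0.0,
                                 mu=0.01, nu=0.01, lambda_d=2.0, r_k=0.5)
print(fire)  # True
```

When the error is small (e.g. `x_hat=0.1` above), \(v_k\) stays below \(\bar{v}_{k}\) and the neuron remains silent.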

It now remains to deduce the differential form of the dynamics on the latent variable \(v_k(t)\). With \(V = (v_{1},\ldots ,v_{N})\), we can write

$$\begin{aligned} V(t) = \varOmega ^\mathrm{T}B^\mathrm{T}\left( \hat{x}(t)-{x}(t)\right) - \mu \lambda _{d}r(t). \end{aligned}$$
(25)

Let \(e(t) = \hat{x}(t)- x(t)\). Taking the derivative of Eq. (25) gives

$$\begin{aligned} \dot{V}(t) = \varOmega ^\mathrm{T}B^\mathrm{T}\left( \dot{\hat{x}}(t)-\dot{{x}}(t)\right) - \mu \lambda _{d}\dot{r}(t). \end{aligned}$$

Note that \(u(t) = \frac{1}{\lambda _d}\varGamma r(t)+\varOmega o(t)\), and

$$\begin{aligned} \dot{\hat{x}}(t)&= A\hat{x}(t) + c(t) \\ \dot{{x}}(t)&= Ax(t) + Bu(t) = Ax(t) + \frac{1}{\lambda _d}B\varGamma r(t)+B\varOmega o(t) \\ \dot{r}(t)&= -\lambda _d r(t) + \lambda _d o(t), \end{aligned}$$

then,

$$\begin{aligned} \dot{V}(t) ={}&\varOmega ^\mathrm{T}B^\mathrm{T}\left( \dot{\hat{x}}(t)-\dot{{x}}(t)\right) - \mu \lambda _{d}\dot{r}(t) \\ ={}&\varOmega ^\mathrm{T}B^\mathrm{T}\left( A\hat{x}(t) + c(t)\right) \\&-\, \varOmega ^\mathrm{T}B^\mathrm{T}\left( Ax(t)+\frac{1}{\lambda _{d}}B\varGamma r(t)+B\varOmega o(t)\right) \\&-\,\mu \lambda _{d}\left( -\lambda _{d} r(t) + \lambda _{d} o(t)\right) \\ =&\, \varOmega ^\mathrm{T}B^\mathrm{T}Ae(t) + \varOmega ^\mathrm{T}B^\mathrm{T}c(t) \\&+\, \left( -\frac{1}{\lambda _{d}}\varOmega ^\mathrm{T}B^\mathrm{T}B\varGamma + \mu \lambda _{d}^{2}I\right) r(t) \\&-\, \left( \varOmega ^\mathrm{T}B^\mathrm{T}B\varOmega + \mu \lambda _{d}^{2}I\right) o(t). \end{aligned}$$

This last step highlights the core difference between the network dynamics under the control objective and those under the original predictive coding framework. Because \(\dot{\hat{x}}\) and \(\dot{x}\) are governed by the same linear dynamics in our case, the feedback error \((\hat{x} - x)\) can be retained explicitly.

With the definitions in (11) and (12), the voltage differential equation can finally be written as

$$\begin{aligned} \dot{V}(t) = \varOmega ^\mathrm{T}B^\mathrm{T}Ae(t) + \varOmega ^\mathrm{T}B^\mathrm{T}c(t) + W_{1}^{s}r(t) + W_{1}^{f}o(t). \end{aligned}$$
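To illustrate the complete loop, the following sketch (our discretization; all parameter values are illustrative and not taken from the paper) simulates the derived controller for a scalar plant with \(\varGamma = 0\), so that \(u = \varOmega o\) and \(W_{1}^{s} = \mu \lambda _{d}^{2}I\). A population of \(N\) integrate-and-fire-like units drives \(x\) to track the reference \(\hat{x}\):

```python
import random

# Minimal numerical sketch (ours, illustrative parameters) of the derived
# event-based controller for a scalar plant. With Gamma = 0 the control is
# u = Omega o, and we Euler-integrate
#   V' = Omega b (a e + c) + W1s r + W1f o
# together with the spike rule V_k > v_bar_k.

random.seed(0)
N = 20
a, b = -0.5, 1.0                     # plant: x' = a x + b u
lambda_d, mu, nu = 10.0, 1e-4, 1e-4
dt, T = 1e-3, 4.0
c = 1.0                              # reference drive: x_hat' = a x_hat + c
Omega = [0.1 if k < N // 2 else -0.1 for k in range(N)]  # +/- decoders
v_bar = [(Omega[k] ** 2 * b ** 2 + nu * lambda_d + mu * lambda_d ** 2) / 2
         for k in range(N)]
V = [random.uniform(0.0, v_bar[k]) for k in range(N)]  # desynchronize
r = [0.0] * N
x, x_hat = 0.0, 0.0
errs = []
for step in range(int(T / dt)):
    e = x_hat - x
    o = [1.0 / dt if V[k] > v_bar[k] else 0.0 for k in range(N)]  # deltas
    u = sum(Omega[k] * o[k] for k in range(N))  # decoded control signal
    for k in range(N):
        dV = (Omega[k] * b * (a * e + c)        # feedforward error drive
              + mu * lambda_d ** 2 * r[k]        # slow recurrence (Gamma = 0)
              - Omega[k] * b * b * u             # fast recurrence
              - mu * lambda_d ** 2 * o[k])       # self-reset
        V[k] += dt * dV
        r[k] += dt * (-lambda_d * r[k] + lambda_d * o[k])
    x_hat += dt * (a * x_hat + c)
    x += dt * (a * x + b * u)
    errs.append(abs(x_hat - x))

late_err = sum(errs[-1000:]) / 1000
print(late_err)  # settles to the order of the decoder weight |Omega_k|
```

Each spike inhibits like-tuned neurons through the fast recurrent term, so the tracking effort is shared across the population rather than carried by any single unit.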


Cite this article

Huang, F., Ching, S. Spiking networks as efficient distributed controllers. Biol Cybern 113, 179–190 (2019). https://doi.org/10.1007/s00422-018-0769-7
