
Neurocomputing

Volume 292, 31 May 2018, Pages 121-129

A novel time-event-driven algorithm for simulating spiking neural networks based on circular array

https://doi.org/10.1016/j.neucom.2018.02.085

Abstract

The computation of synaptic currents accounts for a major part of the computational cost when simulating a large-scale spiking neural network. Based on the observation that the probability of a neuron receiving at least one spike from any of its synapses during a very small simulation time step is very low, we propose a time-driven algorithm corrected by an event-driven process (a hybrid time-event-driven algorithm) consisting of two computational procedures. In the first procedure of the synaptic current computation, we assume that the neuron in question receives no spike during the simulation time step, and propose a time-driven joint decay method that reduces the computational complexity of the synaptic current. In the second procedure, we account for the spikes the neuron does receive during the time step, and propose an event-driven local correction process that corrects the total synaptic current calculated in the first procedure. We design a circular two-dimensional array data structure for storing both the conductance coefficients associated with presynaptic neurons and the correcting conductances associated with postsynaptic neurons. Furthermore, to perform the local correction process quickly and effectively, we propose a new event-processing method built on this data structure. A theoretical comparison with the traditional time-driven algorithm shows that the proposed time-event-driven algorithm substantially reduces the computational cost of the synaptic current. Simulation results further confirm the efficiency of the proposed algorithm.

Introduction

Biological neural systems consist of billions of neurons, each connected to thousands of others through synapses. Because of the complicated kinetic characteristics of neurons and synapses [1] and the complex topology of neuronal networks, the information processing mechanism [2], [3] of biological neural systems remains an open issue. Computer simulations of biological neural systems using spiking neural network (SNN) models [4], [5], [6], [7] play an increasingly prominent role in exploring this mechanism.

Traditionally, there are two fundamental approaches to simulating SNNs: time-driven (or synchronous) algorithms [8], [9] and event-driven (or asynchronous) algorithms [10], [11], [12]. A time-driven algorithm applies numerical methods (e.g., Euler or Runge–Kutta) to the differential equations of the SNN dynamics, updating the synaptic currents and the state variables of every neuron simultaneously at each time step. Typical simulators such as GENESIS [13] and NEURON [14] use time-driven algorithms. The biggest advantage of the time-driven approach is that it can be applied to any model. However, its computational load is heavy, and it consumes too much time when simulating large-scale SNN models. For example, simulating a fully connected SNN with 2 × 10⁴ neurons requires up to 4 × 10⁸ synaptic state-variable updates but only 2 × 10⁴ neuronal state-variable updates per step. Since the number of synapses is usually much greater than the number of neurons, the simulation time scales approximately linearly with the number of synapses. A further aspect aggravates the computational load: because its time resolution is fixed by the step size, the time-driven algorithm is inexact. To mitigate this, the step size must be chosen sufficiently small, which increases the computational load per unit of simulated biological time. In short, the time-driven algorithm can be applied to any model (especially complex ones) but is very time-consuming.
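The per-step cost pattern described above can be illustrated with a minimal clock-driven loop. This is a hedged sketch, not the paper's implementation: all names and parameter values (tau_m, v_thresh, beta, w, dt) are illustrative assumptions.

```python
# Minimal time-driven (clock-driven) loop for a fully connected network of
# leaky integrate-and-fire neurons: every synaptic variable and every neuron
# is updated at every step, so the work per step is dominated by the O(N^2)
# synaptic term.  All constants here are illustrative, not the paper's.
import numpy as np

def simulate_time_driven(n=100, steps=1000, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    tau_m, v_rest, v_thresh = 10.0, 0.0, 1.0
    w = rng.normal(0.0, 0.02, size=(n, n))   # synaptic weights j -> i
    np.fill_diagonal(w, 0.0)                 # no self-connections
    s = np.zeros(n)                          # conductance coefficient per presynaptic neuron
    v = rng.uniform(0.0, 1.0, size=n)        # membrane potentials
    beta = np.exp(-dt / 5.0)                 # synaptic decay factor per step
    spike_count = 0
    for _ in range(steps):
        s *= beta                            # decay all synaptic conductances
        i_syn = w @ s                        # total synaptic currents: the O(N^2) term
        v += dt * (-(v - v_rest) / tau_m) + i_syn
        fired = v >= v_thresh
        spike_count += int(fired.sum())
        s[fired] += 1.0                      # fired neurons drive their outgoing synapses
        v[fired] = v_rest                    # reset after firing
    return spike_count

print(simulate_time_driven())
```

Shrinking dt improves accuracy but multiplies the number of steps, which is exactly the accuracy/cost trade-off noted above.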

In an event-driven algorithm, the simulation advances from one event to the next. Because the simulated time (often called the exact time) is advanced by computing the state variables only at the instants when events occur, event-driven algorithms are more accurate and drastically reduce the computational load compared with time-driven algorithms. Their biggest disadvantage is that they are limited to relatively simple models, even though many researchers [15] have recently extended the class of tractable models with various techniques.
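The jump-to-next-event idea can be sketched with a priority queue of pending spikes. The toy network below (a feed-forward chain with a fixed delay) and all names in it are illustrative assumptions, chosen only to show how simulated time advances from event to event rather than in fixed steps.

```python
# Minimal event-driven loop: simulated time jumps from one spike event to
# the next via a priority queue (min-heap ordered by event time).  The toy
# network is a chain where each spike triggers the next neuron after a
# fixed delay; names and dynamics are illustrative assumptions.
import heapq

def simulate_event_driven(n=5, t_end=10.0, delay=1.0):
    events = [(0.0, 0)]              # (time, neuron): neuron 0 fires at t = 0
    fired = []
    while events:
        t, i = heapq.heappop(events)  # advance directly to the next event
        if t > t_end:
            break
        fired.append((round(t, 6), i))
        nxt = i + 1
        if nxt < n:                   # propagate to the next neuron in the chain
            heapq.heappush(events, (t + delay, nxt))
    return fired

print(simulate_event_driven())
# each neuron fires once, one delay apart:
# [(0.0, 0), (1.0, 1), (2.0, 2), (3.0, 3), (4.0, 4)]
```

No computation happens between events, which is where the cost reduction over a fixed-step loop comes from, but it requires the state between events to be known in closed form.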

In this paper, we propose a time-driven algorithm corrected by an event-driven process (a hybrid time-event-driven algorithm), consisting of two computational procedures, to overcome the heavy computational load of the time-driven approach. In the first procedure of the synaptic current computation, we assume that the neuron in question receives no spike during the simulation time step, and propose a time-driven joint decay method that reduces the computational complexity of the synaptic current. In the second procedure, we account for the spikes the neuron does receive during the time step, and propose an event-driven local correction process that corrects the total input synaptic current calculated in the first procedure. Specifically, the proposed time-event-driven algorithm takes the following three steps, where steps (1) and (2) constitute the first procedure of the synaptic current computation and step (3) is the second procedure:

  • (1)

    In the traditional time-driven algorithm, computing the synaptic currents consists of three parts at each step: the conductance coefficient of each synapse, the input synaptic current of each synapse, and the total synaptic current of each postsynaptic neuron, obtained by summing all of its input synaptic currents. By analyzing the synaptic dynamics, we find that the first part depends only on the presynaptic neuron and is independent of the postsynaptic neuron. We therefore separate the computation of the conductance coefficient from that of the synaptic current, which effectively reduces the computational load by avoiding repeated computation of the same synaptic conductance coefficient.

  • (2)

    To make a time-driven simulation of a biological neural network accurate, the time step must be extremely small. Within each such step, the probability that any given neuron in the network emits a spike is very low. Therefore, for each postsynaptic neuron, the common case is that no action potential (AP) arrives at any of its synapses. In this case, analysis of the synaptic current computation shows that the second and third parts mentioned in (1) can be simplified into a single computation, called the joint decay process, which avoids updating the synaptic conductances one by one. The joint decay idea proposed in this paper thus contributes to reducing the computational load of the synaptic current computation.

  • (3)

    When a spike does occur (called an event), we propose a local correction process that corrects the total input synaptic current after the computation described in steps (1) and (2). By analyzing the event transmission mechanism, we design a circular two-dimensional array data structure for storing both the conductance coefficients associated with presynaptic neurons and the correcting conductances of each synapse associated with postsynaptic neurons. To perform the local correction quickly and effectively, we propose a new event-processing method based on this data structure, and thereby obtain an efficient time-event-driven algorithm for simulating biological neural networks.
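The circular array idea in step (3) can be sketched as a ring of per-step slots holding the correcting conductances that fall due at each future step. The exact layout used in the paper may differ; the class name, fields, and API below are illustrative assumptions showing only the principle of indexing future events modulo the maximum delay.

```python
# Hedged sketch of a circular two-dimensional buffer for delayed spike
# effects: rows form a ring indexed by (step mod max_delay), columns are
# postsynaptic neurons, entries are correcting conductances due at that step.
import numpy as np

class CircularSpikeBuffer:
    def __init__(self, n_neurons, max_delay_steps):
        self.d = max_delay_steps
        self.buf = np.zeros((max_delay_steps, n_neurons))  # ring of slots
        self.step = 0

    def schedule(self, target, delay_steps, conductance):
        """Record that `target` receives `conductance` in `delay_steps` steps."""
        assert 0 < delay_steps <= self.d
        self.buf[(self.step + delay_steps - 1) % self.d, target] += conductance

    def advance(self):
        """Pop the slot for the current step and move the ring forward."""
        row = self.buf[self.step % self.d].copy()
        self.buf[self.step % self.d] = 0.0     # slot is recycled for step + d
        self.step += 1
        return row

buf = CircularSpikeBuffer(n_neurons=3, max_delay_steps=4)
buf.schedule(target=1, delay_steps=2, conductance=0.5)
print(buf.advance())   # step 0: nothing due yet
print(buf.advance())   # step 1: the scheduled 0.5 arrives at neuron 1
```

Because slots are reused modulo the maximum delay, the buffer's memory footprint is bounded regardless of how long the simulation runs.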

We compare the performance of the proposed time-event-driven algorithm with the traditional algorithm theoretically, and implement it to simulate a fully connected SNN with IAF neurons and conductance-based synapses. The theoretical and simulation results show that the proposed algorithm greatly reduces the computational load and outperforms the traditional algorithm.

The rest of the paper is organized as follows. Section 2 outlines the SNN model and analyzes its computational load; Section 3 analyzes the computation of the synaptic current, separating the synaptic conductance computation from the postsynaptic current computation; Section 4 proposes the joint decay process for the case of no afferent spikes; Section 5 proposes the local correction process, built on the joint decay process, for the case of afferent spikes; Section 6 proposes a new event-processing method that realizes the local correction process quickly and effectively on the circular two-dimensional array data structure, and assembles the new time-event-driven algorithm. Finally, Section 7 theoretically analyzes the performance of the new algorithm and demonstrates its efficiency by simulations.


Spiking neural network model

In this paper, we focus on the simulation of spiking neural network (SNN) models [7], which are very useful and popular for exploring the information processing mechanisms of neural systems [2], [3]. For clarity, we use Integrate-and-Fire (IAF) neurons [2], [15] and one kind of conductance synapse model to describe our methods, but we must point out that the new algorithm proposed in this paper can be applied to any other network structure, spiking neuron model, and conductance synapse model.
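A single IAF update step can be sketched as follows. The paper's exact model equations appear in Section 2; the parameter values and names here (tau_m, v_rest, v_thresh, dt) are illustrative assumptions, not the paper's constants.

```python
# Minimal leaky integrate-and-fire (IAF) update: integrate the membrane
# potential with forward Euler, fire and reset when it crosses threshold.
# All parameter values are illustrative.
def iaf_step(v, i_syn, dt=0.1, tau_m=10.0, v_rest=0.0, v_thresh=1.0):
    """One Euler step; returns (new_potential, spiked?)."""
    v = v + dt * (-(v - v_rest) / tau_m + i_syn)
    if v >= v_thresh:
        return v_rest, True      # fire and reset to resting potential
    return v, False

v, spiked = 0.0, False
for _ in range(100):             # constant supra-threshold drive
    v, spiked = iaf_step(v, i_syn=0.5)
    if spiked:
        break
print(spiked)   # True: constant input eventually drives v past threshold
```

The synaptic current i_syn is where almost all of the cost lies in a full network, which is the focus of the following sections.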

The computation of synaptic current by separating the computing of synaptic conductance coefficient from postsynaptic current

We can see from Eqs. (2)–(4) that to update the state (membrane potential) of each neuron, we must first calculate the synaptic conductance coefficient of each synapse connected to that neuron, and then calculate the synaptic current associated with that synapse. This results in the calculation of altogether N·(N − 1) synaptic conductance coefficients to update the states of all neurons. To reduce the computational load of computing the synaptic conductance coefficients, we first calculate and
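The payoff of the separation can be checked numerically: because the conductance coefficient depends only on the presynaptic neuron, one coefficient per neuron (N values) reproduces the same totals as one per synapse (N·(N − 1) values). The names below (w, s, beta) are illustrative assumptions, not the paper's notation.

```python
# Sketch of separating the conductance coefficient from the synaptic
# current: storing one coefficient per presynaptic neuron gives the same
# totals as storing one per synapse, with O(N) instead of O(N^2) state.
import numpy as np

def total_currents_naive(w, s_per_synapse):
    # traditional: a coefficient stored and updated for every synapse
    return (w * s_per_synapse).sum(axis=1)

def total_currents_separated(w, s_per_neuron):
    # separated: one coefficient per presynaptic neuron; the per-synapse
    # scaling is folded into the weight matrix
    return w @ s_per_neuron

rng = np.random.default_rng(1)
n = 4
w = rng.normal(size=(n, n))
s = rng.uniform(size=n)                    # one coefficient per presynaptic neuron
same = np.allclose(total_currents_naive(w, np.tile(s, (n, 1))),
                   total_currents_separated(w, s))
print(same)   # True: identical totals, far less stored state
```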

Joint decay process in case of no afferent spike

To simulate a biological neural network accurately, the time step of a time-driven algorithm should be very small. During each such short time step, the probability that a neuron emits a spike is rather low. For any postsynaptic neuron i, it is thus highly probable that no action potential (AP) arrives at any of its synapses. If there are no afferent spikes for neuron i (the case with afferent spikes is considered in the next section), we find that computing and
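The joint decay observation can be verified in a few lines: when no spike arrives, every component conductance decays by the same factor β, so the total current can be decayed with one multiplication instead of decaying each of the N − 1 components separately. The values below are illustrative assumptions.

```python
# Sketch of the joint decay process: with no afferent spikes in [t1, t2],
# decaying the total current once equals decaying every component and
# re-summing.  Weights and coefficients are illustrative.
import numpy as np

beta = 0.95
w = np.array([0.2, -0.1, 0.4])           # weights of synapses onto neuron i
s = np.array([1.0, 2.0, 0.5])            # per-synapse conductance coefficients

i_t1 = float(w @ s)                      # total current at t1
i_componentwise = float(w @ (beta * s))  # decay each component, then sum: O(N)
i_joint = beta * i_t1                    # joint decay: one multiplication, O(1)

print(abs(i_componentwise - i_joint) < 1e-12)   # True: identical result
```

This works because the decay factor is common to all components, so it distributes over the sum.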

The event-driven local correction process in case of afferent spikes

In the previous section, we supposed that there were no afferent spikes for any neuron. In reality, however, a neuron may receive afferent spikes during a simulation time step, albeit with low probability.

In fact, according to Eqs. (8)–(11), Ii(t2), derived by the joint decay of Ii(t1), comes from the decay of each component conductance coefficient, i.e., the decay of the conductance coefficients sj(t2 − dij) (j = 1, 2, …, N and j ≠ i) with the decay factor β for each synapse j → i, under the assumption
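The local correction can be sketched as follows: the joint decay assumed no spikes, so when a synapse j → i did deliver one, only that synapse's contribution to the total current needs to be corrected. All names and values below are illustrative assumptions, not the paper's notation.

```python
# Sketch of the event-driven local correction: after the joint decay, add
# the contribution of the one synapse that actually received a spike, and
# compare against a full from-scratch recomputation.
import numpy as np

beta = 0.95
w = np.array([0.2, -0.1, 0.4])           # weights of synapses onto neuron i
s = np.array([1.0, 2.0, 0.5])            # per-synapse conductance coefficients

i_joint = beta * float(w @ s)            # step 1: joint decay (assumes no spikes)

# step 2: presynaptic neuron j = 1 actually spiked, raising its
# conductance coefficient by delta after the decay
j, delta = 1, 1.0
i_corrected = i_joint + w[j] * delta     # local correction touches one synapse only

# reference: recompute everything from scratch with the spike included
s_true = beta * s
s_true[j] += delta
print(abs(i_corrected - float(w @ s_true)) < 1e-12)   # True
```

The correction cost scales with the number of arriving spikes, not with the number of synapses, which is why it is cheap when the step size is small.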

The hybrid time-event-driven algorithm

According to the analysis in Sections 4 and 5, the calculation of the synaptic current at time step t2 comprises two steps. In the first step, the synaptic current of each neuron is calculated according to Eq. (11) under the assumption that each neuron receives no spikes during the time interval [t1, t2]. The synaptic current of neuron i calculated in this stage is called Ii(t2). The first step can be realized by a time-driven algorithm, since the calculation of the synaptic current can be
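The two-step structure per time step can be sketched compactly. This is a hedged outline of the hybrid loop's shape, not the paper's implementation; the function signature and the form of the correction events are illustrative assumptions.

```python
# Hedged sketch of one hybrid step: (1) time-driven joint decay of every
# neuron's total current, then (2) event-driven local correction of only
# the neurons with spikes due this step.
import numpy as np

def hybrid_step(i_total, beta, corrections):
    """corrections: list of (neuron, weighted_conductance) events due now."""
    i_total = beta * i_total             # time-driven part: joint decay, O(N)
    for neuron, wg in corrections:       # event-driven part: O(#events), not O(N^2)
        i_total[neuron] += wg
    return i_total

i_total = np.array([1.0, 0.5, 2.0])
i_total = hybrid_step(i_total, beta=0.9, corrections=[(2, 0.3)])
print(i_total)   # neuron 2 decayed and then corrected; others only decayed
```

In a full simulation, the corrections list would be read from the circular array slot for the current step.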

The theoretical performance analysis

In the following, we analyze the time required to simulate one second of biological time for a fully connected SNN with N neurons, N − 1 synapses per neuron, and average firing rate F [15]. In the traditional time-driven algorithm (see Fig. 1), the total computational cost per time step, Ctotal, is of order: Ctotal = Cv·N + (Cs + CI + CΣI)·N·(N − 1), where Cv is the cost of updating each Vi(t) and grows with the complexity of the neuron model, Cs is the cost of updating each sij(t), CI is the cost of computing
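The scaling argument can be made concrete with a back-of-the-envelope comparison: the traditional per-step cost grows with the number of synapses N·(N − 1), while the hybrid scheme's correction work grows with the expected number of spike events per step. The unit costs and the hybrid cost model below are illustrative assumptions, not the paper's exact formulas.

```python
# Rough cost comparison implied by the analysis: traditional cost scales
# with N*(N-1) synapses per step; the hybrid scheme pays O(N) for joint
# decay plus correction work proportional to the expected spikes per step.
def cost_traditional(n, c_v=1.0, c_syn=3.0):
    return c_v * n + c_syn * n * (n - 1)

def cost_hybrid(n, f_hz, dt_s, c_v=1.0, c_decay=1.0, c_corr=3.0):
    expected_events = f_hz * dt_s * n          # spikes per step, network-wide
    return c_v * n + c_decay * n + c_corr * expected_events * (n - 1)

n, f, dt = 20000, 5.0, 1e-4                    # 2e4 neurons, 5 Hz, 0.1 ms step
ratio = cost_traditional(n) / cost_hybrid(n, f, dt)
print(ratio > 1)   # True: the hybrid scheme is cheaper whenever F*dt << 1
```

The smaller the step (or the sparser the firing), the larger the advantage, since fewer spikes fall in each step.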

Conclusion and discussion

Reduction of the computational load of the traditional time-driven algorithm is a crucial problem for simulating a large-scale spiking neural network efficiently. We proposed a new hybrid time-event-driven algorithm in this paper to deal with this problem.

First, the computation of the synaptic conductance coefficient is separated from that of the postsynaptic current, and a common synaptic conductance coefficient is computed for all synapses originating from the same presynaptic neuron. We separate the computation

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 11572084, 11472061, 71371046), the Fundamental Research Funds for the Central Universities and DHU Distinguished Young Professor Program (No. 17D210402).

Xia Peng received her B.S. degree from Hunan Normal University, Changsha, China, in 2004, and M.S. degree from National University of Defense Technology, Changsha, China, in 2006. Since September 2012, she has been a Ph.D. candidate in information science and technology at Donghua University, Shanghai, China. Her research interests include computational neuroscience and neural network.

References (31)

  • R.R. Carrillo et al., Event-driven simulation of neural population synchronization facilitated by electrical coupling, Biosystems (2007)
  • W. Pugh, Skip lists: a probabilistic alternative to balanced trees, Proceedings of the Algorithms and Data Structures Workshop WADS ’89, August 17–19 (1989)
  • H. Wagatsuma et al., Neural dynamics of the cognitive map in the hippocampus, Cognit. Neurodyn. (2007)
  • P. Dayan et al., Theoretical Neuroscience (2001)
  • S.A. Neymotin et al., Emergence of physiological oscillation frequencies in a computer model of neocortex, Front. Comput. Neurosci. (2011)
  • C. Boucheny et al., Real-Time Spiking Neural Network: An Adaptive Cerebellar Model (2005)
  • R.R. Carrillo et al., Lookup table powered neural event-driven simulator, Proceedings of the International Conference on Artificial Neural Networks: Computational Intelligence and Bioinspired Systems (2005)
  • H. Paugam-Moisy et al., Computing with Spiking Neuron Networks (2012)
  • A. Hanuschkin et al., A general and efficient method for incorporating precise spike times in globally time-driven simulations, Front. Neuroinf. (2010)
  • A. Morrison et al., Exact subthreshold integration with continuous spike times in discrete-time neural network simulations, Neural Comput. (2007)
  • R. Brette, Exact simulation of integrate-and-fire models with exponential currents, Neural Comput. (2007)
  • R. Brette et al., Simulation of networks of spiking neurons: a review of tools and strategies, J. Comput. Neurosci. (2007)
  • M. D’Haene et al., Accelerating event-driven simulation of spiking neurons with multiple synaptic time constants, Neural Comput. (2009)
  • J.M. Bower et al., The Book of GENESIS: Exploring Realistic Neural Models with the General Neural Simulation System (1995)
  • M.L. Hines et al., The NEURON simulation environment (1997)


    Zhijie Wang received his B.S., M.S., and Ph.D. degrees from the College of Information Science and Technology, Donghua University, Shanghai, China, in 1991, 1994, and 1997, respectively. He did his Post Doc at the University of Tokyo, Tokyo, Japan in 2002. He is currently a Professor with the College of Information Science and Technology, Donghua University, Shanghai, China. His main research interests are computational neuroscience, neural networks, and deep learning.

    Fang Han received her B.S. and M.S. degrees from Beijing Jiaotong University, Beijing, China, in 2003 and 2006, respectively, and her Ph.D. degree from Beihang University, Beijing, China, in 2009. She visited Aberdeen University, Aberdeen, UK for one year as a visiting Ph.D. student in 2008, and New York University, New York, USA for one year as a visiting scholar in 2016. She is currently an Associate Professor with the College of Information Science and Technology, Donghua University, Shanghai, China. Her main research interests are computational neuroscience and deep learning.

    Guangxiao Song received the M.S. degree in Computer Science and Technology from Yangtze University, China, in 2016. Since September 2016, he has been a PhD candidate in information science and technology at Donghua University, China. His research interests include deep learning and music information retrieval.

    Shenyi Ding received the B.S. degree from the College of Information Science and Technology, Donghua University. Since September 2015, he has been enrolled in a successive postgraduate and doctoral program in information science and technology at Donghua University, China. His research interests include deep learning, and dynamic system modeling and control.
