
The maximum points-based supervised learning rule for spiking neural networks

  • Methodologies and Application
  • Published in Soft Computing

Abstract

As the third generation of neural networks, Spiking Neural Networks (SNNs) have achieved great success in pattern recognition. However, existing training methods for SNNs are not efficient enough because of the temporal encoding mechanism. To improve the training efficiency of supervised SNNs while preserving useful temporal information, the Maximum Points-based Supervised Learning Rule (MPSLR) is proposed in this paper. MPSLR adopts three training strategies to improve learning performance. Firstly, only the target points and maximum voltage points are trained. Theoretical analysis shows that the maximum points are effective for controlling the voltage at the non-target points, and that the analytic solutions for all maximum voltage points can be obtained in parallel. This improves training efficiency significantly by avoiding sequential voltage detection. Secondly, the weight modification for each presynaptic neuron is normalized by a rate function to resize the output scale. Thirdly, the spiking rates accumulated in a time window are utilized to incorporate more useful knowledge. Extensive experiments on both synthetic data and four real-world UCI datasets demonstrate that our algorithm achieves significantly better performance and higher efficiency than traditional methods in various situations, including different multi-spike rates and time lengths. It is also more stable under hyper-parameter variations.
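The first strategy can be illustrated with a toy sketch: detect the local maxima of a LIF-style membrane voltage and apply perceptron-like updates only at target times and at erroneous maxima. Note this is not the paper's actual derivation; the exponential PSP kernel, threshold `theta`, learning rate, and the exact update rule below are all illustrative assumptions.

```python
import numpy as np

# Toy "maximum points-based" update for a single LIF-like neuron.
# Kernel shape, threshold, and update rule are assumptions, not the
# paper's exact formulation.

def psp_kernel(t, tau=5.0):
    """Simple exponential postsynaptic-potential kernel (0 for t < 0)."""
    return np.where(t >= 0, np.exp(-t / tau), 0.0)

def membrane_potential(weights, input_spikes, t_grid):
    """Voltage as a weighted sum of PSPs from each presynaptic neuron."""
    v = np.zeros_like(t_grid)
    for w, spikes in zip(weights, input_spikes):
        for s in spikes:
            v += w * psp_kernel(t_grid - s)
    return v

def local_maxima(v):
    """Indices of local maxima of the voltage trace."""
    return [i for i in range(1, len(v) - 1) if v[i - 1] < v[i] >= v[i + 1]]

def mpslr_like_update(weights, input_spikes, t_grid, target_times,
                      theta=1.0, lr=0.05):
    """One perceptron-style pass: potentiate at target times where the
    voltage is below threshold, depress at non-target maxima above it."""
    v = membrane_potential(weights, input_spikes, t_grid)
    w = weights.copy()
    for t_d in target_times:              # push the voltage up at target times
        i = int(np.argmin(np.abs(t_grid - t_d)))
        if v[i] < theta:
            for j, spikes in enumerate(input_spikes):
                w[j] += lr * sum(psp_kernel(t_d - s) for s in spikes)
    for i in local_maxima(v):             # push down only erroneous maxima
        t_m = t_grid[i]
        if v[i] >= theta and all(abs(t_m - t_d) > 1.0 for t_d in target_times):
            for j, spikes in enumerate(input_spikes):
                w[j] -= lr * sum(psp_kernel(t_m - s) for s in spikes)
    return w
```

Because only the target points and the (analytically locatable) voltage maxima are visited, no time step other than those points triggers a weight change, which is the efficiency argument the abstract makes.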


References

  • Abualigah LM, Khader AT (2017) Unsupervised text feature selection technique based on hybrid particle swarm optimization algorithm with genetic operators for the text clustering. J Supercomput 73(11):4773–4795

  • Abualigah LM, Khader AT, Al-Betar MA (2016) Unsupervised feature selection technique based on genetic algorithm for improving the Text Clustering. In: 2016 7th international conference on computer science and information technology (CSIT). IEEE, pp 1–6

  • Alomari OA, Khader AT, Al-Betar MA, Abualigah LM (2017) Gene selection for cancer classification by combining minimum redundancy maximum relevancy and bat-inspired algorithm. Int J Data Min Bioinform 19(1):32–51

  • Bache K, Lichman M (2013) UCI repository. Irvine, CA: University of California, School of Information and Computer Science. http://archive.ics.uci.edu/ml

  • Bansal S (2014) Optimal Golomb ruler sequence generation for FWM crosstalk elimination: soft computing versus conventional approaches. Appl Soft Comput 22:443–457

  • Bansal S, Gupta N, Singh AK (2017a) Nature-inspired metaheuristic algorithms to find near-OGR sequences for WDM channel allocation and their performance comparison. Open Math 15(1):520–547

  • Bansal S, Singh AK, Gupta N (2017b) Optimal Golomb Ruler sequences generation for optical WDM systems: a novel parallel hybrid multi-objective Bat algorithm. J Inst Eng (India) Ser B 98(1):43–64

  • Belatreche A, Maguire LP, McGinnity M (2006) Evolutionary design of spiking neural networks. New Math Nat Comput 2(03):237–253

  • Benchenane K, Peyrache A, Khamassi M et al (2010) Coherent theta oscillations and reorganization of spike timing in the hippocampal-prefrontal network upon learning. Neuron 66(6):921–936

  • Bohte SM (2004) The evidence for neural information processing with precise spike-times: a survey. Nat Comput 3(2):195–206

  • Bohte SM, Kok JN, La Poutre H (2002) Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48(1):17–37

  • Burbank KS (2015) Mirrored STDP implements autoencoder learning in a network of spiking neurons. PLoS Comput Biol 11(12):e1004566

  • Costa AA, Amon MJ et al (2018) Fractal analyses of networks of integrate-and-fire stochastic spiking neurons. In: International workshop on complex networks, pp 161–171

  • De Berredo RC (2005) A review of spiking neuron models and applications. M. Sc. Dissertation, University of Minas Gerais

  • Dora S, Sundaram S, Sundararajan N (2015) A two stage learning algorithm for a Growing-Pruning Spiking Neural Network for pattern classification problems. In: 2015 international joint conference on neural networks (IJCNN), pp 1–7

  • Dora S, Subramanian K, Suresh S et al (2016) Development of a self-regulating evolving spiking neural network for classification problem. Neurocomputing 171:1216–1229

  • Florian RV (2012) The chronotron: a neuron that learns to fire temporally precise spike patterns. Plos ONE 7(8):e40233

  • Gerstner W, Kistler WM (2002) Spiking neuron models: single neurons, populations, plasticity. Cambridge University Press, Cambridge

  • Ghosh-Dastidar S, Adeli H (2009) A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection. Neural Netw 22(10):1419–1431

  • Gütig R, Sompolinsky H (2006) The tempotron: a neuron that learns spike timing-based decisions. Nat Neurosci 9(3):420–428

  • Gütig R, Aharonov R, Rotter S (2003) Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J Neurosci 23(9):3697–3714

  • Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol 160(1):106–154

  • Hubel DH, Wiesel TN (1968) Receptive fields and functional architecture of monkey striate cortex. J Physiol 195(1):215–243

  • Kasabov NK, Doborjeh MG, Doborjeh ZG (2017) Mapping, learning, visualization, classification, and understanding of fMRI Data in the NeuCube evolving spatiotemporal data machine of spiking neural networks. IEEE Trans Neural Netw Learn Syst 28(4):887–899

  • Kim C, Chow C (2018) Learning recurrent dynamics in spiking networks. arXiv preprint arXiv:1803.06622

  • Liu G, Qiu Z, Qu H (2015a) Computing k shortest paths from a source node to each other node. Soft Comput 19(8):2391–2402

  • Liu G, Qiu Z, Qu H (2015b) Computing k shortest paths using modified pulse-coupled neural network. Neurocomputing 149:1162–1176

  • Markowska-Kaczmar U, Koldowski M (2015) Spiking neural network vs multilayer perceptron: who is the winner in the racing car computer game. Soft Comput 19(12):3465–3478

  • Masquelier T, Guyonneau R, Thorpe SJ (2009) Competitive STDP-based spike pattern learning. Neural Comput 21(5):1259–1276

  • McKennoch S, Liu D, Bushnell LG (2006) Fast modifications of the spikeprop algorithm. In: International joint conference on neural networks, IJCNN’06, pp 3970–3977

  • Mehta MR, Lee AK, Wilson MA (2002) Role of experience and oscillations in transforming a rate code into a temporal code. Nature 417(6890):741–746

  • Mohemmed A, Schliebs S, Matsuda S (2012) Span: spike pattern association neuron for learning spatio-temporal spike patterns. Int J Neural Syst 22(04):1250012

  • Morro A, Canals V, Oliver A et al (2017) A stochastic spiking neural network for virtual screening. IEEE Trans Neural Netw Learn Syst 29:1371–1375

  • Motieghader H, Najafi A, Sadeghi B, Masoudi-Nejad A (2017) A hybrid gene selection algorithm for microarray cancer classification using genetic algorithm and learning automata. J Theor Appl Inf Technol 95(12):246–254

  • Nicola W, Clopath C (2017) Supervised learning in spiking neural networks with FORCE training. Nat Commun 8(1):2208

  • Panda P, Roy K (2016) Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In: 2016 International Joint Conference on Neural Networks (IJCNN), pp 299–306

  • Ponulak F, Kasiński A (2010a) Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Comput 22(2):467–510

  • Ponulak F, Kasinski A (2010b) Introduction to spiking neural networks: information processing, learning and applications. Acta Neurobiol Exp 71(4):409–433

  • Qu H, Xie X, Liu Y et al (2015) Improved perception-based spiking neuron learning rule for real-time user authentication. Neurocomputing 151:310–318

  • Schreiber S, Fellous JM, Whitmer D (2003) A new correlation-based measure of spike timing reliability. Neurocomputing 52:925–931

  • Snippe HP (1996) Parameter extraction from population codes: a critical assessment. Neural Comput 8(3):511–529

  • Sporea I, Grüning A (2013) Supervised learning in multilayer spiking neural networks. Neural Comput 25(2):473–509

  • Taherkhani A, Belatreche A, Li Y (2015) DL-ReSuMe: a delay learning-based remote supervised method for spiking neurons. IEEE Trans Neural Netw Learn Syst 26(12):3137–3149

  • Thorpe SJ, Imbert M (1989) Biological constraints on connectionist modelling. In: Pfeifer R, Schreter, Fogelman-Soulie F, Steels L (eds) Connectionism in perspective. North-Holland/Elsevier Science, Amsterdam, pp 63–92

  • van Rossum MC (2001) A novel spike distance. Neural Comput 13(4):751–763

  • VanRullen R, Guyonneau R, Thorpe SJ (2005) Spike times make sense. Trends Neurosci 28(1):1–4

  • Victor JD, Purpura KP (1997) Metric-space analysis of spike trains: theory, algorithms and application. Netw Comput Neural Syst 8(2):127–164

  • Wade JJ, McDaid LJ, Santos JA (2010) SWAT: a spiking neural network training algorithm for classification problems. IEEE Trans Neural Netw 21(11):1817–1830

  • Wu QX, McGinnity TM, Maguire LP et al (2006) Learning under weight constraints in networks of temporal encoding spiking neurons. Neurocomputing 69(16):1912–1922

  • Xie X, Qu H, Liu G (2016) An efficient supervised training algorithm for multilayer spiking neural networks. PloS ONE 11(4):e0150329

  • Xie X, Qu H, Liu G (2017) Efficient training of supervised spiking neural networks via the normalized perceptron based learning rule. Neurocomputing 241:152–163

  • Xie X, Qu H, Yi Z (2017) Efficient training of supervised spiking neural network via accurate synaptic-efficiency adjustment method. IEEE Trans Neural Netw Learn Syst 28(6):1411–1424

  • Xu Y, Zeng X, Zhong S (2013a) A new supervised learning algorithm for spiking neurons. Neural Comput 25(6):1472–1511

  • Xu Y, Zeng X, Han L (2013b) A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks. Neural Netw 43:99–113

  • Yu Q, Yan R, Tang H (2016) A spiking neural network system for robust sequence recognition. IEEE Trans Neural Netw Learn Syst 27(3):621–635


Acknowledgements

This work was supported by the China Postdoctoral Science Foundation (Grant No. 2018M633348), the National Natural Science Foundation of China (Grant Nos. 61806040 and 61573081), and the fund from the Department of Science and Technology of Sichuan Province (Grant Nos. 2016FZ0108, 2017JY007 and 2016GZ0075).

Author information

Corresponding author

Correspondence to Guisong Liu.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Communicated by V. Loia.


Appendix

A Error measures

To evaluate the training accuracy, the correlation-based measure C (Schreiber et al. 2003) is employed in our simulations to quantify the similarity between the target and actual output spike trains. It is calculated by

$$\begin{aligned} C = \frac{\varvec{v_{d}} \varvec{\cdot } \varvec{v_{o}}}{|\varvec{v_{d}}||\varvec{v_{o}}|} \end{aligned}$$
(39)

in which \(\varvec{v_{d}}\) and \(\varvec{v_{o}}\) are vectors obtained by the convolution of the target and actual output spike trains using a Gaussian filter:

$$\begin{aligned} g_i(t)=\sum _{m=1}^{N_i}G_{\sigma /\sqrt{2}}(t-t_m^i) \end{aligned}$$
(40)

where \( G_{\sigma /\sqrt{2}}(t)=\exp [-t^2/\sigma ^2]\) is a Gaussian kernel, \(N_i\) is the number of spikes in train \(i\), and \(t_m^i\) is its \(m\)th spike time. \(\varvec{v_d\cdot v_o}\) is the inner product, and \(| \varvec{v_d} |\), \(| \varvec{v_o} |\) are the Euclidean norms of \(\varvec{v_d}\) and \(\varvec{v_o}\), respectively. The standard deviations at the target and actual output spike times are set to \(\sigma _d=\sigma _o=1\) in our study.
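The measure C above can be computed directly by sampling the filtered spike trains on a time grid. The following sketch implements Eqs. (39)–(40) under stated assumptions: the grid length `t_max`, step `dt`, and the convention that C is 0 when either train is empty are our choices, not specified in the text.

```python
import numpy as np

def gaussian_filtered(spikes, t_grid, sigma=1.0):
    """Filtered trace g(t) = sum_m exp(-(t - t_m)^2 / sigma^2), Eq. (40),
    sampled on t_grid for a spike train given as a list of spike times."""
    spikes = np.asarray(spikes, dtype=float)
    # (len(t_grid), len(spikes)) matrix of kernel values, summed over spikes
    return np.exp(-((t_grid[:, None] - spikes[None, :]) ** 2) / sigma ** 2).sum(axis=1)

def correlation_c(target_spikes, output_spikes, t_max=100.0, dt=0.1, sigma=1.0):
    """Correlation-based similarity C of Eq. (39): the cosine of the angle
    between the filtered target and output traces (1 = identical timing)."""
    t_grid = np.arange(0.0, t_max, dt)
    v_d = gaussian_filtered(target_spikes, t_grid, sigma)
    v_o = gaussian_filtered(output_spikes, t_grid, sigma)
    denom = np.linalg.norm(v_d) * np.linalg.norm(v_o)
    return float(np.dot(v_d, v_o) / denom) if denom > 0 else 0.0
```

With σ = 1 as in our study, identical spike trains give C = 1, while trains whose spikes are many σ apart give C near 0.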

About this article

Cite this article

Xie, X., Liu, G., Cai, Q. et al. The maximum points-based supervised learning rule for spiking neural networks. Soft Comput 23, 10187–10198 (2019). https://doi.org/10.1007/s00500-018-3576-0

