Improving the Efficiency of Spiking Network Learning

  • Conference paper
Neural Information Processing (ICONIP 2013)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8227)

Abstract

Spiking networks are third-generation artificial neural networks with a higher level of biological realism. This realism comes at the cost of extra computation, which, alongside their complexity, makes them impractical for general machine-learning applications. We propose that for some problems, spiking networks can actually be more efficient than second-generation networks. This paper presents several enhancements to the supervised learning algorithm SpikeProp, including reduced precision, fewer subconnections, a lookup table and event-driven computation. The CPU time required by our new algorithm SpikeProp+ was measured and compared to that of multilayer perceptron backpropagation. We found SpikeProp+ to use 20 times less CPU than SpikeProp for learning a classifier, but it remains ten times slower than the perceptron network. Our new networks are not yet optimal, however, and several avenues exist for achieving further gains. Our results suggest it may be possible to build highly efficient neural networks in this way.
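The lookup-table enhancement mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the table size, time horizon and decay constant below are illustrative values, and the spike response function ε(t) = (t/τ)·exp(1 − t/τ) is assumed to be the standard form used in the SpikeProp literature. The idea is to precompute ε(t) once, replacing repeated `exp()` evaluations during learning with an array lookup.

```python
import numpy as np

TAU = 7.0         # decay constant in ms (illustrative value)
T_MAX = 50.0      # table covers 0..T_MAX ms; response is negligible beyond
TABLE_SIZE = 1024 # resolution/accuracy trade-off

def spike_response(t):
    """Standard spike response function eps(t) = (t/tau) * exp(1 - t/tau), zero for t <= 0."""
    return np.where(t > 0, (t / TAU) * np.exp(1 - t / TAU), 0.0)

# Precompute the table once, before training begins.
_table = spike_response(np.linspace(0.0, T_MAX, TABLE_SIZE))

def spike_response_lut(t):
    """Nearest-entry table lookup; error is bounded by the table resolution."""
    idx = int(t / T_MAX * (TABLE_SIZE - 1))
    if idx < 0 or idx >= TABLE_SIZE:
        return 0.0  # before the spike, or past the table horizon
    return float(_table[idx])
```

With 1024 entries over 50 ms the lookup agrees with the exact function to well under 1% around the peak (ε(τ) = 1), which is comfortably within the reduced-precision regime the paper argues spiking networks can tolerate.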

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Thiruvarudchelvan, V., Moore, W., Antolovich, M. (2013). Improving the Efficiency of Spiking Network Learning. In: Lee, M., Hirose, A., Hou, ZG., Kil, R.M. (eds) Neural Information Processing. ICONIP 2013. Lecture Notes in Computer Science, vol 8227. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-42042-9_22

  • DOI: https://doi.org/10.1007/978-3-642-42042-9_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-42041-2

  • Online ISBN: 978-3-642-42042-9

  • eBook Packages: Computer Science (R0)
