Simulation of spiking neural networks on different hardware platforms

  • Part VIII: Implementations
  • Conference paper
Artificial Neural Networks — ICANN'97 (ICANN 1997)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1327))

Abstract

Substantial evidence indicates that the time structure of neuronal spike trains is relevant in neuronal signal processing. Bio-inspired spiking neural networks take these results into account. Applying such networks to low-level vision problems, e.g. segmentation, requires that large-scale networks be simulated in a reasonable time. On this basis, we investigated the achievable performance of existing hardware platforms for the simulation of spiking neural networks with sizes from 8k neurons up to 512k neurons / 50M synapses. We present results for workstations (Sparc-Ultra), digital signal processors (TMS-C8x), neurocomputers (CNAPS, SYNAPSE), and small- and large-scale parallel computers (4xPentium, CM-2, SP2), and we discuss the specific implementation issues. According to our investigation, only supercomputers like the CM-2 can meet the performance requirements for the simulation of very large-scale spiking neural networks; there is therefore still a need for low-cost hardware accelerators.
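In broad terms, the simulation task the abstract refers to amounts to repeatedly updating every neuron's state and scattering spikes along the synapse lists. The C fragment below is a minimal time-stepped leaky integrate-and-fire kernel, given only as an illustration: it is not the simulator used in the paper, and the neuron model, network size, fan-out, and all constants are assumptions chosen to make the sketch self-contained.

/* Illustrative sketch (not from the paper): a time-stepped leaky
 * integrate-and-fire simulation loop of the kind whose inner-loop
 * cost dominates on the platforms compared in the paper.
 * All parameters below are arbitrary illustrative choices. */
#include <stdio.h>
#include <stdlib.h>

#define N_NEURONS  8192   /* smallest network size mentioned in the abstract */
#define N_SYNAPSES 128    /* assumed fan-out per neuron */
#define N_STEPS    1000   /* e.g. one simulated second at 1 ms resolution */

int main(void)
{
    static float v[N_NEURONS];                   /* membrane potentials  */
    static int   target[N_NEURONS][N_SYNAPSES];  /* postsynaptic indices */
    static float weight[N_NEURONS][N_SYNAPSES];  /* synaptic weights     */
    const float decay = 0.95f, drive = 0.06f, threshold = 1.0f, reset = 0.0f;

    /* random connectivity and weights, just to make the loop runnable */
    for (int i = 0; i < N_NEURONS; i++)
        for (int s = 0; s < N_SYNAPSES; s++) {
            target[i][s] = rand() % N_NEURONS;
            weight[i][s] = 0.1f * (float)rand() / RAND_MAX;
        }

    long spikes = 0;
    for (int t = 0; t < N_STEPS; t++) {
        for (int i = 0; i < N_NEURONS; i++) {
            v[i] = v[i] * decay + drive;   /* leak plus a constant drive */
            if (v[i] >= threshold) {       /* neuron i fires */
                v[i] = reset;
                spikes++;
                /* scatter the spike to all postsynaptic targets; this
                 * irregular memory access is the expensive part on
                 * large networks */
                for (int s = 0; s < N_SYNAPSES; s++)
                    v[target[i][s]] += weight[i][s];
            }
        }
    }
    printf("total spikes: %ld\n", spikes);
    return 0;
}

Even in this toy form, the spike-propagation scatter rather than the per-neuron arithmetic dominates the work: at the 50M synapses of the largest configuration, the target and weight tables alone occupy hundreds of megabytes, a scale at which memory organization and communication, rather than raw arithmetic, plausibly decide how the platforms rank.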

Editor information

Wulfram Gerstner, Alain Germond, Martin Hasler, Jean-Daniel Nicoud

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jahnke, A., Schönauer, T., Roth, U., Mohraz, K., Klar, H. (1997). Simulation of spiking neural networks on different hardware platforms. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, JD. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020312

  • DOI: https://doi.org/10.1007/BFb0020312

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63631-1

  • Online ISBN: 978-3-540-69620-9

  • eBook Packages: Springer Book Archive
