
Linearly expandable partial tree shape architecture for parallel neurocomputer

  • Oral Presentations: Implementations: Tree-Shaped Architectures
  • Conference paper
Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1112))


Abstract

The architecture of a linearly expandable partial tree shape neurocomputer (PARNEU) is presented. The system is designed for efficient, general-purpose artificial neural network computation using parallel processing. Linear expandability is achieved through a modular architecture that combines bus, ring, and tree topologies. Algorithm mappings are presented for Hopfield and perceptron networks, Sparse Distributed Memory, and the Self-Organizing Map. Performance is discussed in terms of computational complexity figures.
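The core mapping idea summarized above, distributing a neuron's weighted-sum computation across processing units and combining the partial results through a tree of adder nodes, can be sketched as follows. This is an illustrative Python sketch only, not the paper's PARNEU implementation; the function names and the block-partitioning scheme are assumptions made for the example.

```python
def tree_reduce(values):
    """Combine partial sums pairwise, level by level, as a binary adder tree would."""
    while len(values) > 1:
        next_level = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:            # an odd node passes through to the next level
            next_level.append(values[-1])
        values = next_level
    return values[0]

def parallel_weighted_sum(weights, inputs, num_units=4):
    """Each processing unit holds one slice of the weight row and computes a
    partial dot product; the partial sums then meet at the tree root."""
    n = len(weights)
    chunk = (n + num_units - 1) // num_units   # assumed: even block partitioning
    partials = [
        sum(w * x for w, x in zip(weights[i:i + chunk], inputs[i:i + chunk]))
        for i in range(0, n, chunk)
    ]
    return tree_reduce(partials)

# Example: one perceptron neuron's activation input, computed on 4 "units"
weights = [0.5, -1.0, 2.0, 0.25, 1.5, -0.5, 1.0, 0.75]
inputs = [1, 2, 3, 4, 5, 6, 7, 8]
print(parallel_weighted_sum(weights, inputs))   # same value as a direct dot product
```

The tree reduction is what makes the architecture expandable: adding processing units deepens the adder tree logarithmically while each unit's local workload stays a fixed-size slice.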





Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hämäläinen, T., Kolinummi, P., Kaski, K. (1996). Linearly expandable partial tree shape architecture for parallel neurocomputer. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_64


  • DOI: https://doi.org/10.1007/3-540-61510-5_64


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2

