Signalling techniques and their effect on neural network implementation sizes

https://doi.org/10.1016/S0020-0255(01)00068-8

Abstract

A series of models is developed to predict the silicon area consumed by a neural network. These models predict the area consumed by the different parts of a neural network and the effect of using different signalling types, so that the relative sizes of neural networks using these signalling types may be assessed. The silicon area consumed by neural networks implemented with local weights and single-line inputs is shown to be orders of magnitude smaller than that of the other possible implementations. The use of single-line transmission is shown to be the next most effective method, while differential and parallel digital data transmission techniques are shown to be the least satisfactory options with respect to silicon area consumption. In addition, the use of rectangular synapse cells is shown to reduce the interconnect area consumed, and asymmetrical signalling techniques are shown to be advantageous.

Introduction

Hardware implementations of neural networks offer significantly faster training and operation than software implementations [1], with additional advantages for low-power and portable applications. A number of different layout strategies have previously been modelled [2], and the results show the dominant effect that interconnect has on silicon area consumption. In this paper that work [2] is significantly enhanced to show the impact of different types of signalling within the neural network. A general model is developed which predicts the silicon area consumed by a neural network; this is then used to develop models for each of the different signalling types.

Four signalling types are considered. Type 1, discussed in the original paper, uses a single line to carry each signal; it requires one line for the input and one line for the weight. Type 2 uses local weights: only a single line carrying the input is brought to the synapse, and the weight is fixed at manufacture. Type 3 uses differential signalling, which transmits each signal over two lines as the difference between them and therefore requires four lines per synapse; it is a common signalling method and the one used by the Gilbert multiplier [3], [4]. Type 4 deals with digital systems and examines the impact of parallel data transmission, which can be regarded as a generalisation of types 1 and 3.
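As an illustration of the line counts implied by these descriptions, the short sketch below (not taken from the paper; the function name and the treatment of type 4 as a bus of bus_width lines per signal are assumptions) tabulates the number of lines that must be routed to each synapse.

```python
# Sketch only: lines routed into a single synapse for each signalling type,
# based on the descriptions above.  Treating type 4 as carrying each of the
# input and the weight on a bus_width-bit parallel bus is an assumption.

def lines_per_synapse(signalling_type: int, bus_width: int = 8) -> int:
    """Return the number of signal lines brought to one synapse."""
    if signalling_type == 1:   # single line: one input line + one weight line
        return 2
    if signalling_type == 2:   # local weight: input line only, weight fixed at manufacture
        return 1
    if signalling_type == 3:   # differential: two lines each for input and weight
        return 4
    if signalling_type == 4:   # parallel digital: bus_width lines each for input and weight
        return 2 * bus_width
    raise ValueError("signalling type must be 1, 2, 3 or 4")


for t in (1, 2, 3, 4):
    print(f"type {t}: {lines_per_synapse(t)} lines per synapse")
```

The ordering of these counts already anticipates the relative area results reported in the abstract.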

Section snippets

Neural network structure

A three-layer feedforward neural network is shown in Fig. 1. While there is a wide variety of other neural network structures, this is the most common [5]. The silicon usage of a neural network can be divided into two categories: the active device area and the interconnect area. The active device area refers to the components necessary to perform the active functions of a neural network (e.g., multipliers and function generator circuits). The interconnect refers to lines used to communicate ...
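The split described here can be written as a simple first-order accounting identity. The sketch below is illustrative only; the parameter names are mine and the paper's full model contains considerably more detail.

```python
# Illustrative first-order decomposition, not the paper's actual model:
# total silicon area = active device area + interconnect area.

def total_silicon_area(n_synapses: int, n_neurons: int,
                       a_multiplier: float, a_function_gen: float,
                       a_interconnect: float) -> float:
    """Return the total area in the same units as the per-block areas.

    a_multiplier    -- active area of one synapse multiplier
    a_function_gen  -- active area of one neuron function generator
    a_interconnect  -- total area of the communication lines
    """
    active = n_synapses * a_multiplier + n_neurons * a_function_gen
    return active + a_interconnect
```

As the paper's results indicate, for realistic network sizes the interconnect term tends to dominate this sum.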

Local weight synapses

The equations developed in the previous models hold for types 1, 3 and 4. Type 2, which uses local weights, changes the architecture slightly and so a different model is used. A single line signalling technique is also assumed.

The effective number of vertical tracks $N_{tr}$ is $N_x - 1$, as the use of local weights means that weight lines do not have to be brought into the synapse. Substituting for $N_{tr}$ in (3) gives $A_{m3} = (N_x - 1)\,8\lambda L/n_l$. As with the previous models, the physical size of the synapse cell determines $L$ and ...
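A numerical reading of the reconstructed expression can be sketched as follows; the symbol interpretations (λ as the process scaling parameter, L as the synapse cell length, n_l as the number of routing layers) and the example values are assumptions, not figures from the paper.

```python
# Evaluates the reconstructed type-2 (local weight) interconnect expression
#   A_m3 = (N_x - 1) * 8 * lambda * L / n_l
# The interpretation of lambda, L and n_l below is my reading of the snippet,
# and the numbers are purely illustrative.

def a_m3(n_x: int, lam: float, cell_length: float, n_l: int) -> float:
    """Interconnect area contributed by the (N_x - 1) vertical tracks."""
    return (n_x - 1) * 8.0 * lam * cell_length / n_l


# Example: a layer with 100 inputs, lambda = 0.18 um, a 50 um synapse cell
# and n_l = 2 (illustrative values only).
print(a_m3(n_x=100, lam=0.18e-6, cell_length=50e-6, n_l=2))
```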

Application of model to different signalling types

The general model can be used to predict the relative silicon consumption of the different signalling types, allowing a designer to choose the implementation technique most suitable for their neural network.
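A rough indication of how such a comparison might look is sketched below; it uses the per-synapse line counts from the Introduction as a first-order proxy for interconnect area, which is a simplification of the paper's model rather than the model itself.

```python
# Crude comparison sketch: interconnect demand is taken to scale with the
# number of lines routed to each synapse.  This is only a proxy for the
# paper's full area model.

LINES_PER_SYNAPSE = {
    "type 2 (local weight)": 1,
    "type 1 (single line)": 2,
    "type 3 (differential)": 4,
    "type 4 (8-bit parallel)": 16,
}

baseline = LINES_PER_SYNAPSE["type 2 (local weight)"]
for name, lines in sorted(LINES_PER_SYNAPSE.items(), key=lambda kv: kv[1]):
    print(f"{name}: roughly {lines / baseline:.0f}x the type-2 line count")
```

The resulting ordering (local weight, then single line, then differential and parallel digital) matches the ranking reported in the abstract.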

Examples

To show some of the implications of these models, three examples are presented. These apply the models to a simple test problem to demonstrate the differences between the signalling types and to identify some potential solutions that emerge from the model. The first example shows the relative performance of the different types of signalling. The second example uses the model to examine the effect of different synapse shapes on the interconnect area and identifies one synapse ...
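The synapse-shape effect examined in the second example (and the abstract's observation that rectangular cells reduce interconnect area) can be illustrated with a simple geometric sketch; the assumption that vertical tracks span the full cell height and horizontal tracks the full cell width, and the track counts used, are mine and not taken from the paper.

```python
# Illustrative geometry only: assumes vertical tracks span the full cell
# height and horizontal tracks span the full cell width; the track counts
# are not taken from the paper.
import math

def track_length(cell_area: float, aspect_ratio: float,
                 v_tracks: int, h_tracks: int) -> float:
    """Total routing length crossing one cell (width = aspect_ratio * height)."""
    height = math.sqrt(cell_area / aspect_ratio)
    width = aspect_ratio * height
    return v_tracks * height + h_tracks * width


# With more vertical than horizontal tracks, widening the cell at constant
# area shortens the total track length relative to a square cell.
for ar in (1.0, 2.0, 4.0):
    length = track_length(cell_area=2500e-12, aspect_ratio=ar,
                          v_tracks=4, h_tracks=1)
    print(f"aspect ratio {ar:.0f}: total track length {length * 1e6:.0f} um")
```

Under these assumptions, a wider and shorter cell of the same area shortens the more numerous vertical tracks, which is the kind of saving the second example explores.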

Conclusion

This paper presents models which allow a designer to make a first-order prediction of the amount of silicon area needed to implement a neural network. The model can also be used to identify the most efficient signalling technique and the optimum synaptic shape.

Table 1 summarises the area required for each type of synapse and their relative performance. The model implications from the original paper and from the examples show that neural network designs are strongly interconnect dependent. As ...

References (6)

  • J. Bailey, D. Hammerstrom, Why VLSI implementations of associative VLCNs require connection multiplexing, in: ...
  • T.M. McGinnity, B. Roche, L.P. Maguire, L.J. McDaid, Novel architecture and synapse design for hardware ...
  • B. Gilbert, A precise four-quadrant multiplier with subnanosecond response, IEEE J. Solid-State Circuits (1968)
