Abstract:
Hardware implementation of neural networks is often required to exploit their inherent parallelism and to minimize computing time for real-time applications. This work describes issues arising in the hardware implementation of neural networks in an FPGA environment. Without loss of generality, two examples of NNs are considered: a backpropagation feedforward neural network (BFNN) and a radial basis function neural network (RBFNN). Although local quantization adds extra complexity to the design task, it yields smaller quantization errors than global quantization. The RBFNN was found to be more sensitive to quantization effects; preserving acceptable design accuracy therefore calls for enlarging its hidden layer. The hardware implementation uses a sequential approach with pipelining in order to achieve the best compromise between speed and silicon area. The proposed design methodology was applied to an illustrative example with a sine input-output function. The BFNN led to a more compact implementation on the FPGA circuit than the RBFNN.
Date of Conference: 11-14 December 2005
Date Added to IEEE Xplore: 26 September 2008
CD ISBN: 978-9972-61-100-1
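
The abstract's central quantization claim can be made concrete with a small numerical sketch. The Python snippet below is not the paper's implementation; the 1-8-1 network, the 8-bit word length, and the weight statistics are illustrative assumptions, chosen so the two layers have very different dynamic ranges. It compares one global fixed-point scale shared by all weights against per-layer local scales, showing why local quantization reduces the error in exactly that situation.

    # Minimal sketch (assumptions above): global vs. local (per-layer)
    # fixed-point weight quantization of a tiny feedforward network.
    import numpy as np

    rng = np.random.default_rng(0)

    # Untrained 1-8-1 net; layer magnitudes differ on purpose.
    W1, b1 = rng.normal(0, 0.2, (1, 8)), np.zeros(8)   # small-range layer
    W2, b2 = rng.normal(0, 3.0, (8, 1)), np.zeros(1)   # large-range layer

    def forward(w1, bb1, w2, bb2, x):
        h = np.tanh(x @ w1 + bb1)
        return h @ w2 + bb2

    def quantize(w, scale, bits=8):
        """Round w onto a signed `bits`-bit grid covering [-scale, +scale]."""
        levels = 2 ** (bits - 1) - 1
        q = np.clip(np.round(w / scale * levels), -levels - 1, levels)
        return q * scale / levels

    x = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)  # sine-like input range
    ref = forward(W1, b1, W2, b2, x)                   # full-precision output

    # Global quantization: one scale for the whole network. The small-range
    # layer is forced onto a coarse grid sized for the large-range layer.
    g = max(np.abs(W1).max(), np.abs(W2).max())
    out_global = forward(quantize(W1, g), b1, quantize(W2, g), b2, x)

    # Local quantization: each layer is quantized with its own scale.
    out_local = forward(quantize(W1, np.abs(W1).max()), b1,
                        quantize(W2, np.abs(W2).max()), b2, x)

    print("global-scale output MSE:", np.mean((out_global - ref) ** 2))
    print("local-scale  output MSE:", np.mean((out_local - ref) ** 2))

Under these assumptions the local scheme reports a markedly smaller output error, at the cost of tracking one scale factor per layer, which mirrors the design-complexity trade-off described in the abstract.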