ABSTRACT
This paper presents an efficient implementation and performance analysis of mapping multilayer perceptron networks with the backpropagation learning rule onto SpiNNaker, a massively parallel architecture dedicated to neural network simulation. A new mapping algorithm, the pipelined checkerboarding partitioning scheme, is proposed. The algorithm relies on checkerboard partitioning of the weight matrix, but its key advantage comes from a pipelined mode: a six-stage pipeline captures the parallelism within each partition, allowing communication to overlap with computation. The proposed mapping not only localizes communication but can also hide part or even all of it, yielding high efficiency.
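The checkerboard partitioning the abstract refers to can be illustrated with a minimal sketch (the function name and grid parameters below are illustrative, not from the paper): the weight matrix is tiled into a 2-D grid of blocks, one per processor, so that forward-pass reductions stay within a grid row and backward-pass reductions within a grid column.

```python
import numpy as np

def checkerboard_partition(weights, grid_rows, grid_cols):
    """Split a weight matrix into a 2-D checkerboard of blocks.

    Processor (r, c) in a grid_rows x grid_cols grid owns blocks[r][c].
    With this layout, a matrix-vector product in the forward pass needs
    only a reduction along each grid row, and the error back-propagation
    only a reduction along each grid column, keeping communication local.
    """
    row_blocks = np.array_split(weights, grid_rows, axis=0)
    return [np.array_split(rb, grid_cols, axis=1) for rb in row_blocks]

# Example: a 6x4 weight matrix mapped onto a 2x2 processor grid.
W = np.arange(24).reshape(6, 4)
blocks = checkerboard_partition(W, 2, 2)
# Processor (0, 0) holds the top-left 3x2 block of W.
```

In the paper's pipelined mode, each processor would then process its block in stages so that sending partial results for one stage overlaps with computing the next; the sketch above shows only the data decomposition that makes that overlap possible.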
Index Terms
- Efficient parallel implementation of multilayer backpropagation networks on SpiNNaker