
Modular expansion of the hidden layer in Single Layer Feedforward Neural Networks



Abstract:

We present a neural network architecture and a training algorithm designed to enable very rapid training with low demands on computational processing power, memory, and time. The algorithm is based on a modular architecture which expands the output weights layer constructively, so that the final network can be visualised as a Single Layer Feedforward Network (SLFN) with a large hidden layer. The method does not use backpropagation, and consequently offers very fast training with very few trainable parameters in each module. It is therefore potentially useful for applications which require frequent retraining, or which rely on reduced hardware capability, such as mobile robots or the Internet of Things (IoT). We demonstrate the efficacy of the method on two benchmark image classification datasets, MNIST and CIFAR-10. The network produces very favourable results for an SLFN on these benchmarks, with an average correct classification rate of 99.07% on MNIST and nearly 82% on CIFAR-10 when applied to convolutional features. Code for the method has been made available online.
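The abstract gives no implementation details, but the description (constructive expansion of the output weights layer, no backpropagation, few trainable parameters per module) admits a minimal sketch under one plausible reading: ELM-style modules whose fixed random hidden weights are never trained, with each module's output weights solved in closed form on the current residual. All names here (train_modular_slfn, units_per_module, the tanh activation, the ridge term reg) are illustrative assumptions, not the authors' actual code.

    # Hedged sketch of a constructive, backprop-free SLFN, assuming
    # ELM-style modules: random hidden weights are fixed at creation,
    # and only each module's output weights are trained, by ridge-
    # regularised least squares on the residual left by earlier modules.
    import numpy as np

    rng = np.random.default_rng(0)

    def train_modular_slfn(X, Y, n_modules=5, units_per_module=256, reg=1e-3):
        """Grow an SLFN one hidden-unit module at a time (no backprop)."""
        n_features = X.shape[1]
        modules = []                    # (W, b, beta) per module
        residual = Y.astype(float)      # each module fits what is left over
        for _ in range(n_modules):
            # Fixed random projection: these weights are never trained.
            W = rng.standard_normal((n_features, units_per_module)) / np.sqrt(n_features)
            b = rng.standard_normal(units_per_module)
            H = np.tanh(X @ W + b)      # hidden-layer activations
            # Only the output weights are trainable: closed-form
            # ridge-regularised least squares, no gradient descent.
            beta = np.linalg.solve(H.T @ H + reg * np.eye(units_per_module),
                                   H.T @ residual)
            residual = residual - H @ beta
            modules.append((W, b, beta))
        return modules

    def predict(modules, X):
        """Concatenating all modules is equivalent to one wide SLFN."""
        return sum(np.tanh(X @ W + b) @ beta for W, b, beta in modules)

In this reading, Y would be a one-hot label matrix (ten columns for MNIST) and X either raw pixels or, as the abstract notes for CIFAR-10, convolutional features. Because each module solves a small closed-form problem, adding or retraining modules is cheap, which matches the frequent-retraining and low-hardware use cases the abstract targets.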
Date of Conference: 24-29 July 2016
Date Added to IEEE Xplore: 03 November 2016
Electronic ISSN: 2161-4407
Conference Location: Vancouver, BC, Canada
