Abstract:
One of the common problems of neural networks, especially those with many layers, is their lengthy training time. We attempted to address this problem at the algorithmic (not hardware) level, proposing a simple parallel design inspired by the parallel circuits found in the human retina. To avoid large matrix calculations, we split the original network vertically into parallel circuits and let the BP algorithm flow independently through each subnetwork. Experimental results show the speed advantage of the proposed approach but also indicate that the size of the reduction depends on multiple factors. The results further suggest that parallel circuits improve the generalization ability of neural networks, presumably owing to automatic problem decomposition.
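The vertical split described in the abstract can be sketched roughly as follows. This is a minimal illustration only: the circuit width, tanh activations, output summation, learning rate, and XOR task are all assumptions for the sketch, not details taken from the paper. The key point it demonstrates is that backpropagation stays confined to each subnetwork, so no gradient terms cross circuit boundaries and no full-width matrix is ever formed.

```python
import numpy as np

rng = np.random.default_rng(0)

class Circuit:
    """One narrow subnetwork: input -> hidden -> output.

    Each circuit holds its own small weight matrices, so the
    forward and backward passes involve only small matrix products.
    """
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, x):
        self.h = np.tanh(x @ self.W1)   # cache hidden activations for BP
        return self.h @ self.W2

    def backward(self, x, grad_out, lr=0.1):
        # BP confined to this circuit: no gradients cross circuits.
        grad_W2 = self.h.T @ grad_out
        grad_h = (grad_out @ self.W2.T) * (1.0 - self.h ** 2)  # tanh'
        grad_W1 = x.T @ grad_h
        self.W2 -= lr * grad_W2
        self.W1 -= lr * grad_W1

def train_parallel_circuits(n_circuits=3, n_hidden=2, epochs=2000):
    # Toy XOR task, chosen only to keep the sketch self-contained.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    circuits = [Circuit(2, n_hidden, 1) for _ in range(n_circuits)]
    for _ in range(epochs):
        out = sum(c.forward(X) for c in circuits)  # outputs combined by summation
        grad = (out - y) / len(X)                  # scaled MSE gradient w.r.t. out
        for c in circuits:
            c.backward(X, grad)                    # each circuit updates independently
    return sum(c.forward(X) for c in circuits)
```

Because each circuit's backward pass depends only on its own cached activations and weights, the per-circuit updates in the loop could be dispatched to separate workers, which is the source of the speed advantage the abstract describes.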
Date of Conference: 15-17 August 2015
Date Added to IEEE Xplore: 11 January 2016
Electronic ISSN: 2157-9563