
An Empirical Study on Improving the Speed and Generalization of Neural Networks Using a Parallel Circuit Approach

Published in: International Journal of Parallel Programming

Abstract

A common problem of neural networks, especially those with many layers, is their lengthy training time. We address this problem at the algorithmic level, proposing a simple parallel design inspired by the parallel circuits found in the human retina. To avoid large matrix calculations, we split the original network vertically into parallel circuits and let the backpropagation algorithm flow through each subnetwork independently. Experimental results demonstrate the speed advantage of the proposed approach, but also show that this advantage depends on several factors. The results further suggest that parallel circuits improve the generalization ability of neural networks, presumably through automatic problem decomposition. By studying network sparsity, we partially substantiate this theory and propose a potential method for improving the design.
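The paper's implementation is not reproduced on this page, so the following is a minimal NumPy sketch of the idea as the abstract describes it: the network is split vertically into independent circuits, each with its own hidden layer, and backpropagation stays inside each circuit. The class name ParallelCircuitNet, the sigmoid activations, the averaging of circuit outputs, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch (assumed, not the authors' code): a network split vertically into
# K parallel circuits. Each circuit sees the full input, owns its own small
# weight matrices, and its gradients never cross into another circuit.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ParallelCircuitNet:
    def __init__(self, n_in, n_hidden_per_circuit, n_out, n_circuits):
        # One (W1, W2) pair per circuit; no weights are shared.
        self.circuits = [
            [rng.normal(0, 0.1, (n_in, n_hidden_per_circuit)),
             rng.normal(0, 0.1, (n_hidden_per_circuit, n_out))]
            for _ in range(n_circuits)
        ]
        self.n_circuits = n_circuits

    def forward(self, x):
        # Each circuit computes independently; the network output is the
        # average of the circuit outputs (an assumed combination rule).
        hs, ys = [], []
        for W1, W2 in self.circuits:
            h = sigmoid(x @ W1)
            hs.append(h)
            ys.append(sigmoid(h @ W2))
        return hs, ys, sum(ys) / self.n_circuits

    def backward(self, x, target, lr=0.1):
        hs, ys, y = self.forward(x)
        err = y - target  # gradient of 0.5 * ||y - target||^2 w.r.t. y
        for k, (W1, W2) in enumerate(self.circuits):
            # Backpropagation stays inside circuit k: only its own small
            # matrices are read and updated.
            dy = (err / self.n_circuits) * ys[k] * (1 - ys[k])
            dh = (dy @ W2.T) * hs[k] * (1 - hs[k])
            W2 -= lr * np.outer(hs[k], dy)
            W1 -= lr * np.outer(x, dh)
        return 0.5 * np.sum(err ** 2)

# Toy usage: learn XOR with 4 circuits of 3 hidden units each.
net = ParallelCircuitNet(n_in=2, n_hidden_per_circuit=3, n_out=1, n_circuits=4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
for epoch in range(5000):
    loss = sum(net.backward(x, t) for x, t in zip(X, T))
print("final loss:", loss)
```

Because the circuits share no weights, each backward pass works on small matrices rather than one large one, and the per-circuit updates could be dispatched to separate workers, which is the kind of speed advantage the abstract describes.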



Author information

Corresponding author: Kien Tuong Phan.

Additional information

The project is sponsored by the Crop for the Future Research Centre.

Appendix

See Tables 4, 5, 6, 7, 8 and 9.

Table 4 Network abbreviations and designs
Table 5 Detailed benchmark of 9 classifiers over 5 datasets
Table 6 Average performance of 10 classifiers
Table 7 Sparsity benchmark on breast cancer and leaf datasets
Table 8 Benchmark on sparsity induction
Table 9 Benchmark on sparsity regularization (random coefficient)

About this article

Cite this article

Phan, K.T., Maul, T.H. & Vu, T.T. An Empirical Study on Improving the Speed and Generalization of Neural Networks Using a Parallel Circuit Approach. Int J Parallel Prog 45, 780–796 (2017). https://doi.org/10.1007/s10766-016-0435-4
