Abstract
We consider a modified version of the fully connected layer that we call a block diagonal inner product layer. These modified layers have block diagonal weight matrices, turning a single fully connected layer into a set of densely connected neuron groups. The idea is a natural extension of group (depthwise separable) convolutional layers to fully connected layers. Block diagonal inner product layers can be achieved either by initializing a purely block diagonal weight matrix or by iteratively pruning off-diagonal block entries. This method reduces network storage and speeds up run time without significant adverse effect on test accuracy.
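To make the construction concrete, here is a minimal NumPy sketch (the sizes, variable names, and random data are our own illustration, not taken from the paper) showing that a block diagonal inner product layer computes the same result as a set of independent small matrix products, one per neuron group:

```python
import numpy as np

# Illustrative sizes (not from the paper): a 512-to-512 layer in 8 blocks.
n_blocks, bi, bo = 8, 64, 64
n_in, n_out = n_blocks * bi, n_blocks * bo

rng = np.random.default_rng(0)
x = rng.standard_normal(n_in)                     # input activations
blocks = rng.standard_normal((n_blocks, bo, bi))  # one small weight matrix per group

# Dense view: assemble the full block diagonal weight matrix explicitly.
# (The pruning route in the abstract reaches this same form by zeroing the
# off-diagonal blocks of an ordinary dense matrix during training.)
W = np.zeros((n_out, n_in))
for k in range(n_blocks):
    W[k * bo:(k + 1) * bo, k * bi:(k + 1) * bi] = blocks[k]
y_dense = W @ x

# Grouped view: the identical product as n_blocks independent small matmuls,
# which is where the storage and run-time savings come from.
y_grouped = np.einsum('kij,kj->ki', blocks, x.reshape(n_blocks, bi)).ravel()

assert np.allclose(y_dense, y_grouped)
```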
Notes
1. When using block diagonal layers, one should alter the output format of the previous layer and the expected input format of the following layer accordingly, in particular to row-major ordering (a sketch follows below).
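One hedged reading of this note: with row-major (C-ordered) activations, each group's input slice is already contiguous, so regrouping for the per-block multiplies is a zero-copy reshape, whereas a column-major layout would force a physical copy first. A small NumPy sketch of that layout argument (sizes again illustrative):

```python
import numpy as np

# Minibatch version of the block diagonal forward pass; sizes are illustrative.
batch, n_blocks, bi, bo = 64, 8, 64, 64
rng = np.random.default_rng(1)
X = rng.standard_normal((batch, n_blocks * bi))   # row-major (C order) activations
blocks = rng.standard_normal((n_blocks, bo, bi))

# Row-major layout makes this reshape a zero-copy view: each block's input
# slice X[:, k*bi:(k+1)*bi] is already contiguous within every row.
Xb = X.reshape(batch, n_blocks, bi)
assert Xb.base is X            # a view, no data movement

# One small matrix product per block; with Fortran-ordered X the reshape
# above would have had to copy the data before the batched multiply.
Y = np.einsum('nki,koi->nko', Xb, blocks).reshape(batch, n_blocks * bo)
```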
Acknowledgments
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1256260. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC).
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Nesky, A., Stout, Q.F. (2018). Neural Networks with Block Diagonal Inner Product Layers. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds) Artificial Neural Networks and Machine Learning – ICANN 2018. Lecture Notes in Computer Science, vol. 11141. Springer, Cham. https://doi.org/10.1007/978-3-030-01424-7_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01423-0
Online ISBN: 978-3-030-01424-7