Abstract
We propose Mixed Precision Weight Networks (MPWNs), neural networks that jointly utilize weights of varied precision across their layers. MPWNs constrain each weight layer to either 1-bit binary \(\{-1,1\}\), 2-bit ternary \(\{-1,0,1\}\), or the original 32-bit full-precision weights. Each weight space offers distinct properties, for instance, high classification accuracy, a low bit count, or high sparsity. Hence, the properties of an MPWN can be adjusted by varying the combination and order of its weight layers. In this study, we identify three heuristic rules for effectively setting the precision of each weight layer. Following these rules, MPWNs exploit the strengths of each weight space while avoiding its disadvantages. We evaluated MPWNs on the MNIST, CIFAR-10, and CIFAR-100 datasets. Our evaluation revealed that MPWN models trained on CIFAR-10 and CIFAR-100 achieved the best overall combination of properties compared with conventional methods.
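The abstract does not spell out the quantization rules, so the following is a minimal sketch of how a forward pass with mixed-precision weight layers could look. It assumes sign-based binarization and threshold-based ternarization, as commonly used in prior binary and ternary weight networks; the threshold heuristic, layer shapes, and per-layer precision assignment below are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the authors' code): per-layer weight quantization
# for a network that mixes binary, ternary, and full-precision layers.
import numpy as np

def binarize(w):
    """1-bit weights: map every weight to {-1, +1} by its sign."""
    return np.where(w >= 0, 1.0, -1.0)

def ternarize(w, delta_scale=0.7):
    """2-bit weights: map to {-1, 0, +1} using a symmetric threshold.
    delta = delta_scale * mean(|w|) is a common heuristic (an assumption here)."""
    delta = delta_scale * np.mean(np.abs(w))
    q = np.zeros_like(w)
    q[w > delta] = 1.0
    q[w < -delta] = -1.0
    return q

QUANTIZERS = {
    "binary": binarize,
    "ternary": ternarize,
    "full": lambda w: w,  # 32-bit weights pass through unchanged
}

def forward(x, layers, precisions):
    """Forward pass through fully connected layers, quantizing each
    layer's weights according to its assigned precision."""
    for (w, b), prec in zip(layers, precisions):
        x = np.maximum(0.0, x @ QUANTIZERS[prec](w) + b)  # affine + ReLU
    return x

# Example: a 3-layer net mixing full-precision, ternary, and binary layers.
rng = np.random.default_rng(0)
layers = [(rng.normal(scale=0.1, size=(784, 256)), np.zeros(256)),
          (rng.normal(scale=0.1, size=(256, 256)), np.zeros(256)),
          (rng.normal(scale=0.1, size=(256, 10)), np.zeros(10))]
out = forward(rng.normal(size=(1, 784)), layers, ["full", "ternary", "binary"])

In this sketch the quantizers are applied at inference time only; during training, such networks typically keep full-precision shadow weights and quantize during propagation, as in BinaryConnect and Ternary Weight Networks.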
Acknowledgement
This research was supported by JSPS KAKENHI Grant Numbers 17H01798 and 17K20010.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Fuengfusin, N., Tamukoh, H. (2018). Mixed Precision Weight Networks: Training Neural Networks with Varied Precision Weights. In: Cheng, L., Leung, A., Ozawa, S. (eds) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science, vol 11302. Springer, Cham. https://doi.org/10.1007/978-3-030-04179-3_54
DOI: https://doi.org/10.1007/978-3-030-04179-3_54
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-04178-6
Online ISBN: 978-3-030-04179-3