Abstract
Convolutional neural networks (CNNs) are widely used in deep learning because of their strong performance in image classification, speech recognition, and natural language processing. However, training large-scale networks is both compute-intensive and memory-intensive, which makes it very time and resource consuming. In this paper, we propose training CNNs with fixed-point arithmetic in the popular deep learning framework Caffe. We present our framework FixCaffe (Fixed-Point Caffe), in which a fixed-point matrix multiplication routine is substituted for part of Caffe's original floating-point matrix multiplication. We analyze the range of the operands during training and choose a proper scaling factor to transform floating-point operands into fixed-point operands. Training LeNet-S, a model obtained by modifying LeNet-5, on the MNIST benchmark, we find that after 1000 training iterations FixCaffe with 8-bit fixed-point multiplications loses only about 0.5% classification accuracy compared to the single-precision floating-point Caffe baseline. With the multiplier implemented on a Xilinx Virtex-7 690T, the computing resource cost is reduced by up to 83.3%, and the on-chip storage overhead for the LeNet-S model's parameters is reduced by 75%.
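The scaling-factor conversion described above can be illustrated with a minimal sketch in C++; the concrete scaling factor, helper names, and saturation behaviour here are illustrative assumptions, not the actual FixCaffe implementation.

```cpp
// Sketch: float -> signed 8-bit fixed point via a scaling factor S,
// fixed-point multiply-accumulate, and rescaling back to float.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Quantize a float to a saturated signed 8-bit fixed-point value.
int8_t to_fixed(float x, float S) {
    float q = std::round(x * S);
    q = std::max(-128.0f, std::min(127.0f, q));  // saturate to int8 range
    return static_cast<int8_t>(q);
}

// Dot product computed with 8-bit fixed-point multiplies,
// accumulated in 32 bits, then rescaled by 1/S^2.
float fixed_dot(const float* a, const float* b, int n, float S) {
    int32_t acc = 0;
    for (int i = 0; i < n; ++i)
        acc += static_cast<int32_t>(to_fixed(a[i], S)) * to_fixed(b[i], S);
    return static_cast<float>(acc) / (S * S);
}

int main() {
    float a[4] = {0.50f, -0.25f, 0.75f, 0.10f};
    float b[4] = {0.20f,  0.40f, -0.60f, 0.90f};
    float S = 64.0f;  // example scaling factor chosen from the operand range
    std::printf("fixed-point: %f  floating-point: %f\n",
                fixed_dot(a, b, 4, S),
                a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]);
    return 0;
}
```

Choosing S from the observed operand range, as the paper does, keeps quantization error small while the products still fit in the 8-bit multiplier.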
Acknowledgement
Funding provided by China NSFC grants 61402501 and 61602498. We thank the anonymous reviewers.
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Guo, S., Wang, L., Chen, B., Dou, Q., Tang, Y., Li, Z. (2017). FixCaffe: Training CNN with Low Precision Arithmetic Operations by Fixed Point Caffe. In: Dou, Y., Lin, H., Sun, G., Wu, J., Heras, D., Bougé, L. (eds) Advanced Parallel Processing Technologies. APPT 2017. Lecture Notes in Computer Science(), vol 10561. Springer, Cham. https://doi.org/10.1007/978-3-319-67952-5_4