Abstract
Deep neural networks (DNNs) are powerful and widely successful but incur high computation and memory costs. Binary neural networks address this by representing weights and activations with binary values, which significantly reduces resource consumption. However, binarizing both simultaneously couples the two quantization processes and aggravates the difficulty of training. In this paper, we develop a novel framework named TP-ADMM that decouples the binarization process into two iteratively optimized stages. First, we propose an improved target propagation method to optimize the network with binary activations in a more stable manner. Second, we cast weight binarization as a discretely constrained optimization problem and solve it with the alternating direction method of multipliers (ADMM) under a varying penalty. Experiments on three public image classification datasets show that the proposed method outperforms existing methods.
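To make the weight-binarization stage concrete, the following is a minimal NumPy sketch of ADMM with a binary constraint and a growing penalty, in the spirit the abstract describes. The toy loss, penalty schedule, and helper names (`proj_binary`, `admm_binarize`, `rho`, `grow`) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: ADMM-based weight binarization with a varying penalty.
# Assumptions: a toy quadratic loss and a simple geometric penalty schedule.
import numpy as np

def proj_binary(v):
    """Project v onto {-alpha, +alpha}^n using the best per-tensor scaling alpha."""
    alpha = np.mean(np.abs(v))
    return alpha * np.sign(v)

def admm_binarize(loss_grad, w_init, rho=1e-3, grow=1.5, outer=20, inner=50, lr=0.1):
    w = w_init.copy()        # full-precision weights (primal variable)
    g = proj_binary(w)       # auxiliary binary weights
    u = np.zeros_like(w)     # scaled dual variable
    for _ in range(outer):
        # (1) w-update: gradient descent on loss(w) + (rho/2)||w - g + u||^2
        for _ in range(inner):
            w -= lr * (loss_grad(w) + rho * (w - g + u))
        # (2) g-update: project onto the binary set
        g = proj_binary(w + u)
        # (3) dual update
        u += w - g
        # (4) varying penalty: tighten the equality constraint over iterations
        rho *= grow
    return g                 # binarized weights

# Toy usage: fit binary weights to the quadratic loss ||w - t||^2 / 2
t = np.array([0.8, -0.3, 1.2, -0.9])
w_bin = admm_binarize(lambda w: w - t, w_init=np.zeros_like(t))
print(w_bin)  # entries share one magnitude; signs follow t
```

The projection step keeps the auxiliary variable exactly binary (up to a learned scale), while the increasing penalty gradually forces the full-precision weights to agree with it.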
Acknowledgments
All correspondence should be addressed to the corresponding author, Chen Chen (chen.chen@ia.ac.cn). This work was supported by the National Natural Science Foundation of China under Grant 61571438.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Yuan, Y., Chen, C., Hu, X., Peng, S. (2019). TP-ADMM: An Efficient Two-Stage Framework for Training Binary Neural Networks. In: Gedeon, T., Wong, K., Lee, M. (eds) Neural Information Processing. ICONIP 2019. Communications in Computer and Information Science, vol 1142. Springer, Cham. https://doi.org/10.1007/978-3-030-36808-1_63
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36807-4
Online ISBN: 978-3-030-36808-1
eBook Packages: Computer Science (R0)