Abstract
Training large-scale machine learning models poses a severe communication challenge for the Stochastic Gradient Descent (SGD) algorithm. In a distributed computing environment, frequent exchanges of gradients between computational nodes are unavoidable during model training, introducing substantial communication overhead. Gradient compression techniques, represented by gradient sparsification and gradient quantization, have been proposed to improve communication efficiency. Building on these ideas, we propose a novel approach named Gaussian Averaging SGD (GASGD), which transmits only 32 bits between nodes per iteration and thus achieves a communication complexity of \(\mathcal{O}(1)\). A theoretical analysis shows that GASGD attains convergence performance comparable to that of other distributed algorithms at a significantly lower communication cost. Our experiments validate this analysis and demonstrate that GASGD substantially reduces the communication traffic per worker.
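The following is a minimal sketch of the idea described above: each worker summarizes its local gradient by the parameters of a fitted Gaussian and communicates only those scalar statistics instead of the full gradient vector. The function names, the averaging of the statistics on the server, and the resampling-based reconstruction rule are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

def local_gaussian_summary(grad):
    """Summarize a local gradient vector by a fitted Gaussian (mean, std).

    Only these two scalars would be communicated, hence the O(1)
    per-iteration traffic claimed in the abstract (illustrative assumption).
    """
    return np.float32(grad.mean()), np.float32(grad.std())

def gasgd_style_step(params, local_grads, lr=0.1, rng=None):
    """One illustrative averaging step built from per-worker Gaussian summaries.

    `local_grads` is a list of per-worker gradient vectors of equal length.
    The update direction is re-sampled from the averaged Gaussian -- an
    assumed reconstruction rule, not necessarily the one used by GASGD.
    """
    rng = np.random.default_rng() if rng is None else rng
    stats = [local_gaussian_summary(g) for g in local_grads]  # communicated part
    mean = np.mean([m for m, _ in stats])                     # aggregated server-side
    std = np.mean([s for _, s in stats])
    surrogate_grad = rng.normal(mean, std, size=params.shape) # reconstructed update
    return params - lr * surrogate_grad

# Toy usage: 4 workers, 8 parameters.
rng = np.random.default_rng(0)
params = np.zeros(8)
grads = [rng.normal(0.5, 1.0, size=8) for _ in range(4)]
params = gasgd_style_step(params, grads, lr=0.1, rng=rng)
print(params)
```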
Supported by the Open Fund from the State Key Laboratory of High Performance Computing (No. 201901-11).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Hu, K., Wu, C., Zhu, E. (2021). Efficient Distributed Stochastic Gradient Descent Through Gaussian Averaging. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Artificial Intelligence and Security. ICAIS 2021. Lecture Notes in Computer Science(), vol 12736. Springer, Cham. https://doi.org/10.1007/978-3-030-78609-0_4
DOI: https://doi.org/10.1007/978-3-030-78609-0_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78608-3
Online ISBN: 978-3-030-78609-0
eBook Packages: Computer Science, Computer Science (R0)