DOI: 10.1145/3377170.3377245
Research article

Distributed Deep Neural Network Training with Important Gradient Filtering, Delayed Update and Static Filtering

Published: 20 March 2020

Abstract

With the increasing number of computing nodes in current computer clusters, the performance of large-scale deep neural network training is essentially limited by communication cost, especially the cost of transferring gradients among nodes during each iteration. In this paper, three methods are proposed to reduce this cost: important gradient filtering, delayed update and static filtering. The important gradient filtering algorithm selects only the most important gradients, shrinking the volume of gradients to be transferred while helping convergence. The delayed update algorithm significantly reduces gradient broadcasting time, and static filtering discards gradients with very small variance. Results show that a combination of the proposed methods achieves a 2.91× to 5.58× reduction in communication cost on a cluster with inexpensive commodity Gigabit Ethernet interfaces.
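
The abstract describes the three methods only at a high level. As a rough illustration, below is a minimal sketch, in Python with NumPy, of what magnitude-based important gradient filtering with local residual accumulation and a variance-based static filter could look like. The function names, the residual-accumulation step, the keep ratio, the history window and the variance threshold are all assumptions made for this sketch and are not taken from the paper; the delayed update method is not sketched because the abstract gives no detail on how updates are deferred.

import numpy as np

def important_gradient_filter(grad, residual, keep_ratio=0.01):
    """Keep only the largest-magnitude entries of grad + residual.

    Unselected entries are folded back into the residual so their
    contribution is delayed rather than lost (a common trick in sparse
    gradient communication; assumed here, not confirmed by the paper).
    """
    accumulated = grad + residual
    k = max(1, int(keep_ratio * accumulated.size))
    # The k-th largest magnitude acts as the selection threshold.
    threshold = np.partition(np.abs(accumulated).ravel(), -k)[-k]
    mask = np.abs(accumulated) >= threshold
    sparse_grad = np.where(mask, accumulated, 0.0)   # transmitted to other nodes
    new_residual = np.where(mask, 0.0, accumulated)  # kept locally for the next step
    return sparse_grad, new_residual

def static_filter(grad, history, min_variance=1e-8, window=10):
    """Zero out coordinates whose gradient variance over a recent window is tiny."""
    history.append(grad)
    if len(history) > window:
        history.pop(0)
    if len(history) < 2:
        return grad  # not enough history yet to estimate variance
    variance = np.var(np.stack(history), axis=0)
    return np.where(variance >= min_variance, grad, 0.0)

# Toy usage: a few fake iterations on one flattened gradient tensor.
rng = np.random.default_rng(0)
residual = np.zeros(1000)
history = []
for step in range(5):
    grad = rng.normal(size=1000)
    grad = static_filter(grad, history)
    sparse_grad, residual = important_gradient_filter(grad, residual, keep_ratio=0.05)
    print(f"step {step}: nonzero entries sent = {np.count_nonzero(sparse_grad)}")

With a keep ratio of 0.05, only about 5% of the gradient entries are sent at each step, which is the kind of reduction in transferred data the abstract refers to; the exact selection rule and thresholds used in the paper may differ.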

    Published In

    ICIT '19: Proceedings of the 2019 7th International Conference on Information Technology: IoT and Smart City
    December 2019
    601 pages
    ISBN: 9781450376631
    DOI: 10.1145/3377170

    In-Cooperation

    • Shanghai Jiao Tong University
    • The Hong Kong Polytechnic University
    • University of Malaya

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 March 2020

    Author Tags

    1. Sparse gradients
    2. bandwidth friendly
    3. large scale training
    4. neural networks

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICIT 2019: IoT and Smart City
    December 20 - 23, 2019
    Shanghai, China
