Abstract
The intensive communication of gradients and parameters is becoming the bottleneck of distributed deep learning training, so optimizing this bottleneck hinges on measuring communication operations effectively. However, many existing communication measurement tools, such as the MXNet profiler, suffer from a serious limitation: they cannot satisfy two requirements simultaneously, namely fine-grained collection of low-level communication operations and user-friendly analysis of comprehensive measurement results. In this paper, we make the first attempt to provide an open-source, fine-grained, and user-friendly communication measurement tool on top of MXNet, called vSketchDLC. vSketchDLC traces low-level communication events at the interface between the framework and the communication library, and captures end-to-end push and pull communications between workers and servers. It generates communication records in a standard format, so users can analyze the traces with off-the-shelf visualization tools such as Chrome Trace Viewer. Our design exploits in-memory buffers and asynchronous record writes to ensure that measurement activities do not impact training performance. We conduct extensive experiments on a public-cloud GPU cluster to verify the effectiveness of vSketchDLC for MXNet. The results show that vSketchDLC empowers users to analyze fine-grained communication records through friendly interactions and to identify potential training bottlenecks from multiple perspectives, including the training timeline and iterations, DNN layers, and individual workers or servers. Users can visually inspect the relationships between communications, for example by highlighting a selected period of a trace or by zooming in and out, to identify the root causes of communication bottlenecks and guide improvements to training performance.
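The tool's source is linked in the references rather than reproduced here, but the mechanisms the abstract names (in-memory buffering, asynchronous record writes, Chrome-Trace-Viewer-compatible output) follow a well-known pattern. Below is a minimal C++ sketch of that pattern, not vSketchDLC's actual API: the types, fields, and names (CommRecord, AsyncTraceWriter, op/layer/node_id) are illustrative assumptions. Only the JSON keys (name, cat, ph, ts, dur, pid, tid) come from Chrome's standard Trace Event Format.

```cpp
// Illustrative sketch only (not the vSketchDLC implementation): a training
// or communication thread appends records to an in-memory buffer, and a
// background thread flushes them asynchronously as Chrome Trace Event
// Format JSON, keeping file I/O off the training critical path.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <string>
#include <thread>
#include <utility>
#include <vector>

// One communication event, e.g. a push or pull of one layer's gradients.
// Field names are hypothetical, chosen for this example.
struct CommRecord {
  std::string op;     // "push" or "pull"
  std::string layer;  // layer/parameter name
  int node_id;        // worker or server rank
  long ts_us;         // start timestamp, microseconds
  long dur_us;        // duration, microseconds
};

class AsyncTraceWriter {
 public:
  explicit AsyncTraceWriter(const char* path) : out_(std::fopen(path, "w")) {
    std::fputs("[\n", out_);  // the trace is a JSON array of events
    writer_ = std::thread(&AsyncTraceWriter::Loop, this);
  }
  ~AsyncTraceWriter() {
    { std::lock_guard<std::mutex> g(mu_); done_ = true; }
    cv_.notify_one();
    writer_.join();
    std::fputs("]\n", out_);
    std::fclose(out_);
  }
  // Called on the hot path: O(1) enqueue under a short lock, no file I/O.
  void Record(CommRecord r) {
    { std::lock_guard<std::mutex> g(mu_); buf_.push_back(std::move(r)); }
    cv_.notify_one();
  }

 private:
  void Loop() {
    std::vector<CommRecord> batch;
    for (;;) {
      {
        std::unique_lock<std::mutex> g(mu_);
        cv_.wait(g, [&] { return done_ || !buf_.empty(); });
        batch.swap(buf_);  // drain the buffer while holding the lock
        if (batch.empty() && done_) return;
      }
      // Emit "complete" events (ph:"X"), which Chrome Trace Viewer draws
      // as duration bars on a per-pid/per-tid timeline.
      for (const auto& r : batch) {
        std::fprintf(out_,
                     "{\"name\":\"%s:%s\",\"cat\":\"comm\",\"ph\":\"X\","
                     "\"ts\":%ld,\"dur\":%ld,\"pid\":%d,\"tid\":0},\n",
                     r.op.c_str(), r.layer.c_str(), r.ts_us, r.dur_us,
                     r.node_id);
      }
      batch.clear();
    }
  }

  std::FILE* out_;
  std::mutex mu_;
  std::condition_variable cv_;
  std::vector<CommRecord> buf_;
  bool done_ = false;
  std::thread writer_;
};
```

A file produced this way can be loaded directly into Chrome Trace Viewer (for example via chrome://tracing); the viewer is lenient about the trailing comma left by incremental appends and renders each event as a bar that can be highlighted and zoomed, matching the interactions the abstract describes.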
Y. Wang and S. Ouyang—Equal contribution.
References
vSketchDLC. GitHub repository. https://github.com/HiNAopen/vSketchDLC
Abadi, M., et al.: TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), pp. 265–283 (2016)
Chen, T., et al.: MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274 (2015)
Google: Trace viewer. GitHub repository. https://github.com/catapult-project/catapult/tree/master/tracing
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)
Hintjens, P.: ZeroMQ: the guide. http://zguide.zeromq.org/page:all
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NIPS), pp. 1097–1105 (2012)
Li, M., et al.: Improving the performance of distributed MXNet with RDMA. Int. J. Parallel Program. 47, 467–480 (2019)
Ouyang, S., Dong, D., Xu, Y., Xiao, L.: Communication optimization strategies for distributed deep neural network training: a survey. J. Parallel Distrib. Comput. 149, 52–65 (2021)
PARATERA: Paracloud. https://cloud.paratera.com
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems (NIPS), pp. 8026–8037 (2019)
Peng, Y., et al.: A generic communication scheduler for distributed DNN training acceleration. In: Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP), pp. 16–29 (2019)
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252 (2015)
Sergeev, A., Del Balso, M.: Horovod: fast and easy distributed deep learning in TensorFlow. arXiv preprint arXiv:1802.05799 (2018)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference on Learning Representations (ICLR) (2015)
Xu, Y., Dong, D., Xu, W., Liao, X.: SketchDLC: a sketch on distributed deep learning communication via trace capturing. ACM Trans. Arch. Code Optim. (TACO) 16, 1–26 (2019)
Xu, Y., Dong, D., Zhao, Y., Xu, W., Liao, X.: OD-SGD: one-step delay stochastic gradient descent for distributed training. ACM Trans. Arch. Code Optim. (TACO) 17, 1–26 (2020)
Acknowledgment
This work is supported by the National Key R&D Program of China (Grant No. 2018YFB0204300), the Excellent Youth Foundation of Hunan Province (Dezun Dong), and the National Postdoctoral Program for Innovative Talents under Grant No. BX20190091.
Copyright information
© 2022 IFIP International Federation for Information Processing
About this paper
Cite this paper
Wang, Y., Ouyang, S., Dong, D., Yu, E., Liao, X. (2022). vSketchDLC: A Sketch on Distributed Deep Learning Communication via Fine-grained Tracing Visualization. In: Cérin, C., Qian, D., Gaudiot, J.L., Tan, G., Zuckerman, S. (eds) Network and Parallel Computing. NPC 2021. Lecture Notes in Computer Science, vol. 13152. Springer, Cham. https://doi.org/10.1007/978-3-030-93571-9_3
DOI: https://doi.org/10.1007/978-3-030-93571-9_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-93570-2
Online ISBN: 978-3-030-93571-9
eBook Packages: Computer Science, Computer Science (R0)