Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy

  • Conference paper
  • First Online:
Intelligent Information and Database Systems (ACIIDS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13996)

Abstract

Training large neural networks on huge amounts of data using multiple Graphics Processing Units (GPUs) has become widespread with the emergence of Deep Learning (DL) technology. Such training is usually carried out in datacenters featuring multiple GPU clusters, which are shared amongst users. However, different GPU architectures co-exist on the market and differ in training performance. To maximise the utilisation of a GPU cluster, the scheduler plays an important role in managing the resources by dispatching jobs to the GPUs. An efficient scheduling strategy should take into account that the training performance of each GPU architecture varies across DL models. In this work, an original model-similarity-based scheduling policy is introduced that matches DL models to the GPU architectures best suited to them. The results show that using this policy for distributed training of a DL model with a large batch size across multiple GPUs can reduce the makespan.
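To make the idea concrete, the sketch below illustrates one way a model-similarity-based dispatch rule can work: an incoming training job is compared with previously profiled models, and is placed on the free GPU architecture on which the most similar profiled model achieved the highest throughput. This is not the authors' implementation; the feature choice, the cosine-similarity measure, and all model names and throughput numbers are illustrative assumptions.

# Minimal sketch (assumed, not the paper's algorithm) of a model-similarity-based
# dispatch rule: match the job to its most similar profiled model, then pick the
# free GPU architecture on which that model trained fastest.
from dataclasses import dataclass
from math import sqrt

@dataclass
class ProfiledModel:
    name: str
    features: tuple       # e.g. (parameters in millions, depth, batch size) -- illustrative
    throughput: dict      # GPU architecture -> measured images/second (hypothetical numbers)

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def pick_gpu(job_features, profiles, free_gpus):
    """Find the profiled model most similar to the job, then dispatch the job
    to the free GPU architecture on which that model achieved the best throughput."""
    best = max(profiles, key=lambda p: similarity(job_features, p.features))
    return max(free_gpus, key=lambda g: best.throughput.get(g, 0.0))

# Hypothetical profiling data and an incoming ResNet-like job.
profiles = [
    ProfiledModel("resnet50",    (25.6, 50, 256), {"V100": 410.0, "T4": 150.0}),
    ProfiledModel("mobilenetv2", (3.5,  53, 256), {"V100": 980.0, "T4": 620.0}),
]
print(pick_gpu((23.5, 48, 256), profiles, ["V100", "T4"]))  # -> V100

In the paper, the similarity measure and the set of profiled models are specific to the evaluated DL workloads and GPU architectures; cosine similarity over a handful of model features is only one plausible choice used here to keep the sketch short.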

Acknowledgements

The authors are grateful to Grid’5000, which provided computing resources throughout this research.

Author information

Corresponding author

Correspondence to Kittichai Lavangnananda.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Thanapol, P., Lavangnananda, K., Leprévost, F., Schleich, J., Bouvry, P. (2023). Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy. In: Nguyen, N.T., et al. Intelligent Information and Database Systems. ACIIDS 2023. Lecture Notes in Computer Science, vol 13996. Springer, Singapore. https://doi.org/10.1007/978-981-99-5837-5_30

  • DOI: https://doi.org/10.1007/978-981-99-5837-5_30

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-5836-8

  • Online ISBN: 978-981-99-5837-5

  • eBook Packages: Computer Science, Computer Science (R0)
