Abstract
As Transformer-based large models become the mainstream of AI training, the capacity of hardware devices (e.g., GPUs) cannot keep pace with the rapid growth of model scale. Although various parallel training techniques enable models to be trained across multiple GPUs, they still incur costs that most researchers cannot afford. This rising hardware threshold for AI model training has limited the broader application of deep learning. CPU memory and external disk storage, however, can serve as caches that reduce the consumption of expensive GPU memory. In this paper, we analyze two types of intermediate data used in AI model training and propose a multi-level offloading policy for intermediate data during the training process. First, we propose a dynamic management policy based on a warm-up phase that optimizes GPU memory usage according to the characteristics of the AI training process. Second, we asynchronously offload a specified ratio of the optimizer state data to the HDD, which further reduces CPU memory usage. We conduct experiments on the large pre-trained model GPT-2 to verify the effectiveness of our method; the results indicate that multi-level storage optimization of intermediate data makes it possible to train larger AI models under constrained hardware resources.
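To make the offloading idea concrete, below is a minimal PyTorch-style sketch (not the authors' implementation) of the second step: a background thread spills a specified ratio of the optimizer state tensors to an HDD-backed directory, and a helper reloads them before the next optimizer step. The function names, the ratio parameter, and the path-placeholder convention are illustrative assumptions.

    import os
    import threading

    import torch


    def async_offload(optimizer, ratio, out_dir):
        """Asynchronously spill a fraction of the optimizer state to disk.

        Sketch only: `ratio` plays the role of the specified offload ratio
        from the abstract, and `out_dir` is assumed to sit on the HDD.
        Returns the worker thread; join() it and call reload_states()
        before the next optimizer.step().
        """
        os.makedirs(out_dir, exist_ok=True)
        # Collect (param, key) pairs for all state tensors, e.g. Adam's
        # exp_avg and exp_avg_sq moments.
        entries = [(p, k) for p, s in optimizer.state.items()
                   for k, v in s.items() if torch.is_tensor(v)]
        chosen = entries[: int(len(entries) * ratio)]

        def worker():
            for i, (p, k) in enumerate(chosen):
                path = os.path.join(out_dir, f"state_{i}.pt")
                torch.save(optimizer.state[p][k].cpu(), path)
                # Replace the tensor with its file path so the CPU memory
                # can be reclaimed; reload_states() swaps it back in.
                optimizer.state[p][k] = path

        thread = threading.Thread(target=worker, daemon=True)
        thread.start()
        return thread


    def reload_states(optimizer):
        """Load any spilled state tensors back before optimizer.step()."""
        for state in optimizer.state.values():
            for key, value in state.items():
                if isinstance(value, str):  # path left by async_offload
                    state[key] = torch.load(value)

Overlapping the spill with the next forward/backward pass is what makes the offload asynchronous: the disk write proceeds while the GPU computes, and only the reload before optimizer.step() sits on the critical path. A production version would also choose which states to spill rather than taking the first fraction, but that policy is beyond this sketch.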
J. Fu and Y. Yang contributed equally to this work and should be considered co-first authors.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Fu, J., Yang, Y., Hu, G., Luo, X., Shao, J. (2024). Multi-level Storage Optimization for Intermediate Data in AI Model Training. In: Bao, Z., Borovica-Gajic, R., Qiu, R., Choudhury, F., Yang, Z. (eds) Databases Theory and Applications. ADC 2023. Lecture Notes in Computer Science, vol 14386. Springer, Cham. https://doi.org/10.1007/978-3-031-47843-7_11
DOI: https://doi.org/10.1007/978-3-031-47843-7_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47842-0
Online ISBN: 978-3-031-47843-7