Abstract:
Large-scale Deep Neural Network (DNN) training is extremely time-consuming and computationally intensive, and is commonly accelerated by distributed training. In recent years, pipeline parallelism has been developed, which partitions the model across several devices (e.g., GPUs) and improves training efficiency by dividing data batches into micro-batches, each processed by a different stage of the model. Current pipeline-parallel training assumes that pipeline placement and partitioning are static and that all parameters are updated every iteration, without accounting for layer freezing; as a result, computational resources are not fully utilized. In this paper, we propose FreezePipe, a novel method for optimizing deep learning training that combines a freezing mechanism with pipeline-parallel training. FreezePipe employs a lightweight method for determining the freezing strategy based on gradient changes. Since resources need to be released according to the frozen layers, a lightweight model partitioning algorithm is designed to determine the optimal pipeline partitioning strategy. Experimental results show that FreezePipe reduces training time by 64.5% compared to Torchgpipe on the CIFAR-10 dataset without compromising model performance.
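To illustrate the general idea of deciding freezing from gradient changes, below is a minimal PyTorch sketch, not the paper's actual algorithm: the function name update_frozen_layers, the relative-change criterion, and the threshold value are all assumptions introduced here. It freezes a layer once the relative change of its gradient norm between consecutive iterations falls below a threshold.

```python
import torch
import torch.nn as nn

def update_frozen_layers(model: nn.Sequential, prev_grad_norms: dict,
                         threshold: float = 0.01) -> dict:
    """Hypothetical heuristic: freeze layers whose gradient norm has
    nearly stopped changing between iterations.

    `prev_grad_norms` maps layer index -> gradient norm from the previous
    iteration; the updated map is returned for use in the next call.
    """
    new_norms = {}
    for idx, layer in enumerate(model):
        params = [p for p in layer.parameters() if p.grad is not None]
        if not params:
            continue  # already frozen or no gradients yet
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params)).item()
        new_norms[idx] = norm
        prev = prev_grad_norms.get(idx)
        if prev is not None and prev > 0:
            rel_change = abs(norm - prev) / prev
            if rel_change < threshold:
                # Gradients have stabilized; stop updating this layer.
                for p in layer.parameters():
                    p.requires_grad = False
    return new_norms
```

In a pipeline-parallel setting, such a decision would then trigger repartitioning of the remaining trainable stages so that the devices holding frozen layers can release resources, which is the role the abstract attributes to FreezePipe's partitioning algorithm.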
Published in: 2023 26th International Conference on Computer Supported Cooperative Work in Design (CSCWD)
Date of Conference: 24-26 May 2023
Date Added to IEEE Xplore: 22 June 2023