Abstract:
GPUs have become the de facto hardware devices for accelerating Deep Neural Network (DNN) inference workloads. However, the conventional sequential execution mode of DNN operators in mainstream deep learning frameworks cannot fully utilize GPU resources, even with operator fusion enabled, due to the increasing complexity of model structures and the growing diversity of operators. Moreover, an inadequate operator launch order in parallelized execution scenarios can lead to GPU resource wastage and unexpected performance interference among operators. In this paper, we propose Opara, a resource- and interference-aware DNN Operator parallel scheduling framework to accelerate DNN inference on GPUs. Specifically, Opara first employs CUDA Streams and CUDA Graph to parallelize the execution of multiple operators automatically. To further expedite DNN inference, Opara leverages the resource demands of operators to judiciously adjust the operator launch order on GPUs, overlapping the execution of compute-intensive and memory-intensive operators. We implement and open-source a prototype of Opara based on PyTorch in a non-intrusive manner. Extensive prototype experiments with representative DNN and Transformer-based models demonstrate that Opara outperforms the default sequential CUDA Graph in PyTorch and state-of-the-art operator parallelism systems by up to 1.68× and 1.29×, respectively, with acceptable runtime overhead.
Published in: IEEE Transactions on Computers (Volume: 74, Issue: 1, January 2025)
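To make the abstract's core mechanism concrete, the sketch below shows how independent PyTorch operators can be placed on separate CUDA Streams and captured into a CUDA Graph so that a compute-bound and a memory-bound operator may overlap on the GPU. This is only a minimal illustration of the general multi-stream capture pattern in PyTorch; the tensor shapes and variable names are hypothetical and it does not reproduce Opara's actual scheduling logic or launch-order optimization.

```python
import torch

assert torch.cuda.is_available()

# Hypothetical independent "operators" (two branches of a model); shapes are illustrative.
x  = torch.randn(64, 1024, device="cuda")
w1 = torch.randn(1024, 1024, device="cuda")  # feeds a compute-intensive matmul
w2 = torch.randn(1024, device="cuda")        # feeds a memory-intensive elementwise op

side = torch.cuda.Stream()

# Warm up outside capture so lazy initialization does not occur during graph capture.
y1 = x @ w1
y2 = x * w2
torch.cuda.synchronize()

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    capture_stream = torch.cuda.current_stream()

    # Fork: the side stream waits on the capture stream, then runs one branch,
    # while the other branch runs concurrently on the capture stream itself.
    side.wait_stream(capture_stream)
    with torch.cuda.stream(side):
        y1 = x @ w1          # compute-bound operator
    y2 = x * w2              # memory-bound operator, overlapped on the main stream

    # Join: the capture stream waits for the side stream before capture ends.
    capture_stream.wait_stream(side)

# Replaying the captured graph launches both operators with a single CPU-side call,
# preserving the parallel stream assignment recorded above.
g.replay()
torch.cuda.synchronize()
```

In this pattern, the stream assignment gives the GPU scheduler the opportunity to co-execute kernels with complementary resource demands, while graph replay amortizes per-kernel launch overhead across the whole subgraph.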