
Analysis of Sub-Routines in NVIDIA cuBLAS Library for a series of Matrix-Matrix Multiplications in Transformer



Abstract:

The general matrix-matrix multiplication (GEMM) is a key operation used in a variety of areas such as computational science, data science, and machine learning. In transformers, which are foundation models, Multi-Head Attention (MHA) involves a series of matrix-matrix multiplications. To perform MHA on GPUs, we need to exploit the highly optimized GEMM sub-routines provided by the hardware vendor. On NVIDIA GPUs, the cuBLAS library is provided to support the basic linear algebra subprograms (BLAS). In this paper, we examine and analyze several sub-routines for handling the series of matrix-matrix multiplications used in the transformer model on NVIDIA GPUs.
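
The abstract does not name the specific sub-routines examined. As an illustrative sketch only (not taken from the paper), the batched Q·K^T step of MHA is commonly mapped onto a strided-batched GEMM such as cuBLAS's cublasSgemmStridedBatched. The example below assumes FP32 and hypothetical dimensions L (sequence length), d (head dimension), and H (number of heads), with Q and K stored row-major per head and packed contiguously across heads.

    /* Minimal sketch (assumptions as stated above): batched Q*K^T for MHA
       via cublasSgemmStridedBatched. Compile with: nvcc mha_gemm.cu -lcublas */
    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <math.h>

    int main(void) {
        const int L = 128;  /* sequence length (assumed) */
        const int d = 64;   /* head dimension (assumed)  */
        const int H = 8;    /* number of heads (assumed) */

        /* Device buffers: Q, K are H blocks of (L x d); S is H blocks of (L x L).
           Left uninitialized here; a real use would fill Q and K first. */
        float *Q, *K, *S;
        cudaMalloc((void **)&Q, (size_t)H * L * d * sizeof(float));
        cudaMalloc((void **)&K, (size_t)H * L * d * sizeof(float));
        cudaMalloc((void **)&S, (size_t)H * L * L * sizeof(float));

        cublasHandle_t handle;
        cublasCreate(&handle);

        /* Scaled dot-product scores: S = (Q K^T) / sqrt(d), one GEMM per head.
           cuBLAS is column-major, so a row-major (L x d) buffer reads as (d x L);
           computing S^T = K Q^T in column-major leaves S row-major in memory. */
        const float alpha = 1.0f / sqrtf((float)d);
        const float beta  = 0.0f;
        cublasSgemmStridedBatched(handle,
            CUBLAS_OP_T, CUBLAS_OP_N,
            L, L, d,
            &alpha,
            K, d, (long long)L * d,  /* A = K, lda = d, stride to next head */
            Q, d, (long long)L * d,  /* B = Q */
            &beta,
            S, L, (long long)L * L,  /* C = S, ldc = L */
            H);                      /* batch count = number of heads */

        cudaDeviceSynchronize();
        cublasDestroy(handle);
        cudaFree(Q); cudaFree(K); cudaFree(S);
        return 0;
    }

A single strided-batched call of this kind covers all heads at once, avoiding the per-head kernel-launch overhead of issuing H separate GEMMs.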
Date of Conference: 19-21 October 2022
Date Added to IEEE Xplore: 25 November 2022
Conference Location: Jeju Island, Republic of Korea

