Abstract:
The general matrix-matrix multiplication (GEMM) is a key operation in a variety of areas, such as computational science, data science, and machine learning. In transformers, which are foundation models, Multi-Head Attention (MHA) involves a series of matrix-matrix multiplications. To perform MHA on GPUs, we need to exploit highly optimized GEMM sub-routines provided by the hardware vendor. On NVIDIA GPUs, the cuBLAS library is provided to support the basic linear algebra subprograms (BLAS). In this paper, we examine and analyze several cuBLAS sub-routines for handling the series of matrix-matrix multiplications used in the transformer model on NVIDIA GPUs.
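To make the setting concrete, the following is a minimal sketch of how one cuBLAS batched GEMM sub-routine could be applied to MHA. It computes the per-head Q*K^T attention scores with a single cublasSgemmStridedBatched call instead of a loop of single GEMMs. The sizes (12 heads, sequence length 128, head dimension 64) are hypothetical, and this routine is only one plausible choice among the sub-routines the paper compares; error checking is omitted for brevity.

    /* Sketch under the assumptions above: one strided-batched GEMM
     * computing S_h = Q_h * K_h^T for every head h at once. */
    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    int main(void) {
        const int heads = 12;   /* hypothetical number of attention heads */
        const int n = 128;      /* hypothetical sequence length */
        const int d = 64;       /* hypothetical head dimension */
        const float alpha = 1.0f / 8.0f;  /* 1/sqrt(d) scaling; sqrt(64) = 8 */
        const float beta  = 0.0f;

        /* Device buffers: Q and K each hold `heads` contiguous n x d
           (row-major) slices; S holds `heads` contiguous n x n scores. */
        float *Q, *K, *S;
        cudaMalloc((void **)&Q, sizeof(float) * heads * n * d);
        cudaMalloc((void **)&K, sizeof(float) * heads * n * d);
        cudaMalloc((void **)&S, sizeof(float) * heads * n * n);

        cublasHandle_t handle;
        cublasCreate(&handle);

        /* cuBLAS is column-major, so each row-major n x d slice is seen
           as a column-major d x n matrix. Computing the column-major
           product K * Q^T yields S^T in column-major order, which is
           exactly S = Q * K^T in the row-major layout used above. */
        cublasSgemmStridedBatched(handle,
                                  CUBLAS_OP_T, CUBLAS_OP_N,
                                  n, n, d,
                                  &alpha,
                                  K, d, (long long)n * d,
                                  Q, d, (long long)n * d,
                                  &beta,
                                  S, n, (long long)n * n,
                                  heads);

        cublasDestroy(handle);
        cudaFree(Q); cudaFree(K); cudaFree(S);
        return 0;
    }

The design point this illustrates: a loop of plain cublasSgemm calls launches one kernel per head, whereas a strided-batched routine covers all heads in a single call, which is one way the series of small MHA multiplications can be mapped onto cuBLAS.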
Published in: 2022 13th International Conference on Information and Communication Technology Convergence (ICTC)
Date of Conference: 19-21 October 2022
Date Added to IEEE Xplore: 25 November 2022