Abstract
We implemented the quadruple precision Basic Linear Algebra Subprograms (BLAS) functions AXPY, GEMV, and GEMM on graphics processing units (GPUs) and evaluated their performance. We used DD-type quadruple precision operations, which combine two double precision values to represent a single quadruple precision value. On an NVIDIA Tesla C1060, our BLAS functions are up to approximately 30 times faster than the existing quadruple precision BLAS on an Intel Core i7 920. Moreover, quadruple precision AXPY takes only approximately 2.7 times longer than double precision AXPY on the Tesla C1060. These results show that quadruple precision BLAS operations are well suited to GPUs.
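To illustrate the DD format described in the abstract, the sketch below shows how a quadruple precision value can be represented as the unevaluated sum of two doubles and used in an AXPY kernel. This is a minimal CUDA sketch for illustration only, not the authors' implementation; the helper names (two_sum, two_prod, dd_axpy) and the double2 storage layout are assumptions.

```cuda
#include <cuda_runtime.h>

// A DD value packs the unevaluated sum hi + lo of two doubles.
typedef double2 dd_real;   // .x = hi, .y = lo

// Error-free addition of two doubles (Knuth's TwoSum).
__device__ void two_sum(double a, double b, double &s, double &e) {
    s = a + b;
    double bb = s - a;
    e = (a - (s - bb)) + (b - bb);
}

// Faster variant, valid when |a| >= |b| (Dekker's QuickTwoSum).
__device__ void quick_two_sum(double a, double b, double &s, double &e) {
    s = a + b;
    e = b - (s - a);
}

// Error-free multiplication of two doubles via fused multiply-add.
__device__ void two_prod(double a, double b, double &p, double &e) {
    p = a * b;
    e = fma(a, b, -p);
}

// DD addition: TwoSum on the high parts plus the low-order terms.
__device__ dd_real dd_add(dd_real a, dd_real b) {
    double s, e;
    two_sum(a.x, b.x, s, e);
    e += a.y + b.y;
    quick_two_sum(s, e, s, e);
    return make_double2(s, e);
}

// DD multiplication: FMA-based TwoProd plus the cross terms.
__device__ dd_real dd_mul(dd_real a, dd_real b) {
    double p, e;
    two_prod(a.x, b.x, p, e);
    e += a.x * b.y + a.y * b.x;
    quick_two_sum(p, e, p, e);
    return make_double2(p, e);
}

// Quadruple (DD) precision AXPY: y <- alpha * x + y, one element per thread.
__global__ void dd_axpy(int n, dd_real alpha, const dd_real *x, dd_real *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = dd_add(dd_mul(alpha, x[i]), y[i]);
}
```

Because each DD operation expands into a fixed, branch-free sequence of double precision instructions, such kernels remain memory-bound on the GPU, which is consistent with the small slowdown of quadruple precision AXPY relative to double precision AXPY reported in the abstract.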
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Mukunoki, D., Takahashi, D. (2012). Implementation and Evaluation of Quadruple Precision BLAS Functions on GPUs. In: Jónasson, K. (eds) Applied Parallel and Scientific Computing. PARA 2010. Lecture Notes in Computer Science, vol 7133. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28151-8_25
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-28150-1
Online ISBN: 978-3-642-28151-8