Error-free transformations of matrix multiplication by using fast routines of matrix multiplication and its applications

  • Original Paper
  • Published in Numerical Algorithms

Abstract

This paper is concerned with accurate matrix multiplication in floating-point arithmetic. Recently, an accurate summation algorithm was developed by Rump et al. (SIAM J Sci Comput 31(1):189–224, 2008). The key technique of their method is a fast error-free splitting of floating-point numbers. Using this technique, we first develop an error-free transformation of a product of two floating-point matrices into a sum of floating-point matrices. Next, we partially apply this error-free transformation and develop an algorithm which aims to output an accurate approximation of the matrix product. In addition, an a priori error estimate is given. It is a characteristic of the proposed method that the dominant part of our algorithm, in terms of computation as well as memory consumption, consists of ordinary floating-point matrix multiplications. Since the routine for matrix multiplication is highly optimized in BLAS, our algorithms achieve good computational performance. Although they require a considerable amount of working memory, our algorithms are significantly faster than 'gemmx' in XBLAS when all matrix dimensions are large enough for 'gemm' to attain nearly peak performance. Numerical examples illustrate the efficiency of the proposed method.
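The idea underlying the abstract can be sketched as follows: each factor is split by a power-of-two shift into a high part with a shortened significand and an exact low-order remainder, so that the product of the high parts incurs no rounding error even when computed by an ordinary floating-point 'gemm'. The NumPy sketch below illustrates this; the function name `split_matrix` and the particular splitting constants are ours for illustration only, and the paper derives the precise splitting and its error analysis.

```python
import numpy as np

def split_matrix(A, k, s=None):
    """One error-free splitting step A = A_hi + A_lo (a sketch).

    k is the inner dimension of the intended matrix product.  s is the
    number of significand bits kept in A_hi; the default is chosen (with
    one bit of slack) so that a product of two such high parts can be
    accumulated over k terms without any rounding error.  The constants
    here are illustrative; the paper gives the exact choices.
    """
    t = 53                                       # bits in a binary64 significand
    if s is None:
        s = (t - int(np.ceil(np.log2(k)))) // 2 - 1
    mu = np.max(np.abs(A), axis=1, keepdims=True)
    mu[mu == 0.0] = 0.5                          # zero rows split trivially
    w = np.exp2(np.ceil(np.log2(mu)) + t - s)    # power-of-two shift per row
    A_hi = (A + w) - w                           # significand truncated to ~s bits
    A_lo = A - A_hi                              # exact remainder: A == A_hi + A_lo
    return A_hi, A_lo

# demo: split both factors; the high-part product A_hi @ B_hi is then exact
rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((4, n))
B = rng.standard_normal((n, 3))
A_hi, A_lo = split_matrix(A, n)
Bt_hi, Bt_lo = split_matrix(B.T, n)              # scale B column-wise via its transpose
B_hi, B_lo = Bt_hi.T, Bt_lo.T
# unevaluated exact sum of floating-point matrices:
#   A @ B = A_hi@B_hi + A_hi@B_lo + A_lo@B_hi + A_lo@B_lo
```

Because every term of the transformed sum is an ordinary matrix product, the dominant cost is plain 'gemm' calls, which is what lets an optimized BLAS carry the method.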


References

  1. IEEE Standard for Floating-Point Arithmetic, Std 754–2008 (2008)

  2. Bailey, D.H.: A Fortran-90 based multiprecision system. ACM Trans. Math. Softw. 21(4), 379–387 (1995)

  3. Dekker, T.J.: A floating-point technique for extending the available precision. Numer. Math. 18(3), 224–242 (1971)

  4. Demmel, J., Hida, Y.: Accurate and efficient floating point summation. SIAM J. Sci. Comput. 25(4), 1214–1248 (2003)

  5. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. The Johns Hopkins University Press, Baltimore (1996)

  6. Goto, K., van de Geijn, R.A.: High-performance implementation of the level-3 BLAS. ACM Trans. Math. Softw. 35(1), Article no. 4 (2008)

  7. Li, X., Demmel, J., Bailey, D., Henry, G., Hida, Y., Iskandar, J., Kahan, W., Kang, S., Kapur, A., Martin, M., Thompson, B., Tung, T., Yoo, D.: Design, implementation and testing of extended and mixed precision BLAS. ACM Trans. Math. Softw. 28(2), 152–205 (2002)

  8. Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM Publications, Philadelphia (2002)

  9. Ogita, T., Rump, S.M., Oishi, S.: Accurate sum and dot product. SIAM J. Sci. Comput. 26, 1955–1988 (2005)

  10. Rump, S.M.: Ultimately fast accurate summation. SIAM J. Sci. Comput. 31(5), 3466–3502 (2009)

  11. Rump, S.M., Ogita, T., Oishi, S.: Accurate floating-point summation part I: faithful rounding. SIAM J. Sci. Comput. 31(1), 189–224 (2008)

  12. Rump, S.M., Ogita, T., Oishi, S.: Accurate floating-point summation part II: sign, K-fold faithful and rounding to nearest. SIAM J. Sci. Comput. 31(2), 1269–1302 (2008)

  13. Whaley, R.C., Petitet, A., Dongarra, J.J.: Automated empirical optimizations of software and the ATLAS project. Parallel Comput. 27, 3–35 (2001)

  14. The MPFR Library: http://www.mpfr.org/. Accessed 7 Feb 2010

  15. exflib—extend precision floating-point arithmetic library: http://www-an.acs.i.kyoto-u.ac.jp/~fujiwara/exflib/exflib-index.html. Accessed 7 Feb 2010

  16. http://www.eecs.berkeley.edu/~yozo/. Accessed 25 Dec 2009

  17. The NIST Sparse BLAS: http://math.nist.gov/spblas/original.html. Accessed 7 Feb 2010

  18. UMFPACK: http://www.cise.ufl.edu/research/sparse/umfpack/. Accessed 7 Feb 2010

  19. MATLAB Programming Version 7, The MathWorks (2005)

  20. Rump, S.M.: INTLAB—INTerval LABoratory. In: Csendes, T. (ed.) Developments in Reliable Computing, pp. 77–104. Kluwer Academic, Dordrecht (1999)

Author information

Corresponding author

Correspondence to Katsuhisa Ozaki.

Cite this article

Ozaki, K., Ogita, T., Oishi, S. et al. Error-free transformations of matrix multiplication by using fast routines of matrix multiplication and its applications. Numer Algor 59, 95–118 (2012). https://doi.org/10.1007/s11075-011-9478-1
