OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: A Framework for Error-Bounded Approximate Computing, with an Application to Dot Products

Journal Article · SIAM Journal on Scientific Computing
DOI: https://doi.org/10.1137/21m1406994 · OSTI ID: 1959416
 [1];  [1];  [1]
  1. Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States)

Approximate computing techniques, which trade computation accuracy for better performance and energy efficiency, have been successful in reducing computation and power costs in several domains. However, error-sensitive applications in high-performance computing cannot benefit from existing approximate computing strategies that are not developed with guaranteed error bounds. While approximate computing techniques can be developed for individual high-performance computing applications by domain specialists, this often requires additional theoretical analysis and potentially extensive software modification. Hence, it is desirable to develop low-level error-bounded approximate computing strategies that can be introduced into any high-performance computing application without requiring additional analysis or significant software alterations. In this paper, we contribute in this direction by proposing a general framework for designing error-bounded approximate computing strategies and applying it to the dot product kernel to develop qdot, an error-bounded approximate dot product kernel. Following the introduction of qdot, we perform a theoretical analysis that yields a deterministic bound on the relative approximation error introduced by qdot. Empirical tests illustrate the tightness of the derived error bound and demonstrate the effectiveness of qdot on a synthetic dataset as well as two scientific benchmarks, the conjugate gradient (CG) and power methods. In some instances, using qdot for the dot products in CG allows many components to be quantized to half precision without increasing the iteration count required to converge to the same solution as CG with a double-precision dot product.
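The abstract's central idea, quantizing dot product components to half precision while keeping a deterministic relative error bound, can be illustrated with a minimal sketch. This is not the qdot algorithm from the paper (whose quantization criterion and error analysis are more involved); it is a hypothetical error-aware mixed-precision dot product in which each term is computed from float16 operands only when the quantization perturbs both operands by less than a user-chosen tolerance, which yields a provable per-term bound on the introduced error.

```python
import numpy as np

def approx_dot(x, y, tol=1e-3):
    """Illustrative error-aware mixed-precision dot product (NOT the paper's qdot).

    A term x[i]*y[i] is formed from float16-quantized operands only when casting
    each operand to float16 changes it by at most `tol` in relative terms;
    otherwise the term is computed in double precision. Since each accepted
    operand satisfies |xh - x| <= tol*|x|, the absolute error of the result is
    bounded by (2*tol + tol**2) * sum(|x[i]*y[i]|), a deterministic bound.
    """
    total = 0.0
    for xi, yi in zip(x, y):
        xh, yh = np.float16(xi), np.float16(yi)
        # Accept quantization only if the relative perturbation of each
        # operand stays within tol (zero operands quantize exactly).
        ok = (xi == 0 or abs(float(xh) - xi) <= tol * abs(xi)) and \
             (yi == 0 or abs(float(yh) - yi) <= tol * abs(yi))
        # The product itself is accumulated in double precision either way.
        total += float(xh) * float(yh) if ok else xi * yi
    return total

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)
exact = float(np.dot(x, y))
approx = approx_dot(x, y, tol=1e-3)
```

Because float16 rounding introduces a relative error of at most about 4.9e-4 for normalized values, a tolerance of 1e-3 lets most components of a standard normal vector be quantized, mirroring the abstract's observation that many CG dot product components can drop to half precision.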

Research Organization:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA); USDOE Laboratory Directed Research and Development (LDRD) Program
Grant/Contract Number:
AC52-07NA27344
OSTI ID:
1959416
Report Number(s):
LLNL-JRNL-820357; 1031489
Journal Information:
SIAM Journal on Scientific Computing, Vol. 44, Issue 3; ISSN 1064-8275
Publisher:
Society for Industrial and Applied Mathematics (SIAM)
Country of Publication:
United States
Language:
English
