
Efficient minimization of numerical summation errors

Conference paper

Automata, Languages and Programming (ICALP 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1443)

Abstract

Given a multiset X = {x_1, ..., x_n} of real numbers, the floating-point set summation problem (FPS) asks for S_n = x_1 + ... + x_n, and the floating-point prefix set summation problem (FPPS) asks for S_k = x_1 + ... + x_k for all k = 1, ..., n. Let E*_k denote the minimum worst-case error over all possible orderings of evaluating S_k. We prove that if X contains both positive and negative numbers, it is NP-hard to compute S_n with worst-case error equal to E*_n. We then give the first known polynomial-time approximation algorithm for computing S_n that has a provably small error for arbitrary X. Our algorithm incurs a worst-case error of at most 2(⌈log(n−1)⌉+1)E*_n. After X is sorted, it runs in O(n) time, yielding an O(n^2)-time approximation algorithm for computing S_k for all k = 1, ..., n such that the worst-case error for each S_k is less than 2(⌈log(k−1)⌉+1)E*_k.
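As a quick illustration of why the evaluation order matters for FPS (this sketch is not from the paper; the data values and function name are hypothetical), the following Python fragment sums the same multiset in two different orders and obtains two different floating-point results:

```python
# A minimal sketch, assuming IEEE double precision: summing a mix of large and
# small magnitudes in different orders yields different rounded results. The
# FPS problem asks for an ordering whose worst-case error is as small as possible.

def left_to_right_sum(xs):
    """Sum the numbers in the given order with ordinary float addition."""
    total = 0.0
    for x in xs:
        total += x
    return total

if __name__ == "__main__":
    # Hypothetical example: one large value plus many tiny ones.
    xs = [1e16] + [1.0] * 1000
    print(left_to_right_sum(xs))          # tiny terms are absorbed: 1e16
    print(left_to_right_sum(sorted(xs)))  # small terms summed first survive: 1e16 + 1000
```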

For the case where X is either all positive or all negative, we give another approximation algorithm for computing S_n with a worst-case error of at most ⌈log log n⌉E*_n. Even for unsorted X, this algorithm runs in O(n) time. Previously, the best linear-time approximation algorithm had a worst-case error of at most ⌈log n⌉E*_n, while E*_n was known to be attainable in O(n log n) time using Huffman coding. Consequently, FPPS is solvable in O(n^2) time such that the worst-case error for each S_k is the minimum. To improve this quadratic time bound in practice, we design two on-line algorithms that compute the next S_k by reusing the current S_k, thereby avoiding redundant computation.
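The Huffman-coding ordering mentioned above, which attains E*_n for all-positive input in O(n log n) time, can be sketched as follows. This is the previously known method the abstract contrasts against, not the paper's new O(n)-time approximation algorithm; the function name and example values are illustrative only.

```python
import heapq

def huffman_order_sum(xs):
    """Sum a list of positive floats by repeatedly adding the two smallest
    partial sums (the Huffman-coding ordering referred to in the abstract).
    Runs in O(n log n) time using a binary min-heap. A sketch of the
    previously known method, not the paper's new algorithm."""
    heap = list(xs)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)   # smallest remaining partial sum
        b = heapq.heappop(heap)   # second smallest
        heapq.heappush(heap, a + b)
    return heap[0] if heap else 0.0

# Example usage (hypothetical data):
# print(huffman_order_sum([0.1, 0.2, 0.3, 4.0, 50.0]))
```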

Supported in part by NSF Grant CCR-9531028.

Supported in part by NSF Grant CCR-9424164.

Editor information

Kim G. Larsen, Sven Skyum, Glynn Winskel


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kao, MY., Wang, J. (1998). Efficient minimization of numerical summation errors. In: Larsen, K.G., Skyum, S., Winskel, G. (eds) Automata, Languages and Programming. ICALP 1998. Lecture Notes in Computer Science, vol 1443. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0055068

  • DOI: https://doi.org/10.1007/BFb0055068

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64781-2

  • Online ISBN: 978-3-540-68681-1

  • eBook Packages: Springer Book Archive
