
Studying the Numerical Quality of an Industrial Computing Code: A Case Study on Code_aster

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 10381)

Abstract

We present in this paper a process suitable for the complete analysis of the numerical quality of a large industrial scientific computing code. Random rounding, via the Verrou diagnostics tool, is first used to evaluate the numerical stability and to locate the origin of errors in the source code. Once a small part of the code is identified as unstable, it can be isolated and studied using higher-precision computations and interval arithmetic to compute guaranteed reference results. An alternative implementation of the unstable algorithm is then proposed and experimentally evaluated. Finally, error bounds are given for the proposed algorithm, and the effectiveness of the proposed corrections is assessed in the computing code.


Notes

  1. Project page URL: http://github.com/edf-hpc/verrou.

References

  1. Code_Aster: Structures and thermomechanics analysis for studies and research. http://www.code-aster.org/

  2. Benz, F., Hildebrandt, A., Hack, S.: A dynamic program analysis to find floating-point accuracy problems. In: 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pp. 453–462. ACM, New York, June 2012

  3. Denis, C., de Oliveira Castro, P., Petit, E.: Verificarlo: checking floating point accuracy through Monte Carlo arithmetic. In: 23rd IEEE International Symposium on Computer Arithmetic (ARITH 2016) (2016)

  4. de Dinechin, F., Lauter, C., Melquiond, G.: Certifying the floating-point implementation of an elementary function using Gappa. IEEE Trans. Comput. 60(2), 242–253 (2011)

  5. Févotte, F., Lathuilière, B.: VERROU: assessing floating-point accuracy without recompiling, October 2016. https://hal.archives-ouvertes.fr/hal-01383417

  6. Févotte, F., Lathuilière, B.: VERROU: a CESTAC evaluation without recompilation. In: International Symposium on Scientific Computing, Computer Arithmetics and Verified Numerics (SCAN), Uppsala, Sweden, September 2016

  7. Graillat, S., Jézéquel, F., Picot, R.: Numerical validation of compensated algorithms with stochastic arithmetic, September 2016. https://hal.archives-ouvertes.fr/hal-01367769

  8. IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2008, pp. 1–70 (2008)

  9. Jézéquel, F., Chesneaux, J.M., Lamotte, J.L.: A new version of the CADNA library for estimating round-off error propagation in Fortran programs. Comput. Phys. Commun. 181(11), 1927–1928 (2010)

  10. Lam, M.O., Hollingsworth, J.K., Stewart, G.: Dynamic floating-point cancellation detection. Parallel Comput. 39(3), 146–155 (2013)

  11. Lamotte, J.L., Chesneaux, J.M., Jézéquel, F.: CADNA_C: a version of CADNA for use with C or C++ programs. Comput. Phys. Commun. 181(11), 1925–1926 (2010)

  12. Montan, S.: Sur la validation numérique des codes de calcul industriels [On the numerical validation of industrial computing codes]. Ph.D. thesis, Université Pierre et Marie Curie (Paris 6), France (2013). (in French)

  13. Nethercote, N., Seward, J.: Valgrind: a framework for heavyweight dynamic binary instrumentation. In: ACM SIGPLAN 2007 Conference on Programming Language Design and Implementation (PLDI) (2007)

  14. Neumaier, A.: Rundungsfehleranalyse einiger Verfahren zur Summation endlicher Summen [Rounding error analysis of some methods for summing finite sums]. ZAMM (Zeitschrift für Angewandte Mathematik und Mechanik) 54, 39–51 (1974)

  15. Ogita, T., Rump, S.M., Oishi, S.: Accurate sum and dot product. SIAM J. Sci. Comput. 26, 1955–1988 (2005)

  16. Panchekha, P., Sanchez-Stern, A., Wilcox, J.R., Tatlock, Z.: Automatically improving accuracy for floating point expressions. In: ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2015), Portland, Oregon, USA, June 2015

  17. Sanchez-Stern, A., Panchekha, P., Lerner, S., Tatlock, Z.: Finding root causes of floating point error with Herbgrind. arXiv:1705.10416v1 [cs.PL]

  18. Sanders, D.P., Benet, L., Kryukov, N.: The Julia package ValidatedNumerics.jl and its application to the rigorous characterization of open billiard models. In: International Symposium on Scientific Computing, Computer Arithmetics and Verified Numerics (SCAN), Uppsala, Sweden, September 2016

  19. Sterbenz, P.H.: Floating Point Computation. Prentice-Hall, Englewood Cliffs (1974)

  20. Stott Parker, D.: Monte Carlo arithmetic: exploiting randomness in floating-point arithmetic. Technical report CSD-970002, University of California, Los Angeles (1997)

  21. Vignes, J.: A stochastic arithmetic for reliable scientific computation. Math. Comput. Simul. 35, 233–261 (1993)

  22. Zeller, A.: Why Programs Fail, 2nd edn. Morgan Kaufmann, Boston (2009)

Author information

Correspondence to François Févotte.

A Complete Proof

A.1 Case 1: a Almost Equal to b

We first treat the case where the condition in the “if” statement at line 3 of Algorithm 1 applies. In this case, a and b are close enough to one another for Sterbenz's lemma [19] to hold, which means that no additional error is made when computing n:

$$\begin{aligned} n = c - 1 = x\,(1+\varepsilon _{\mathrm {c}}) - 1. \end{aligned}$$

The condition tested at the beginning of Algorithm 1 therefore implies that:

$$\begin{aligned} \frac{1-5\,\mathbf {u}}{1+\mathbf {u}} \le x \le \frac{1+5\,\mathbf {u}}{1-\mathbf {u}}, \end{aligned}$$

and

$$\begin{aligned} \frac{-6\,\mathbf {u}}{1+\mathbf {u}} \le x-1 \le \frac{6\,\mathbf {u}}{1-\mathbf {u}}. \end{aligned}$$
(8)

The algorithm returns a in this case, instead of the exact value

$$\begin{aligned} f(a,b) = a\,\frac{x - 1}{\log (x)}, \end{aligned}$$

so that the relative error is given by:

$$\begin{aligned} e_0&= \frac{a - f(a,b)}{f(a,b)}\nonumber \\&= \frac{\log (x)}{x-1} - 1\nonumber \\&= \frac{\log (1+\epsilon )}{\epsilon } - 1&\text {(where}~\epsilon =x-1\text {)}\nonumber \\&= \frac{1}{\epsilon } \left( \sum _{n=0}^\infty \frac{(-1)^{n} \, \epsilon ^{n+1}}{n+1}\right) -1&\text {(Taylor expansion of the log function)}\nonumber \\&= \sum _{n=1}^\infty \frac{(-1)^{n} \, \epsilon ^{n}}{n+1}. \end{aligned}$$
(9)
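As an illustrative numerical spot check (not part of the paper's proof), the closed form \(\log (1+\epsilon )/\epsilon - 1\) can be compared against a truncation of the series in (9); the function names below are our own:

```python
import math

# Spot check of (9): log(1 + eps)/eps - 1 agrees with the alternating
# series sum_{n>=1} (-1)^n eps^n / (n + 1), truncated here at 30 terms.
def e0_closed(eps):
    # log1p avoids losing accuracy when eps is small
    return math.log1p(eps) / eps - 1.0

def e0_series(eps, terms=30):
    return sum((-1) ** n * eps ** n / (n + 1) for n in range(1, terms + 1))

eps = 1e-3
diff = abs(e0_closed(eps) - e0_series(eps))
# The leading term -eps/2 dominates the sum, so |e0| stays below eps/2.
```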

Assuming \(\epsilon \ge 0\) (note that \(\epsilon < 1\) by (8)), we have

$$\begin{aligned} \forall n \in \mathbb {N}, \qquad \frac{\epsilon ^n}{n+1} > \frac{\epsilon ^{n+1}}{n+2}, \end{aligned}$$

so that grouping terms in pairs in (9) yields

$$\begin{aligned} -\frac{\epsilon }{2} \le e_0 \le 0. \end{aligned}$$

The case where \(\epsilon <0\) is treated similarly, so that we get

$$\begin{aligned} \vert e_0\vert&\le \frac{\vert \epsilon \vert }{2} \le \frac{3\,\mathbf {u}}{1-\mathbf {u}}, \end{aligned}$$

where we injected (8) in the last inequality. This shows that, in this case, the relative error is bounded by 3 ulps to first order.
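The guarded evaluation analyzed in this case can be sketched as follows. Algorithm 1 itself is not reproduced in this excerpt, so the function name, the threshold constant, and the overall structure below are assumptions based on the analysis above, not the paper's exact code:

```python
import math

# Hypothetical sketch of the guarded evaluation of f(a, b) = a*(x-1)/log(x),
# x = b/a: when x is within a few ulps of 1, Sterbenz's lemma makes c - 1
# exact, and simply returning a keeps the relative error within ~3 ulps.
# The threshold below is an assumption, not the paper's exact constant.
U = 2.0 ** -53  # unit roundoff u for IEEE-754 binary64

def f_guarded(a, b):
    c = b / a
    if abs(c - 1.0) <= 6.0 * U:  # singular region, cf. bound (8)
        return a
    return a * (c - 1.0) / math.log(c)

# Away from the singular region the direct formula applies:
lm = f_guarded(1.0, 2.0)  # (2 - 1)/log(2/1)
# One ulp away from a, the guard triggers and a itself is returned:
b = math.nextafter(1.0, 2.0)
res = f_guarded(1.0, b)
```

Without the guard, the subtraction c - 1 and the division by log(c) both approach 0/0 as b approaches a, which is precisely the instability the early return avoids.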

A.2 Case 2a: \(x \notin \left[ \frac{1}{2}, 2\right] \)

We assume in this case that \(x \notin \left[ \frac{1}{2}, 2\right] \), and focus on the sub-case where \(x > 2\) (the other sub-case, \(x<\frac{1}{2}\), can be handled in a similar way).

Starting from (3), and knowing that the logarithm is a monotonically increasing function, we have:

$$\begin{aligned}&\log (c) = \log (x\,(1+\varepsilon _{\mathrm {c}})) = \log (x) + \log (1+\varepsilon _{\mathrm {c}}),\\ \implies&\vert \log (c) - \log (x)\vert \le \log (1+\mathbf {u}) \le \mathbf {u}, \end{aligned}$$

where the last inequality was obtained by noting that the logarithm is convex and that its derivative at 1 is 1. A rather simple Gappa script, presented in Fig. 5, can prove the rest. In this script, all capital letters denote the ideal, real values corresponding to the approximations computed in Algorithm 1 and represented by lower-case letters. We denote \(\texttt {LX} = \log (x)\) and \(\texttt {LE} = \log (c) - \log (x)\). The bound on LE used as a hypothesis comes from the simple computation above; the bounds on LX are those of the logarithm over the range of double-precision floating-point numbers. Other bounds come from double-precision floating-point limits.

Gappa can prove that the relative error produced by Algorithm 1 in this case is bounded by approximately \(8.9\times 10^{-16}\), which is compatible with the bounds stated in Theorem 1. It should be noted, however, that Gappa cannot validate this script for very small values of a, probably indicating a problem with subnormal (denormalized) values.

Fig. 5. Gappa script used to prove case 2a

A.3 Case 2b: \(x\in \left[ \frac{1}{2}, 2\right] \)

We finally study here the case when a and b are close to one another: \(x\in \left[ \frac{1}{2}, 2\right] \). Let us define

$$\begin{aligned} g(x)&= \frac{\log (x)}{x-1}, \end{aligned}$$

so that, recalling the expression of \(E_1\) from (7),

$$\begin{aligned} E_1&= \frac{g(x)}{g(c)} = \frac{g(x)}{g(x + x\,\varepsilon _{\mathrm {c}})}. \end{aligned}$$

We have:

$$\begin{aligned} \vert g(x+x\,\varepsilon _{\mathrm {c}}) - g(x) \vert&\le x\,\vert \varepsilon _{\mathrm {c}}\vert \; \sup _{y\in [x, x+x\,\varepsilon _{\mathrm {c}}]} \left| g^\prime (y)\right| \\&\le x\,\vert \varepsilon _{\mathrm {c}}\vert \; \sup _{y\in [\frac{1-\mathbf {u}}{2}, 2+2\mathbf {u}]} \left| g^\prime (y)\right| \\&\le 0.6 \; \vert \varepsilon _{\mathrm {c}}\vert , \end{aligned}$$

where the last inequality was obtained by noticing that

$$ \forall y\in \left[ \frac{1-\mathbf {u}}{2}, 2+2\mathbf {u}\right] , \quad g^\prime (y)\in [-0.3, -0.1], $$

as shown by a simple interval analysis. A similar interval analysis shows that

$$ \forall y\in \left[ \frac{1}{2}, 2\right] , \quad g(y) \ge \frac{1}{2}, $$

so that

$$\begin{aligned} \left| \frac{g(x+x\,\varepsilon _{\mathrm {c}}) - g(x)}{g(x)} \right|&\le 1.2\;\vert \varepsilon _{\mathrm {c}}\vert , \end{aligned}$$

and thus

$$\begin{aligned} \frac{1}{1+1.2\,\vert \varepsilon _{\mathrm {c}}\vert } \le E_1 \le \frac{1}{1-1.2\,\vert \varepsilon _{\mathrm {c}}\vert }. \end{aligned}$$
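As a quick numerical cross-check of the claim that \(g(y) \ge \frac{1}{2}\) on \(\left[ \frac{1}{2}, 2\right] \), a simple grid sampling (ours, not the proof's rigorous interval analysis) can be used:

```python
import math

# Grid-sampling cross-check (not the rigorous interval analysis of the
# proof) that g(y) = log(y)/(y - 1) stays at or above 1/2 on [1/2, 2].
# g is decreasing on this interval, so its minimum is g(2) = log(2) ~ 0.693.
def g(y):
    return math.log(y) / (y - 1.0) if y != 1.0 else 1.0  # g(1) := limit = 1

grid = [0.5 + k * 1.5 / 10000 for k in range(10001)]
g_min = min(g(y) for y in grid)  # attained at y = 2
```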

Putting all previous results together, we therefore have

$$\begin{aligned} \frac{\left( 1-\mathbf {u}\right) ^3}{\left( 1+1.2\,\mathbf {u}\right) \left( 1+\mathbf {u}\right) } \le 1+e \le \frac{\left( 1+\mathbf {u}\right) ^3}{\left( 1-1.2\,\mathbf {u}\right) \left( 1-\mathbf {u}\right) }, \end{aligned}$$

which proves that, to first order, the relative error in this case is bounded by 6 ulps. It is interesting to note that, depending on the specific floating-point implementation of the logarithm, l might not be correctly rounded, and the error term \(\varepsilon _{\mathrm {l}}\) might be bounded by several ulps. Should this happen, the relative error on the result of Algorithm 1 would be higher, but still bounded.
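As a final illustrative check (ours, not part of the paper), the enclosure above can be evaluated exactly in rational arithmetic for binary64 (\(\mathbf {u} = 2^{-53}\)); both one-sided errors come out as \(5.2\,\mathbf {u} + O(\mathbf {u}^2)\), consistent with the 6-ulp first-order bound:

```python
from fractions import Fraction

# Exact rational evaluation of the enclosure
#   (1-u)^3/((1+1.2u)(1+u)) <= 1+e <= (1+u)^3/((1-1.2u)(1-u))
# for binary64, u = 2^-53. Both one-sided errors are 5.2u + O(u^2),
# i.e. strictly between 5 and 6 ulps.
u = Fraction(1, 2 ** 53)
upper = (1 + u) ** 3 / ((1 - Fraction(6, 5) * u) * (1 - u)) - 1
lower = 1 - (1 - u) ** 3 / ((1 + Fraction(6, 5) * u) * (1 + u))
```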


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Févotte, F., Lathuilière, B. (2017). Studying the Numerical Quality of an Industrial Computing Code: A Case Study on Code_aster. In: Abate, A., Boldo, S. (eds) Numerical Software Verification. NSV 2017. Lecture Notes in Computer Science, vol. 10381. Springer, Cham. https://doi.org/10.1007/978-3-319-63501-9_5


  • DOI: https://doi.org/10.1007/978-3-319-63501-9_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63500-2

  • Online ISBN: 978-3-319-63501-9
