
Improving Automatic Complexity Analysis of Integer Programs

Chapter in: The Logic of Software. A Tasting Menu of Formal Methods

Abstract

In [16], we developed an approach for automatic complexity analysis of integer programs, based on an alternating modular inference of upper runtime and size bounds for program parts. In this paper, we show how recent techniques to improve automated termination analysis of integer programs (like the generation of multiphase-linear ranking functions and control-flow refinement) can be integrated into our approach for the inference of runtime bounds. The power of the resulting approach is demonstrated by an extensive experimental evaluation with our new re-implementation of the corresponding tool KoAT.

Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 235950644 (Project GI 274/6-2) and DFG Research Training Group 2236 UnRAVeL.


Notes

  1.

    So in the special case where \(\mathcal {T}'_{>} = \mathcal {T}'\) and \(\mathcal {T}'\) is a singleton, our Lemma 18 corresponds to [12, Lemma 6] for nested \(\text {M}\varPhi \text {RFs}\).

  2.

    As usual, a graph is strongly connected if there is a path from every node to every other node. A strongly connected component is a maximal strongly connected sub-graph.

  3.

    As usual, an SCC is non-trivial if it contains at least one transition.

  4.

    In [20], different heuristics are presented to choose such abstraction layers. In our implementation, we use these heuristics as a black box.

  5.

    To ensure the equivalence of the transformed program according to Definition 23, we call iRankFinder with a flag to prevent the “chaining” of transitions. This ensures that partial evaluation does not change the lengths of evaluations.

References

  1. Ahrendt, W., Beckert, B., Bubel, R., Hähnle, R., Schmitt, P.H., Ulbrich, M.: Deductive Software Verification - The KeY Book - From Theory to Practice. LNCS, vol. 10001. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-319-49812-6


  2. Albert, E., Arenas, P., Genaim, S., Puebla, G.: Automatic inference of upper bounds for recurrence relations in cost analysis. In: Alpuente, M., Vidal, G. (eds.) SAS 2008. LNCS, vol. 5079, pp. 221–237. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-69166-2_15


  3. Albert, E., Arenas, P., Genaim, S., Puebla, G., Zanardini, D.: Cost analysis of object-oriented bytecode programs. Theor. Comput. Sci. 413(1), 142–159 (2012). https://doi.org/10.1016/j.tcs.2011.07.009


  4. Albert, E., Genaim, S., Masud, A.N.: On the inference of resource usage upper and lower bounds. ACM Trans. Comput. Log. 14(3), 22:1–22:35 (2013). https://doi.org/10.1145/2499937.2499943

  5. Albert, E., Bubel, R., Genaim, S., Hähnle, R., Puebla, G., Román-Díez, G.: A formal verification framework for static analysis. Softw. Syst. Model. 15(4), 987–1012 (2015). https://doi.org/10.1007/s10270-015-0476-y


  6. Albert, E., Bofill, M., Borralleras, C., Martín-Martín, E., Rubio, A.: Resource analysis driven by (conditional) termination proofs. Theory Pract. Log. Program. 19(5–6), 722–739 (2019). https://doi.org/10.1017/S1471068419000152


  7. Albert, E., Genaim, S., Martin-Martin, E., Merayo, A., Rubio, A.: Lower-bound synthesis using loop specialization and Max-SMT. In: Silva, A., Leino, K.R.M. (eds.) CAV 2021. LNCS, vol. 12760, pp. 863–886. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81688-9_40


  8. Alias, C., Darte, A., Feautrier, P., Gonnord, L.: Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs. In: Cousot, R., Martel, M. (eds.) SAS 2010. LNCS, vol. 6337, pp. 117–133. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15769-1_8


  9. Avanzini, M., Moser, G.: A combination framework for complexity. In: van Raamsdonk, F. (ed.) RTA 2013. LIPIcs, vol. 21, pp. 55–70. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2013). https://doi.org/10.4230/LIPIcs.RTA.2013.55

  10. Avanzini, M., Moser, G., Schaper, M.: TcT: Tyrolean complexity tool. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 407–423. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_24


  11. Ben-Amram, A.M., Genaim, S.: Ranking functions for linear-constraint loops. J. ACM 61(4), 26:1–26:55 (2014). https://doi.org/10.1145/2629488

  12. Ben-Amram, A.M., Genaim, S.: On multiphase-linear ranking functions. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 601–620. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_32


  13. Ben-Amram, A.M., Doménech, J.J., Genaim, S.: Multiphase-linear ranking functions and their relation to recurrent sets. In: Chang, B.-Y.E. (ed.) SAS 2019. LNCS, vol. 11822, pp. 459–480. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32304-2_22


  14. Borralleras, C., Brockschmidt, M., Larraz, D., Oliveras, A., Rodríguez-Carbonell, E., Rubio, A.: Proving termination through conditional termination. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10205, pp. 99–117. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54577-5_6


  15. Bradley, A.R., Manna, Z., Sipma, H.B.: The polyranking principle. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 1349–1361. Springer, Heidelberg (2005). https://doi.org/10.1007/11523468_109


  16. Brockschmidt, M., Emmes, F., Falke, S., Fuhs, C., Giesl, J.: Analyzing runtime and size complexity of integer programs. ACM Trans. Program. Lang. Syst. 38(4), 13:1–13:50 (2016). https://doi.org/10.1145/2866575

  17. Carbonneaux, Q., Hoffmann, J., Shao, Z.: Compositional certified resource bounds. In: Grove, D., Blackburn, S.M. (eds.) PLDI 2015, pp. 467–478 (2015). https://doi.org/10.1145/2737924.2737955

  18. Clang Compiler. https://clang.llvm.org/

  19. Doménech, J.J., Genaim, S.: “iRankFinder”. In: Lucas, S. (ed.) WST 2018, p. 83 (2018). http://wst2018.webs.upv.es/wst2018proceedings.pdf

  20. Doménech, J.J., Gallagher, J.P., Genaim, S.: Control-flow refinement by partial evaluation, and its application to termination and cost analysis. Theory Pract. Log. Program. 19(5–6), 990–1005 (2019). https://doi.org/10.1017/S1471068419000310


  21. Falke, S., Kapur, D., Sinz, C.: Termination analysis of C programs using compiler intermediate languages. In: Schmidt-Schauss, M. (ed.) RTA 2011. LIPIcs, vol. 10, pp. 41–50. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2011). https://doi.org/10.4230/LIPIcs.RTA.2011.41

  22. Flores-Montoya, A., Hähnle, R.: Resource analysis of complex programs with cost equations. In: Garrigue, J. (ed.) APLAS 2014. LNCS, vol. 8858, pp. 275–295. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12736-1_15


  23. Flores-Montoya, A.: Upper and lower amortized cost bounds of programs expressed as cost relations. In: Fitzgerald, J., Heitmeyer, C., Gnesi, S., Philippou, A. (eds.) FM 2016. LNCS, vol. 9995, pp. 254–273. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48989-6_16


  24. Frohn, F., Giesl, J.: Complexity analysis for Java with AProVE. In: Polikarpova, N., Schneider, S. (eds.) IFM 2017. LNCS, vol. 10510, pp. 85–101. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66845-1_6


  25. Frohn, F., Giesl, J., Hensel, J., Aschermann, C., Ströder, T.: Lower bounds for runtime complexity of term rewriting. J. Autom. Reason. 59(1), 121–163 (2016). https://doi.org/10.1007/s10817-016-9397-x


  26. Frohn, F., Giesl, J.: Proving non-termination via loop acceleration. In: Barrett, C.W., Yang, J. (eds.) FMCAD 2019, pp. 221–230 (2019). https://doi.org/10.23919/FMCAD.2019.8894271

  27. Frohn, F., Naaf, M., Brockschmidt, M., Giesl, J.: Inferring lower runtime bounds for integer programs. ACM Trans. Program. Lang. Syst. 42(3), 13:1–13:50 (2020). https://doi.org/10.1145/3410331

  28. Gallagher, J.P.: Polyvariant program specialisation with property-based abstraction. In: VPT@Programming. EPTCS, vol. 299, pp. 34–48 (2019). https://doi.org/10.4204/EPTCS.299.6

  29. Giesl, J., et al.: Analyzing program termination and complexity automatically with AProVE. J. Autom. Reason. 58(1), 3–31 (2016). https://doi.org/10.1007/s10817-016-9388-y


  30. Giesl, J., Giesl, P., Hark, M.: Computing expected runtimes for constant probability programs. In: Fontaine, P. (ed.) CADE 2019. LNCS (LNAI), vol. 11716, pp. 269–286. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29436-6_16


  31. Giesl, J., Rubio, A., Sternagel, C., Waldmann, J., Yamada, A.: The termination and complexity competition. In: Beyer, D., Huisman, M., Kordon, F., Steffen, B. (eds.) TACAS 2019. LNCS, vol. 11429, pp. 156–166. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17502-3_10


  32. Hoffmann, J., Aehlig, K., Hofmann, M.: Multivariate amortized resource analysis. ACM Trans. Program. Lang. Syst. 34(3), 14:1–14:62 (2012). https://doi.org/10.1145/2362389.2362393

  33. Hoffmann, J., Das, A., Weng, S.-C.: Towards automatic resource bound analysis for OCaml. In: Castagna, G., Gordon, A.D. (eds.) POPL 2017, pp. 359–373 (2017). https://doi.org/10.1145/3009837.3009842

  34. Jeannet, B., Miné, A.: Apron: a library of numerical abstract domains for static analysis. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 661–667. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02658-4_52


  35. Kaminski, B.L., Katoen, J.-P., Matheja, C.: Expected runtime analysis by program verification. In: Barthe, G., Katoen, J.-P., Silva, A. (eds.) Foundations of Probabilistic Programming, pp. 185–220. Cambridge University Press (2020). https://doi.org/10.1017/9781108770750.007

  36. Königsberger, K.: Analysis 1, 6th edn. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-642-18490-1


  37. Lattner, C., Adve, V.S.: LLVM: a compilation framework for lifelong program analysis & transformation. In: CGO 2004, pp. 75–88. IEEE Computer Society (2004). https://doi.org/10.1109/CGO.2004.1281665

  38. Leike, J., Heizmann, M.: Ranking templates for linear loops. Log. Methods Comput. Sci. 11(1) (2015). https://doi.org/10.2168/LMCS-11(1:16)2015

  39. Meyer, F., Hark, M., Giesl, J.: Inferring expected runtimes of probabilistic integer programs using expected sizes. In: TACAS 2021. LNCS, vol. 12651, pp. 250–269. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72016-2_14


  40. Moser, G., Schaper, M.: From Jinja bytecode to term rewriting: a complexity reflecting transformation. Inf. Comput. 261, 116–143 (2018). https://doi.org/10.1016/j.ic.2018.05.007


  41. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24


  42. KoAT: Web Interface, Experiments, Source Code, Binary, and Docker Image. https://aprove-developers.github.io/ComplexityMprfCfr/

  43. Noschinski, L., Emmes, F., Giesl, J.: Analyzing innermost runtime complexity of term rewriting by dependency pairs. J. Autom. Reason. 51(1), 27–56 (2013). https://doi.org/10.1007/s10817-013-9277-6


  44. Podelski, A., Rybalchenko, A.: A complete method for the synthesis of linear ranking functions. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 239–251. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24622-0_20


  45. RaML (Resource Aware ML). https://www.raml.co/interface/

  46. Sinn, M., Zuleger, F., Veith, H.: Complexity and resource bound analysis of imperative programs using difference constraints. J. Autom. Reason. 59(1), 3–45 (2017). https://doi.org/10.1007/s10817-016-9402-4


  47. Srikanth, A., Sahin, B., Harris, W.R.: Complexity verification using guided theorem enumeration. In: Castagna, G., Gordon, A.D. (eds.) POPL 2017, pp. 639–652 (2017). https://doi.org/10.1145/3009837.3009864

  48. TPDB (Termination Problems Data Base). https://github.com/TermCOMP/TPDB

  49. Wang, D., Hoffmann, J.: Type-guided worst-case input generation. Proc. ACM Program. Lang. 3(POPL), 13:1–13:30 (2019). https://doi.org/10.1145/3290326

  50. Yuan, Y., Li, Y., Shi, W.: Detecting multiphase linear ranking functions for single-path linear-constraint loops. Int. J. Softw. Tools Technol. Transfer 23(1), 55–67 (2019). https://doi.org/10.1007/s10009-019-00527-1



Acknowledgments

This paper is dedicated to Reiner Hähnle whose ground-breaking results on functional verification and symbolic execution of Java programs with the KeY tool [1], on automatic resource analysis [22], and on its combination with deductive verification (e.g., [5]) were a major inspiration for us. Reiner’s work motivated us to develop and improve KoAT such that it can be used as a backend for complexity analysis of languages like Java [24].

We are indebted to Samir Genaim and Jesús J. Doménech for their help and advice with integrating multiphase-linear ranking functions and partial evaluation into our approach, and for providing us with a suitable version of iRankFinder which we could use in KoAT’s backend. Moreover, we are grateful to Albert Rubio and Enrique Martín-Martín for providing us with a static binary of MaxCore, to Antonio Flores-Montoya and Florian Zuleger for their help in running CoFloCo and Loopus for our experiments, and to Florian Frohn for help and advice.

Correspondence to Jürgen Giesl.


A Proofs

A.1 Proof of Lemma 18

We first present lemmas which give an upper and a lower bound for sums of powers. These lemmas will be needed in the proof of Lemma 18.

Lemma 28

(Upper Bound for Sums of Powers). For any \(i\ge 2\) and \(k \ge 1\) we have \(\sum _{j=1}^{k-1} j^{i-2} \le \tfrac{k^{i-1}}{i-1}\).

Proof

We have \(\sum _{j=1}^{k-1} j^{i-2} \; \le \; \sum _{j=1}^{k-1} \int _{j}^{j+1} x^{i-2} \, dx \; \le \; \int _{0}^{k} x^{i-2} \, dx \; = \; \tfrac{k^{i-1}}{i-1}\).    \(\square \)
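Since Lemma 28 is a purely arithmetic statement, it can also be sanity-checked by brute force. The following sketch (the helper name `sum_powers_upper_bound_holds` is ours, not from the paper) compares both sides of the inequality after cross-multiplying by \(i-1\), so that the whole check stays in exact integer arithmetic:

```python
def sum_powers_upper_bound_holds(i, k):
    # Lemma 28: sum_{j=1}^{k-1} j^(i-2) <= k^(i-1) / (i-1) for i >= 2, k >= 1.
    # Cross-multiplied by (i-1) so both sides are integers.
    lhs = sum(j ** (i - 2) for j in range(1, k))
    return lhs * (i - 1) <= k ** (i - 1)

# brute-force check over a grid of (i, k) values
assert all(sum_powers_upper_bound_holds(i, k)
           for i in range(2, 9) for k in range(1, 60))
print("Lemma 28 verified for 2 <= i <= 8 and 1 <= k <= 59")
```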

For the lower bound, we use the summation formula of Euler (see, e.g., [36]).

Lemma 29

(Summation Formula of Euler). We define the periodic function \(H : \mathbb {R}\rightarrow \mathbb {R}\) as \(H(x) = x - \lfloor x \rfloor - \tfrac{1}{2}\) if \(x \in \mathbb {R}\setminus \mathbb {Z}\) and as \(H(x) = 0\) if \(x\in \mathbb {Z}\). Note that H(x) is bounded by \(-\tfrac{1}{2}\) and \(\tfrac{1}{2}\). Then for any continuously differentiable function \(f: [1,n] \rightarrow \mathbb {C}\) with \(n\in \mathbb {N}\), we have \(\sum _{j=1}^{k} f(j) \; = \; \int _{1}^{k} f(x) \, dx + \tfrac{1}{2}\cdot (f(1)+f(k)) + \int _{1}^{k} H(x)\cdot f'(x)\, dx\).
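For polynomial \(f\), every summand in Euler's formula has a closed form, so the identity can be checked exactly. The sketch below (our own helper `euler_sum`, not from the paper) does this for \(f(x) = x^p\) with exact rational arithmetic, using that \(\tfrac{p}{p+1}x^{p+1} - (j+\tfrac{1}{2})x^p\) is an antiderivative of \((x-j-\tfrac{1}{2})\cdot p\cdot x^{p-1}\) on each interval \([j, j+1]\):

```python
from fractions import Fraction

def euler_sum(p, k):
    """Right-hand side of Euler's summation formula for f(x) = x^p on [1, k],
    evaluated exactly with rationals."""
    f = lambda x: Fraction(x) ** p
    integral = (Fraction(k) ** (p + 1) - 1) / (p + 1)   # int_1^k x^p dx
    boundary = Fraction(1, 2) * (f(1) + f(k))
    # int_1^k H(x) * p * x^(p-1) dx, evaluated piecewise on [j, j+1],
    # where H(x) = x - j - 1/2 on that interval
    correction = Fraction(0)
    for j in range(1, k):
        F = lambda x: (Fraction(p, p + 1) * Fraction(x) ** (p + 1)
                       - (j + Fraction(1, 2)) * Fraction(x) ** p)
        correction += F(j + 1) - F(j)
    return integral + boundary + correction

for p in range(1, 6):
    for k in range(1, 15):
        assert euler_sum(p, k) == sum(Fraction(j) ** p for j in range(1, k + 1))
print("Euler's summation formula holds exactly for f(x) = x^p on the tested range")
```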

This then leads to the following result.

Lemma 30

(Lower Bound for Sums of Powers). For any \(i\ge 2\) and \(k \ge 1\) we have \(\sum _{j=1}^{k-1} j^{i-1}\ge \tfrac{k^{i}}{i} - k^{i-1}\).

Proof

Consider \(f(x) = x^i\) with the derivative \(f'(x) = i \cdot x^{i-1}\). We get

$$\begin{aligned} \sum _{j=1}^{k} j^i \; = \; \int _{1}^{k} x^i \, dx + \tfrac{1}{2}\cdot (1+k^i) + \int _{1}^{k} H(x)\cdot i \cdot x^{i-1}\, dx \; = \; \tfrac{k^{i+1}}{i+1} + R, \end{aligned}$$

where \(R = -\tfrac{1}{i+1} + \tfrac{1}{2}\cdot (1+k^i) + \int _{1}^{k} H(x)\cdot i \cdot x^{i-1}\, dx\).

Since \(\left| H(x)\right| \le \tfrac{1}{2}\), we have \(\left| \int _{1}^{k} H(x) \cdot i \cdot x^{i-1} \, dx\right| \; \le \; \tfrac{1}{2} \cdot \left| \int _{1}^{k} i \cdot x^{i-1} \, dx\right| \; =\; \tfrac{1}{2} \cdot i \cdot \left| \tfrac{k^i}{i} - \tfrac{1}{i}\right| \; = \; \tfrac{k^i - 1}{2}\). Thus, we obtain

$$ - \tfrac{1}{i+1} + \tfrac{1}{2} \cdot (1+k^i) + \tfrac{k^i - 1}{2} \ge R \ge - \tfrac{1}{i+1} + \tfrac{1}{2} \cdot (1+k^i) - \tfrac{k^i - 1}{2} $$

or, equivalently \(- \tfrac{1}{i+1} + k^i \ge R \ge - \tfrac{1}{i+1} +1\). This implies \(k^i> R > 0\). Hence, we get \(\sum _{j=1}^{k} j^i \; = \; \tfrac{k^{i+1}}{i+1} + R \; \ge \; \tfrac{k^{i+1}}{i+1}\) and thus, \(\sum _{j=1}^{k-1} j^i \; = \; \sum _{j=1}^{k} j^i - k^i \; \ge \; \tfrac{k^{i+1}}{i+1} - k^i\). With the index shift \(i\rightarrow i - 1\) we finally obtain the lower bound \(\sum _{j=1}^{k-1} j^{i-1}\ge \tfrac{k^{i}}{i} - k^{i-1}\).    \(\square \)

Proof of Lemma 18. To ease notation, in this proof \(\ell _0\) does not denote the initial location of the program \(\mathcal {T}\), but an arbitrary location from \(\mathcal {L}\). Then we can write \((\ell _0,\sigma _0)\) instead of \((\ell ,\sigma )\), \((\ell _n,\sigma _n)\) instead of \((\ell ',\sigma ')\), and consider an evaluation

$$ (\ell _{0},\sigma _{0}) \, (\rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} \circ \rightarrow _{\mathcal {T}'_{>}}) \, (\ell _{1},\sigma _{1}) \, (\rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} \circ \rightarrow _{\mathcal {T}'_{>}}) \, \ldots \, (\rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} \circ \rightarrow _{\mathcal {T}'_{>}}) \, (\ell _{n},\sigma _{n}). $$

Let \(M = \max \{0,\sigma _0\left( f_1(\ell _0)\right) , \dots , \sigma _0\left( f_d(\ell _0)\right) \}\). We first prove that for all \(1 \le i \le d\) and all \(0 \le k \le n\), we have

$$\begin{aligned} \sigma _k(f_i(\ell _k)) \; \le \; -k \;\; \text {if } M = 0 \quad \text {and} \quad \sigma _k(f_i(\ell _k)) \; \le \; \gamma _i \cdot M \cdot k^{i-1} - \tfrac{k^i}{i!} \;\; \text {if } M > 0. \end{aligned}$$
(2)

The proof is done by induction on i. So in the base case, we have \(i = 1\). Since \(\gamma _1 = 1\), we have to show that \(\sigma _k\left( f_1(\ell _k)\right) \le M \cdot k^{0} - \tfrac{k^1}{1!} = M - k\).

For all \(0 \le j \le k-1\), the step from \((\ell _j,\sigma _j)\) to \((\ell _{j+1},\sigma _{j+1})\) corresponds to the evaluation of transitions from \(\mathcal {T}'\setminus \mathcal {T}'_{>}\) followed by a transition from \(\mathcal {T}'_{>}\), i.e., we have \((\ell _j,\sigma _j) \rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} (\ell '_j,\sigma '_j) \rightarrow _{\mathcal {T}'_{>}} (\ell _{j+1},\sigma _{j+1})\) for some configuration \((\ell '_j,\sigma '_j)\). Since f is an \(\text {M}\varPhi \text {RF}\) and all transitions in \(\mathcal {T}'\setminus \mathcal {T}'_{>}\) are non-increasing, we obtain \(\sigma _j(f_1(\ell _j)) \ge \sigma '_{j}(f_1(\ell '_{j}))\). Moreover, since the transitions in \(\mathcal {T}'_{>}\) are decreasing, we have \(\sigma '_j(f_{0}(\ell '_j)) + \sigma '_j(f_{1}(\ell '_j)) = \sigma '_j(f_{1}(\ell '_j)) \ge \sigma _{j+1}(f_{1}(\ell _{j+1})) + 1\). So together, this implies \(\sigma _j(f_{1}(\ell _j)) \ge \sigma _{j+1}(f_{1}(\ell _{j+1})) + 1\) and thus, \(\sigma _0\left( f_1(\ell _0)\right) \ge \sigma _1\left( f_1(\ell _1)\right) + 1\ge \ldots \ge \sigma _k\left( f_1(\ell _k)\right) + k\) or equivalently, \(\sigma _0\left( f_1(\ell _0)\right) - k\ge \sigma _k\left( f_1(\ell _k)\right) \). Furthermore, we have \(\sigma _0\left( f_1(\ell _0)\right) \le \max \{0,\sigma _0\left( f_1(\ell _0)\right) ,\dots ,\sigma _0\left( f_d(\ell _0)\right) \} = M\). Hence, we obtain \(\sigma _k\left( f_1(\ell _k)\right) \le \sigma _0\left( f_1(\ell _0)\right) - k \le M - k\). So in particular, if \(M = 0\), then we have \(\sigma _k(f_1(\ell _k)) \le -k\).

In the induction step, we assume that for all \(0 \le k \le n\), we have \(\sigma _k(f_{i-1}(\ell _k)) \, \le \, -k\) if \(M = 0\) and \(\sigma _k(f_{i-1}(\ell _k)) \; \le \; \gamma _{i-1} \cdot M \cdot k^{i-2} - \tfrac{k^{i-1}}{(i-1)!}\) if \(M > 0\). To show that these inequalities also hold for \(i\), we first transform \(\sigma _k(f_{i}(\ell _k))\) into a telescoping sum.

$$\begin{aligned} \sigma _k\left( f_i(\ell _k)\right) = \sigma _0\left( f_i(\ell _0)\right) + \sum _{j=0}^{k-1} (\sigma _{j+1}\left( f_i(\ell _{j+1})\right) - \sigma _{j}\left( f_i(\ell _{j})\right) ) \end{aligned}$$

For all \(0 \le j \le k-1\), the step from \((\ell _j,\sigma _j)\) to \((\ell _{j+1},\sigma _{j+1})\) again has the form \((\ell _j,\sigma _j) \rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} (\ell '_j,\sigma '_j) \rightarrow _{\mathcal {T}'_{>}} (\ell _{j+1},\sigma _{j+1})\) for some configuration \((\ell '_j,\sigma '_j)\). Since f is an \(\text {M}\varPhi \text {RF}\) and all transitions in \(\mathcal {T}'\setminus \mathcal {T}'_{>}\) are non-increasing, we obtain \(\sigma _j(f_{i-1}(\ell _j)) \ge \sigma '_{j}(f_{i-1}(\ell '_{j}))\) and \(\sigma _j(f_i(\ell _j)) \ge \sigma '_{j}(f_i(\ell '_{j}))\). Moreover, since the transitions in \(\mathcal {T}'_{>}\) are decreasing, we have \(\sigma '_j(f_{i-1}(\ell '_j)) + \sigma '_j(f_{i}(\ell '_j)) \ge \sigma _{j+1}(f_{i}(\ell _{j+1})) + 1\). So together, this implies \(\sigma _j(f_{i-1}(\ell _j)) + \sigma _j(f_{i}(\ell _j)) \ge \sigma _{j+1}(f_{i}(\ell _{j+1})) + 1\) or equivalently, \(\sigma _{j + 1}\left( f_i (\ell _{j+1})\right) - \sigma _{j}\left( f_i (\ell _j)\right) < \sigma _{j}\left( f_{i-1} (\ell _{j})\right) \). Hence, we obtain

$$\begin{aligned} \sigma _{k}\left( f_i(\ell _{k})\right)&= \sigma _{0}\left( f_i(\ell _{0})\right) + \sum _{j=0}^{k-1} (\sigma _{j+1}\left( f_i(\ell _{j+1})\right) - \sigma _{j}\left( f_i(\ell _{j})\right) ) \\&< \sigma _{0}\left( f_i(\ell _{0})\right) + \sum _{j=0}^{k-1} \sigma _{j}\left( f_{i-1}(\ell _{j})\right) . \end{aligned}$$

If \(M = 0\), then we obviously have \(\sigma _{0}(f_i (\ell _{0})) \le 0\) for all \(1\le i \le d\). For \(k \ge 1\), we obtain

$$\begin{aligned}&\sigma _0\left( f_i(\ell _0)\right) + \sum _{j=0}^{k-1} \sigma _{j}\left( f_{i-1}(\ell _{j})\right) \\ {} \le {}&0 + \sum _{j=0}^{k-1} -j \qquad \qquad \qquad \text {(by the induction hypothesis)} \\ {} \le {}&- k + 1. \end{aligned}$$

Hence, we have \(\sigma _k\left( f_i(\ell _k)\right) < - k + 1\) and thus, \(\sigma _k\left( f_i(\ell _k)\right) \le -k\).

If \(M > 0\), then we obtain

$$\begin{aligned}&\sigma _0\left( f_i(\ell _0)\right) + \sum _{j=0}^{k-1} \sigma _{j}\left( f_{i-1}(\ell _{j})\right) \\ {} \le {}&M + \sum _{j=0}^{k-1} \left( \gamma _{i-1} \cdot M \cdot j^{i-2} - \tfrac{j^{i-1}}{(i-1)!}\right) \qquad \text {(by the induction hypothesis)} \\ {} \le {}&M + \gamma _{i-1} \cdot M \cdot \tfrac{k^{i-1}}{i-1} - \tfrac{1}{(i-1)!} \cdot \left( \tfrac{k^{i}}{i} - k^{i-1}\right) \qquad \text {(by Lemmas 28 and 30)} \\ {} \le {}&\gamma _i \cdot M \cdot k^{i-1} - \tfrac{k^{i}}{i!} \qquad \text {(as } M \ge 1, \, k \ge 1, \text { and } \gamma _i = 2 + \tfrac{\gamma _{i-1}}{i-1} + \tfrac{1}{(i-1)!}\text {).} \end{aligned}$$

Hence, (2) is proved.

In the case \(M = 0\), (2) implies \(\sigma _n(f_i(\ell _n)) \le -n \le -\beta = -1 < 0\) for all \(1\le i \le d\) which proves the lemma.

Hence, it remains to consider the case \(M > 0\). Now (2) implies

$$\begin{aligned} \sigma _n(f_i(\ell _n)) \; \le \; \gamma _i \cdot M \cdot n^{i-1} - \tfrac{n^i}{i!}. \end{aligned}$$
(3)

We now prove that for \(i > 1\) we always have \(i! \cdot \gamma _i \ge (i-1)! \cdot \gamma _{i-1}\).

$$\begin{aligned}&i! \cdot \gamma _i \\ {} = {}&i! \cdot \left( 2 + \tfrac{\gamma _{i-1}}{i-1} + \tfrac{1}{(i-1)!}\right) \\ {} = {}&i! \cdot 2 + i \cdot (i-2)! \cdot \gamma _{i-1} + i \\ {} \ge {}&(i-1) \cdot (i-2)! \cdot \gamma _{i-1} \\ {} = {}&(i-1)! \cdot \gamma _{i-1}. \end{aligned}$$

Thus,

$$\begin{aligned} d! \cdot \gamma _d \ge i! \cdot \gamma _i \quad \text {for all } 1 \le i \le d. \end{aligned}$$
(4)

Hence, for \(n \ge \beta = 1 + d! \cdot \gamma _d \cdot M\) we obtain:

$$\begin{aligned} \sigma _n(f_i(\ell _n)) \; \le \; \gamma _i \cdot M \cdot n^{i-1} - \tfrac{n^i}{i!} \; = \; \tfrac{n^{i-1}}{i!} \cdot \left( i! \cdot \gamma _i \cdot M - n\right) \; \le \; \tfrac{n^{i-1}}{i!} \cdot \left( d! \cdot \gamma _d \cdot M - n\right) \; < \; 0, \end{aligned}$$

where the second inequality holds by (4) and the last one since \(n \ge \beta = 1 + d! \cdot \gamma _d \cdot M > d! \cdot \gamma _d \cdot M\).

Finally, to show that \(\beta \in \mathbb {N}\), note that by induction on i, one can easily prove that \((i-1)! \cdot \gamma _i \in \mathbb {N}\) holds for all \(i \ge 1\). Hence, in contrast to \(\gamma _i\), the number \(i! \cdot \gamma _i\) is a natural number for all \(i \in \mathbb {N}\). This implies \(\beta \in \mathbb {N}\).    \(\square \)
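The quantities from this proof can also be checked mechanically. In the sketch below, the loop `while x >= 0: x += y; y -= 1` with the nested \(\text {M}\varPhi \text {RF}\) \(\langle y+1, x \rangle \) of depth \(d = 2\) is our own illustrative example, not taken from the paper; the code computes the \(\gamma _i\) from the recurrence \(\gamma _1 = 1\) and \(\gamma _i = 2 + \tfrac{\gamma _{i-1}}{i-1} + \tfrac{1}{(i-1)!}\), verifies that \((i-1)! \cdot \gamma _i\) is a natural number, and confirms the resulting runtime bound \(\beta = 1 + d! \cdot \gamma _d \cdot M\) on sampled inputs:

```python
import math
from fractions import Fraction

def gammas(d):
    """gamma_1, ..., gamma_d via the recurrence from the proof:
    gamma_1 = 1 and gamma_i = 2 + gamma_{i-1}/(i-1) + 1/(i-1)!."""
    gs = [Fraction(1)]
    for i in range(2, d + 1):
        gs.append(2 + gs[-1] / (i - 1) + Fraction(1, math.factorial(i - 1)))
    return gs

# (i-1)! * gamma_i is a natural number, hence beta is a natural number
for i, g in enumerate(gammas(10), start=1):
    assert (math.factorial(i - 1) * g).denominator == 1

def loop_steps(x, y):
    """Count iterations of 'while x >= 0: x += y; y -= 1',
    which has the nested MPhiRF <y + 1, x> of depth d = 2."""
    steps = 0
    while x >= 0:
        x, y = x + y, y - 1
        steps += 1
    return steps

d = 2
gamma_d = gammas(d)[-1]          # gamma_2 = 2 + 1/1 + 1/1! = 4
for x0 in range(0, 25):
    for y0 in range(-5, 25):
        M = max(0, y0 + 1, x0)   # M = max{0, f_1, f_2} in the initial state
        beta = 1 + math.factorial(d) * gamma_d * M
        assert loop_steps(x0, y0) <= beta
print("runtime bound beta = 1 + d! * gamma_d * M verified on all sampled inputs")
```

The assertion inside the double loop is exactly the consequence of Lemma 18 for this example: the decreasing transition can fire at most \(\beta \) times.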

A.2 Proof of Theorem 20

Proof

We prove Theorem 20 by showing that for all \(t\in \mathcal {T}\) and all \(\sigma _0 \in \varSigma \) we have

$$\begin{aligned} \left| \sigma _0\right| \left( {\mathcal {RB}}'(t)\right) \ge \sup \lbrace k \in \mathbb {N}\mid \ell \in \mathcal {L}, \sigma \in \varSigma , (\ell _0, \sigma _0) \, (\rightarrow ^* \circ \rightarrow _{t})^k \, (\ell , \sigma ) \rbrace . \end{aligned}$$
(5)

The case \(t \notin \mathcal {T}'_{>}\) is trivial, since \({\mathcal {RB}}'(t) = {\mathcal {RB}}(t)\) and \({\mathcal {RB}}\) is a runtime bound.

Now we prove (5) for a transition \(t_>\in \mathcal {T}'_{>}\), i.e., we show that for all \(\sigma _0 \in \varSigma \) we have

$$ \begin{array}{r@{\;\;}c@{\;\;}l} \left| \sigma _0\right| \left( {\mathcal {RB}}'(t_>)\right) &{} = &{} \sum _{\ell \in \mathcal {E}_{\mathcal {T}'}} \sum _{t \in \mathcal {T}_\ell } \left| \sigma _0\right| \left( {\mathcal {RB}}(t)\right) \cdot \left| \sigma _0\right| \left( {\mathcal {SB}}(t, \cdot )(\beta _{\ell })\right) \\ &{} \ge &{} \sup \lbrace k \in \mathbb {N}\mid \ell \in \mathcal {L}, \sigma \in \varSigma , (\ell _0, \sigma _0) \, (\rightarrow ^* \circ \rightarrow _{t_>})^k \, (\ell , \sigma ) \rbrace . \end{array} $$

So let \((\ell _0, \sigma _0) \, (\rightarrow ^* \circ \rightarrow _{t_>})^k \, (\ell , \sigma )\); we have to show \(\left| \sigma _0\right| \left( {\mathcal {RB}}'(t_>)\right) \ge k\). If \(k = 0\), then we clearly have \(\left| \sigma _0\right| \left( {\mathcal {RB}}'\left( t_{>}\right) \right) \ge 0 = k\). Hence, we consider \(k > 0\). We represent the evaluation as follows:

$$ \begin{array}{l@{\quad }c@{\quad }l@{\quad }c} (\ell _0, \sigma _0) &{} \rightarrow ^{\tilde{k}_0}_{\mathcal {T}\setminus \mathcal {T}'} &{} ({\tilde{\ell }}_1, {\tilde{\sigma }}_1) &{} \rightarrow ^{k_1'}_{\mathcal {T}'} \\ (\ell _1, \sigma _1) &{} \rightarrow ^{\tilde{k}_1}_{\mathcal {T}\setminus \mathcal {T}'} &{} ({\tilde{\ell }}_2, {\tilde{\sigma }}_2) &{} \rightarrow ^{k_2'}_{\mathcal {T}'} \\ &{} &{} \ldots \\ (\ell _{m-1}, \sigma _{m-1}) &{} \rightarrow ^{\tilde{k}_{m-1}}_{\mathcal {T}\setminus \mathcal {T}'} &{} ({\tilde{\ell }}_m, {\tilde{\sigma }}_m) &{} \rightarrow ^{k_m'}_{\mathcal {T}'} \\ (\ell _m, \sigma _m) \end{array} $$

So for the evaluation from \((\ell _i, \sigma _i)\) to \(({\tilde{\ell }}_{i+1}, {\tilde{\sigma }}_{i+1})\) we only use transitions from \(\mathcal {T}\setminus \mathcal {T}'\), and for the evaluation from \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\) to \((\ell _i, \sigma _i)\) we only use transitions from \(\mathcal {T}'\). Thus, \(t_>\) can only occur in the following finite sequences of evaluation steps:

$$\begin{aligned} ({\tilde{\ell }}_i, {\tilde{\sigma }}_i) \rightarrow _{\mathcal {T}'} ({\tilde{\ell }}_{i,1}, {\tilde{\sigma }}_{i,1}) \rightarrow _{\mathcal {T}'} \dots \rightarrow _{\mathcal {T}'} ({\tilde{\ell }}_{i,k_i'-1}, {\tilde{\sigma }}_{i,k_i'-1}) \rightarrow _{\mathcal {T}'} (\ell _i, \sigma _i). \end{aligned}$$
(6)

For every \(1 \le i \le m\), let \(k_i \le k_i'\) be the number of times that \(t_>\) is used in the evaluation (6). Clearly, we have

$$\begin{aligned} \sum _{i=1}^{m} k_i = k. \end{aligned}$$
(7)

By Lemma 18, all functions \(f_1,\dots ,f_d\) are negative after executing \(t_>\) at least \(1 + d! \cdot \gamma _d \cdot \max \{0, {\tilde{\sigma }}_i(f_1({\tilde{\ell }}_i)),\dots , {\tilde{\sigma }}_i(f_d({\tilde{\ell }}_i))\}\) times in an evaluation with \(\mathcal {T}'\). If all the \(f_i\) are negative, then \(t_>\) cannot be executed anymore as f is an \(\text {M}\varPhi \text {RF}\) for \(\mathcal {T}'_{>}\) with \(t_> \in \mathcal {T}'_{>}\) and \(\mathcal {T}'\). Thus, for all \(1 \le i \le m\) we have

$$\begin{aligned} 1 + d! \cdot \gamma _d\cdot \max \left\{ 0,{\tilde{\sigma }}_i\left( f_1({\tilde{\ell }}_i)\right) ,\dots ,{\tilde{\sigma }}_i\left( f_d({\tilde{\ell }}_i)\right) \right\} \ge k_i. \end{aligned}$$
(8)

Let \(t_i\) be the entry transition reaching \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\), i.e., \({\tilde{\ell }}_i \in \mathcal {E}_{\mathcal {T}'}\) and \(t_i \in \mathcal {T}_{{\tilde{\ell }}_i}\). As \((\ell _0, \sigma _0) \rightarrow ^*_\mathcal {T}\circ \rightarrow _{t_i} ({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\), by Definition 12 we have \(\left| \sigma _0\right| \left( {\mathcal {SB}}(t_i, v)\right) \ge |{\tilde{\sigma }}_i(v)|\) for all \(v \in \mathcal {PV}\) and thus,

$$\begin{aligned} k_i \; \le \; 1 + d! \cdot \gamma _d \cdot \max \left\{ 0,{\tilde{\sigma }}_i\left( f_1({\tilde{\ell }}_i)\right) ,\dots ,{\tilde{\sigma }}_i\left( f_d({\tilde{\ell }}_i)\right) \right\} \; \le \; \left| \sigma _0\right| \left( {\mathcal {SB}}(t_i, \cdot )(\beta _{{\tilde{\ell }}_i})\right) , \end{aligned}$$

where the first inequality is (8) and the second one holds since \(\left| \sigma _0\right| \left( {\mathcal {SB}}(t_i, v)\right) \ge |{\tilde{\sigma }}_i(v)|\) for all \(v \in \mathcal {PV}\).

In the last part of this proof we need to analyze how often such evaluations \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i) \rightarrow ^*_{\mathcal {T}'} (\ell _i, \sigma _i)\) can occur. Again, let \(t_i\) be the entry transition reaching \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\). Every entry transition \(t_i\) can occur at most \(\left| \sigma _0\right| \left( {\mathcal {RB}}(t_i)\right) \) times in the complete evaluation, as \({\mathcal {RB}}\) is a runtime bound. Thus, we have

$$\begin{aligned} k \; = \; \sum _{i=1}^{m} k_i \; \le \; \sum _{\ell \in \mathcal {E}_{\mathcal {T}'}} \sum _{t \in \mathcal {T}_\ell } \left| \sigma _0\right| \left( {\mathcal {RB}}(t)\right) \cdot \left| \sigma _0\right| \left( {\mathcal {SB}}(t, \cdot )(\beta _{\ell })\right) \; = \; \left| \sigma _0\right| \left( {\mathcal {RB}}'(t_>)\right) \qquad \text {(by (7)).} \end{aligned}$$

   \(\square \)

A.3 Proof of Theorem 24

Let \(\mathcal {P}' = (\mathcal {PV},\mathcal {L}',\ell _0,\mathcal {T}')\). First note that for every evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}'} (\ell ',\sigma )\) there is obviously also a corresponding evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}} (\ell ,\sigma )\). To obtain the evaluation with \(\mathcal {T}\) one simply has to remove the labels from the locations. Then the claim follows because the guards of the transitions in \(\mathcal {T}'\) always imply the guards of the respective original transitions in \(\mathcal {T}\) and the updates of the transitions have not been modified in the transformation from \(\mathcal {T}\) to \(\mathcal {T}'\).

For the other direction, we show by induction on \(k \in \mathbb {N}\) that for every evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}} (\ell ,\sigma )\) there is a corresponding evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}'} (\ell ',\sigma )\) where either \(\ell ' = \ell \) or \(\ell ' = \langle \ell , \varphi \rangle \) for some constraint \(\varphi \) with \(\sigma (\varphi ) = \texttt {true}\).

In the induction base, we have \(k = 0\) and the claim is trivial. In the induction step \(k > 0\) the evaluation has the form

$$ (\ell _0,\sigma _0)\rightarrow _{t_1}(\ell _1,\sigma _1)\rightarrow _{t_2} \cdots \rightarrow _{t_{k-1}}(\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k}(\ell _{k},\sigma _{k}) $$

with \(t_1, \ldots , t_k \in \mathcal {T}\). By the induction hypothesis, there is a corresponding evaluation

$$ (\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\ell _{k-1}',\sigma _{k-1}) $$

with \(t_1', \ldots , t_{k-1}' \in \mathcal {T}'\) where \(\ell _{k-1}' = \ell _{k-1}\) or \(\ell _{k-1}' = \langle \ell _{k-1}, \varphi \rangle \) for some constraint \(\varphi \) with \(\sigma _{k-1}(\varphi ) = \texttt {true}\). We distinguish two cases:

  • Case 1: \(t_{k}\not \in \mathcal {T}_ SCC \). If \(\ell _{k-1}' = \ell _{k-1}\) and \(\ell _k \notin \mathcal {E}_{\mathcal {T}_{ SCC }}\), then \(t_k\) has not been modified in the transformation from \(\mathcal {P}\) to \(\mathcal {P}'\). Thus, we have the evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\ell _{k-1}',\sigma _{k-1}) = (\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\) with \(t_k \in \mathcal {T}'\). If \(\ell _{k-1}' = \ell _{k-1}\) and \(\ell _k \in \mathcal {E}_{\mathcal {T}_{ SCC }}\), then for \(t_k = (\ell _{k-1},\tau ,\eta ,\ell _{k})\), we set \(\ell _k' = \langle \ell _k, \texttt {true}\rangle \) and obtain that \(t_k' = (\ell _{k-1},\tau ,\eta ,\ell _{k}') \in \mathcal {T}'\). So we get the evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\ell _{k-1}',\sigma _{k-1}) = (\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k'} (\ell _{k}',\sigma _{k})\). Finally, we regard the case \(\ell _{k-1}' = \langle \ell _{k-1}, \varphi \rangle \) where \(\sigma _{k-1}(\varphi ) = \texttt {true}\). As \(t_k = (\ell _{k-1},\tau ,\eta ,\ell _{k})\in \mathcal {T}\setminus \mathcal {T}_{ SCC }\), and \(\mathcal {T}_{ SCC }\) is an SCC, there is a \(t_k' = (\langle \ell _{k-1}, \varphi \rangle , \varphi \wedge \tau , \eta , \ell _{k}) \in \mathcal {T}'\). Then \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'} (\ell _{k-1}',\sigma _{k-1}) = (\langle \ell _{k-1}, \varphi \rangle ,\sigma _{k-1}) \rightarrow _{t_{k}'}(\ell _{k},\sigma _{k})\) is an evaluation with \(\mathcal {T}'\). The evaluation step with \(t_k'\) is possible, since \(\sigma _{k-1}(\varphi ) = \texttt {true}\) and \(\sigma _{k-1}(\tau ) = \texttt {true}\) (due to the evaluation step \((\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\)). 
Note that the step with \(t_k'\) also results in the state \(\sigma _{k}\), because both \(t_k\) and \(t_k'\) have the same update \(\eta \).

  • Case 2: \(t_{k}\in \mathcal {T}_{ SCC }\). Here, \(\ell _{k-1}'\) has the form \(\langle \ell _{k-1}, \varphi \rangle \) where \(\sigma _{k-1}(\varphi ) = \texttt {true}\). As \(\ell _k\) is part of the SCC and hence has an incoming transition from \(\mathcal {T}_{ SCC }\), at some point it is refined by Algorithm 2. Thus, for \(t_k = (\ell _{k-1}, \tau ,\eta , \ell _{k})\), there is some \(t_{k}' = \left( \langle \ell _{k-1}, \varphi \rangle ,\varphi \wedge \tau ,\eta , \left\langle \ell _{k},\alpha _{\ell _{k}}(\varphi _{ new })\right\rangle \right) \in \mathcal {T}'\) where \(\alpha _{\ell _{k}}(\varphi _{ new })\) is constructed as in Line 8. This leads to the corresponding evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\langle \ell _{k-1}, \varphi \rangle ,\sigma _{k-1})\rightarrow _{t_{k}'}(\langle \ell _{k}, \alpha _{\ell _{k}}(\varphi _{ new })\rangle ,\sigma _{k})\). Again, the evaluation step with \(t_k'\) is possible, since \(\sigma _{k-1}(\varphi ) = \texttt {true}\) and \(\sigma _{k-1}(\tau ) = \texttt {true}\) (due to the evaluation step \((\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\)). And again, the step with \(t_k'\) also results in the state \(\sigma _{k}\), because both \(t_k\) and \(t_k'\) have the same update \(\eta \). Finally, note that we have \(\sigma _k(\alpha _{\ell _{k}}(\varphi _{ new })) = \texttt {true}\). The reason is that \(\models (\varphi \wedge \tau ) \rightarrow \eta (\varphi _{ new })\) and \(\sigma _{k-1}(\varphi \wedge \tau )= \texttt {true}\) imply \(\sigma _{k-1}(\eta (\varphi _{ new })) = \texttt {true}\). Hence, we also have \(\sigma _{k}(\varphi _{ new }) = \sigma _{k-1}(\eta (\varphi _{ new })) = \texttt {true}\). Therefore, \(\models \varphi _{ new } \rightarrow \alpha _{\ell _{k}}(\varphi _{ new })\) implies \(\sigma _{k}(\alpha _{\ell _{k}}(\varphi _{ new }))= \texttt {true}\).    \(\square \)
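The lifting argument used in both cases above can be illustrated by a minimal Python sketch. All names here are hypothetical (the actual construction is Algorithm 2): a transition \(t = (\ell, \tau, \eta, \ell')\) is lifted to a transition whose source is the labeled location \(\langle \ell, \varphi \rangle\) and whose guard is \(\varphi \wedge \tau\). Whenever a state satisfies both \(\varphi\) and \(\tau\), the lifted transition is enabled and yields the same successor state, since the update \(\eta\) is unchanged.

```python
# Hypothetical minimal model of the lifting step: a transition
# t = (src, guard tau, update eta, dst) is lifted to
# t' = (<src, phi>, phi AND tau, eta, dst). If a state satisfies
# phi and tau, the step with t' is enabled and yields the same
# successor state, because t and t' share the update eta.

def step(transition, sigma):
    """Fire a transition on state sigma; return the successor state,
    or None if the guard is not satisfied."""
    src, guard, update, dst = transition
    if not guard(sigma):
        return None
    return update(sigma)

def lift(transition, phi):
    """Label the source location with the constraint phi and
    conjoin phi to the guard, as in the refinement."""
    src, guard, update, dst = transition
    labeled_src = (src, phi)
    return (labeled_src, lambda s: phi(s) and guard(s), update, dst)

# Example transition t = (l1, x > 0, x := x - 1, l2) over states {"x": ...}.
t = ("l1", lambda s: s["x"] > 0, lambda s: {"x": s["x"] - 1}, "l2")
phi = lambda s: s["x"] >= 1          # a constraint known to hold at l1
t_lifted = lift(t, phi)

sigma = {"x": 3}                     # sigma satisfies phi and the guard
assert step(t, sigma) == step(t_lifted, sigma) == {"x": 2}
```

The final assertion checks exactly the property used in the proof: on states satisfying the label, the original and the lifted transition agree on enabledness and on the resulting state.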

1.4 A.4 Proof of Theorem 25

Let \(\mathcal {P}' = (\mathcal {PV},\mathcal {L}',\ell _0,\mathcal {T}')\) result from \(\mathcal {P}\) by Algorithm 3. As in the proof of Theorem 24, for every evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}'} (\ell ',\sigma )\) there is also a corresponding evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}} (\ell ,\sigma )\), which is obtained by removing the labels from the locations.

For the other direction, we show that for each evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1}(\ell _1,\sigma _1)\rightarrow _{t_2} \cdots \rightarrow _{t_k}(\ell _k,\sigma _k)\) with \(t_1,\ldots , t_k \in \mathcal {T}\) there is a corresponding evaluation \((\ell _0,\sigma _0)\rightarrow _{\mathcal {T}'}^k (\ell _k',\sigma _k)\) in \(\mathcal {P}'\). To obtain this evaluation, we handle each evaluation fragment that uses a program \(\mathcal {Q}\) from \(\mathcal {S}\) separately. This is possible since different programs in \(\mathcal {S}\) do not share locations, i.e., entry and outgoing transitions of \(\mathcal {Q}\) cannot be part of another program \(\mathcal {Q}'\) from \(\mathcal {S}\). Such an evaluation fragment has the form

$$\begin{aligned} (\ell _i,\sigma _i)\rightarrow _{t_{i+1}}(\ell _{i+1},\sigma _{i+1}) \rightarrow _{t_{i+2}} \cdots \rightarrow _{t_{n-1}}(\ell _{n-1},\sigma _{n-1}) \rightarrow _{t_n}(\ell _n,\sigma _n) \end{aligned}$$
(9)

where \(t_{i+1}\) is an entry transition to \(\mathcal {Q}\), \(t_n\) is an outgoing transition from \(\mathcal {Q}\), and the transitions \(t_{i+2}, \ldots , t_{n-1}\) belong to \(\mathcal {Q}\). By Theorem 24 it follows that there is a corresponding evaluation using the transitions \(t_{i+2}', \ldots , t_{n-1}'\) from the refined version of \(\mathcal {Q}\), such that with the new redirected entry transition \(t_{i+1}'\) and the new redirected outgoing transition \(t_n'\) we have

$$\begin{aligned} (\ell _i,\sigma _i)\rightarrow _{t_{i+1}'}(\ell _{i+1}',\sigma _{i+1}) \rightarrow _{t_{i+2}'} \cdots \rightarrow _{t_{n - 1}'}(\ell _{n - 1}',\sigma _{n - 1}) \rightarrow _{t_{n}'}(\ell _n,\sigma _n) \end{aligned}$$
(10)

Thus, by substituting each evaluation fragment (9) in an evaluation of \(\mathcal {P}\) by its refinement (10), we obtain a corresponding evaluation in \(\mathcal {P}'\).    \(\square \)
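The fragment substitution in this proof can be sketched as follows. The code below is a hypothetical illustration (not the paper's implementation): an evaluation is represented as a list of transitions, maximal fragments "entry transition, \(\mathcal {Q}\)-transitions, outgoing transition" are identified, and each such fragment is replaced by its refined counterpart. Since different programs in \(\mathcal {S}\) do not share locations, fragments never overlap.

```python
# Hypothetical sketch of the fragment substitution in the proof above:
# scan an evaluation (a list of transitions) for maximal fragments of
# Q-transitions together with their surrounding entry and outgoing
# transitions, and replace each fragment by its refinement.

def substitute_fragments(evaluation, in_Q, refine):
    """Replace each maximal fragment (entry, Q-part, outgoing) of the
    evaluation by refine(fragment); all other transitions are kept."""
    result, i = [], 0
    while i < len(evaluation):
        # an entry transition is one directly followed by a Q-transition
        if i + 1 < len(evaluation) and in_Q(evaluation[i + 1]):
            j = i + 1
            while j < len(evaluation) and in_Q(evaluation[j]):
                j += 1
            # fragment = entry, Q-part, and the outgoing transition (if any)
            end = min(j + 1, len(evaluation))
            result.extend(refine(evaluation[i:end]))
            i = end
        else:
            result.append(evaluation[i])
            i += 1
    return result

# Toy transitions named by strings; "q*" transitions belong to Q.
# Refinement is mimicked by priming the transitions of the fragment.
ev = ["t1", "entry", "q1", "q2", "out", "t2"]
refined = substitute_fragments(ev,
                               in_Q=lambda t: t.startswith("q"),
                               refine=lambda frag: [t + "'" for t in frag])
assert refined == ["t1", "entry'", "q1'", "q2'", "out'", "t2"]
```

In the proof, the role of `refine` is played by Theorem 24, which guarantees that each fragment (9) has a corresponding refined evaluation (10) with the same start and end states.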


Copyright information

© 2022 Springer Nature Switzerland AG


Cite this chapter

Giesl, J., Lommen, N., Hark, M., Meyer, F. (2022). Improving Automatic Complexity Analysis of Integer Programs. In: Ahrendt, W., Beckert, B., Bubel, R., Johnsen, E.B. (eds) The Logic of Software. A Tasting Menu of Formal Methods. Lecture Notes in Computer Science, vol 13360. Springer, Cham. https://doi.org/10.1007/978-3-031-08166-8_10


  • Print ISBN: 978-3-031-08165-1

  • Online ISBN: 978-3-031-08166-8
