Abstract
In [16], we developed an approach for automatic complexity analysis of integer programs, based on an alternating modular inference of upper runtime and size bounds for program parts. In this paper, we show how recent techniques to improve automated termination analysis of integer programs (like the generation of multiphase-linear ranking functions and control-flow refinement) can be integrated into our approach for the inference of runtime bounds. The power of the resulting approach is demonstrated by an extensive experimental evaluation with our new re-implementation of the corresponding tool KoAT.
Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 235950644 (Project GI 274/6-2) and DFG Research Training Group 2236 UnRAVeL.
Notes
- 1.
- 2. As usual, a graph is strongly connected if there is a path from every node to every other node. A strongly connected component is a maximal strongly connected sub-graph.
- 3. As usual, an SCC is non-trivial if it contains at least one transition.
- 4. In [20], different heuristics are presented to choose such abstraction layers. In our implementation, we use these heuristics as a black box.
- 5. To ensure the equivalence of the transformed program according to Definition 23, we call iRankFinder with a flag to prevent the “chaining” of transitions. This ensures that partial evaluation does not change the lengths of evaluations.
References
Ahrendt, W., Beckert, B., Bubel, R., Hähnle, R., Schmitt, P.H., Ulbrich, M.: Deductive Software Verification - The KeY Book - From Theory to Practice. LNCS, vol. 10001. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-319-49812-6
Albert, E., Arenas, P., Genaim, S., Puebla, G.: Automatic inference of upper bounds for recurrence relations in cost analysis. In: Alpuente, M., Vidal, G. (eds.) SAS 2008. LNCS, vol. 5079, pp. 221–237. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-69166-2_15
Albert, E., Arenas, P., Genaim, S., Puebla, G., Zanardini, D.: Cost analysis of object-oriented bytecode programs. Theor. Comput. Sci. 413(1), 142–159 (2012). https://doi.org/10.1016/j.tcs.2011.07.009
Albert, E., Genaim, S., Masud, A.N.: On the inference of resource usage upper and lower bounds. ACM Trans. Comput. Log. 14(3), 22:1–22:35 (2013). https://doi.org/10.1145/2499937.2499943
Albert, E., Bubel, R., Genaim, S., Hähnle, R., Puebla, G., Román-Díez, G.: A formal verification framework for static analysis. Softw. Syst. Model. 15(4), 987–1012 (2015). https://doi.org/10.1007/s10270-015-0476-y
Albert, E., Bofill, M., Borralleras, C., Martín-Martín, E., Rubio, A.: Resource analysis driven by (conditional) termination proofs. Theory Pract. Log. Program. 19(5–6), 722–739 (2019). https://doi.org/10.1017/S1471068419000152
Albert, E., Genaim, S., Martin-Martin, E., Merayo, A., Rubio, A.: Lower-bound synthesis using loop specialization and Max-SMT. In: Silva, A., Leino, K.R.M. (eds.) CAV 2021. LNCS, vol. 12760, pp. 863–886. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81688-9_40
Alias, C., Darte, A., Feautrier, P., Gonnord, L.: Multi-dimensional rankings, program termination, and complexity bounds of flowchart programs. In: Cousot, R., Martel, M. (eds.) SAS 2010. LNCS, vol. 6337, pp. 117–133. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15769-1_8
Avanzini, M., Moser, G.: A combination framework for complexity. In: van Raamsdonk, F. (ed.) RTA 2013. LIPIcs, vol. 21, pp. 55–70. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2013). https://doi.org/10.4230/LIPIcs.RTA.2013.55
Avanzini, M., Moser, G., Schaper, M.: TcT: Tyrolean complexity tool. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 407–423. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_24
Ben-Amram, A.M., Genaim, S.: Ranking functions for linear-constraint loops. J. ACM 61(4), 26:1–26:55 (2014). https://doi.org/10.1145/2629488
Ben-Amram, A.M., Genaim, S.: On multiphase-linear ranking functions. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10427, pp. 601–620. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63390-9_32
Ben-Amram, A.M., Doménech, J.J., Genaim, S.: Multiphase-linear ranking functions and their relation to recurrent sets. In: Chang, B.-Y.E. (ed.) SAS 2019. LNCS, vol. 11822, pp. 459–480. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32304-2_22
Borralleras, C., Brockschmidt, M., Larraz, D., Oliveras, A., Rodríguez-Carbonell, E., Rubio, A.: Proving termination through conditional termination. In: Legay, A., Margaria, T. (eds.) TACAS 2017. LNCS, vol. 10205, pp. 99–117. Springer, Heidelberg (2017). https://doi.org/10.1007/978-3-662-54577-5_6
Bradley, A.R., Manna, Z., Sipma, H.B.: The polyranking principle. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 1349–1361. Springer, Heidelberg (2005). https://doi.org/10.1007/11523468_109
Brockschmidt, M., Emmes, F., Falke, S., Fuhs, C., Giesl, J.: Analyzing runtime and size complexity of integer programs. ACM Trans. Program. Lang. Syst. 38(4), 13:1–13:50 (2016). https://doi.org/10.1145/2866575
Carbonneaux, Q., Hoffmann, J., Shao, Z.: Compositional certified resource bounds. In: Grove, D., Blackburn, S.M. (eds.) PLDI 2015, pp. 467–478 (2015). https://doi.org/10.1145/2737924.2737955
Clang Compiler. https://clang.llvm.org/
Doménech, J.J., Genaim, S.: “iRankFinder”. In: Lucas, S. (ed.) WST 2018, p. 83 (2018). http://wst2018.webs.upv.es/wst2018proceedings.pdf
Doménech, J.J., Gallagher, J.P., Genaim, S.: Control-flow refinement by partial evaluation, and its application to termination and cost analysis. Theory Pract. Log. Program. 19(5–6), 990–1005 (2019). https://doi.org/10.1017/S1471068419000310
Falke, S., Kapur, D., Sinz, C.: Termination analysis of C programs using compiler intermediate languages. In: Schmidt-Schauss, M. (ed.) RTA 2011. LIPIcs, vol. 10, pp. 41–50. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2011). https://doi.org/10.4230/LIPIcs.RTA.2011.41
Flores-Montoya, A., Hähnle, R.: Resource analysis of complex programs with cost equations. In: Garrigue, J. (ed.) APLAS 2014. LNCS, vol. 8858, pp. 275–295. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12736-1_15
Flores-Montoya, A.: Upper and lower amortized cost bounds of programs expressed as cost relations. In: Fitzgerald, J., Heitmeyer, C., Gnesi, S., Philippou, A. (eds.) FM 2016. LNCS, vol. 9995, pp. 254–273. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48989-6_16
Frohn, F., Giesl, J.: Complexity analysis for Java with AProVE. In: Polikarpova, N., Schneider, S. (eds.) IFM 2017. LNCS, vol. 10510, pp. 85–101. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66845-1_6
Frohn, F., Giesl, J., Hensel, J., Aschermann, C., Ströder, T.: Lower bounds for runtime complexity of term rewriting. J. Autom. Reason. 59(1), 121–163 (2016). https://doi.org/10.1007/s10817-016-9397-x
Frohn, F., Giesl, J.: Proving non-termination via loop acceleration. In: Barrett, C.W., Yang, J. (eds.) FMCAD 2019, pp. 221–230 (2019). https://doi.org/10.23919/FMCAD.2019.8894271
Frohn, F., Naaf, M., Brockschmidt, M., Giesl, J.: Inferring lower runtime bounds for integer programs. ACM Trans. Program. Lang. Syst. 42(3), 13:1–13:50 (2020). https://doi.org/10.1145/3410331
Gallagher, J.P.: Polyvariant program specialisation with property-based abstraction. In: VPT@Programming. EPTCS, vol. 299, pp. 34–48 (2019). https://doi.org/10.4204/EPTCS.299.6
Giesl, J., et al.: Analyzing program termination and complexity automatically with AProVE. J. Autom. Reason. 58(1), 3–31 (2016). https://doi.org/10.1007/s10817-016-9388-y
Giesl, J., Giesl, P., Hark, M.: Computing expected runtimes for constant probability programs. In: Fontaine, P. (ed.) CADE 2019. LNCS (LNAI), vol. 11716, pp. 269–286. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29436-6_16
Giesl, J., Rubio, A., Sternagel, C., Waldmann, J., Yamada, A.: The termination and complexity competition. In: Beyer, D., Huisman, M., Kordon, F., Steffen, B. (eds.) TACAS 2019. LNCS, vol. 11429, pp. 156–166. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17502-3_10
Hoffmann, J., Aehlig, K., Hofmann, M.: Multivariate amortized resource analysis. ACM Trans. Program. Lang. Syst. 34(3), 14:1–14:62 (2012). https://doi.org/10.1145/2362389.2362393
Hoffmann, J., Das, A., Weng, S.-C.: Towards automatic resource bound analysis for OCaml. In: Castagna, G., Gordon, A.D. (eds.) POPL 2017, pp. 359–373 (2017). https://doi.org/10.1145/3009837.3009842
Jeannet, B., Miné, A.: Apron: a library of numerical abstract domains for static analysis. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 661–667. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02658-4_52
Kaminski, B.L., Katoen, J.-P., Matheja, C.: Expected runtime analysis by program verification. In: Barthe, G., Katoen, J.-P., Silva, A. (eds.) Foundations of Probabilistic Programming, pp. 185–220. Cambridge University Press (2020). https://doi.org/10.1017/9781108770750.007
Königsberger, K.: Analysis 1, 6th edn. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-642-18490-1
Lattner, C., Adve, V.S.: LLVM: a compilation framework for lifelong program analysis & transformation. In: CGO 2004, pp. 75–88. IEEE Computer Society (2004). https://doi.org/10.1109/CGO.2004.1281665
Leike, J., Heizmann, M.: Ranking templates for linear loops. Log. Methods Comput. Sci. 11(1) (2015). https://doi.org/10.2168/LMCS-11(1:16)2015
Meyer, F., Hark, M., Giesl, J.: Inferring expected runtimes of probabilistic integer programs using expected sizes. In: TACAS 2021. LNCS, vol. 12651, pp. 250–269. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72016-2_14
Moser, G., Schaper, M.: From Jinja bytecode to term rewriting: a complexity reflecting transformation. Inf. Comput. 261, 116–143 (2018). https://doi.org/10.1016/j.ic.2018.05.007
de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
KoAT: Web Interface, Experiments, Source Code, Binary, and Docker Image. https://aprove-developers.github.io/ComplexityMprfCfr/
Noschinski, L., Emmes, F., Giesl, J.: Analyzing innermost runtime complexity of term rewriting by dependency pairs. J. Autom. Reason. 51(1), 27–56 (2013). https://doi.org/10.1007/s10817-013-9277-6
Podelski, A., Rybalchenko, A.: A complete method for the synthesis of linear ranking functions. In: Steffen, B., Levi, G. (eds.) VMCAI 2004. LNCS, vol. 2937, pp. 239–251. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24622-0_20
RaML (Resource Aware ML). https://www.raml.co/interface/
Sinn, M., Zuleger, F., Veith, H.: Complexity and resource bound analysis of imperative programs using difference constraints. J. Autom. Reason. 59(1), 3–45 (2017). https://doi.org/10.1007/s10817-016-9402-4
Srikanth, A., Sahin, B., Harris, W.R.: Complexity verification using guided theorem enumeration. In: Castagna, G., Gordon, A.D. (eds.) POPL 2017, pp. 639–652 (2017). https://doi.org/10.1145/3009837.3009864
TPDB (Termination Problems Data Base). https://github.com/TermCOMP/TPDB
Wang, D., Hoffmann, J.: Type-guided worst-case input generation. Proc. ACM Program. Lang. 3(POPL), 13:1–13:30 (2019). https://doi.org/10.1145/3290326
Yuan, Y., Li, Y., Shi, W.: Detecting multiphase linear ranking functions for single-path linear-constraint loops. Int. J. Softw. Tools Technol. Transfer 23(1), 55–67 (2019). https://doi.org/10.1007/s10009-019-00527-1
Acknowledgments
This paper is dedicated to Reiner Hähnle whose ground-breaking results on functional verification and symbolic execution of Java programs with the KeY tool [1], on automatic resource analysis [22], and on its combination with deductive verification (e.g., [5]) were a major inspiration for us. Reiner’s work motivated us to develop and improve KoAT such that it can be used as a backend for complexity analysis of languages like Java [24].
We are indebted to Samir Genaim and Jesús J. Doménech for their help and advice with integrating multiphase-linear ranking functions and partial evaluation into our approach, and for providing us with a suitable version of iRankFinder which we could use in KoAT’s backend. Moreover, we are grateful to Albert Rubio and Enrique Martín-Martín for providing us with a static binary of MaxCore, to Antonio Flores-Montoya and Florian Zuleger for their help in running CoFloCo and Loopus for our experiments, and to Florian Frohn for help and advice.
A Proofs
A.1 Proof of Lemma 18
We first present lemmas which give an upper and a lower bound for sums of powers. These lemmas will be needed in the proof of Lemma 18.
Lemma 28
(Upper Bound for Sums of Powers). For any \(i\ge 2\) and \(k \ge 1\) we have \(\sum _{j=1}^{k-1} j^{i-2} \le \tfrac{k^{i-1}}{i-1}\).
Proof
We have \(\sum _{j=1}^{k-1} j^{i-2} \; \le \; \sum _{j=1}^{k-1} \int _{j}^{j+1} x^{i-2} \, dx \; \le \; \int _{0}^{k} x^{i-2} \, dx \; = \; \tfrac{k^{i-1}}{i-1}\). \(\square \)
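As a concrete illustration of Lemma 28 (not part of the original proof), take \(i = 3\) and \(k = 5\): the left-hand side is \(\sum _{j=1}^{4} j = 10\), while the right-hand side is \(\tfrac{5^{2}}{2} = 12.5\), so the bound indeed holds.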
For the lower bound, we use the summation formula of Euler (see, e.g., [36]).
Lemma 29
(Summation Formula of Euler). We define the periodic function \(H : \mathbb {R}\rightarrow \mathbb {R}\) as \(H(x) = x - \lfloor x \rfloor - \tfrac{1}{2}\) if \(x \in \mathbb {R}\setminus \mathbb {Z}\) and as \(H(x) = 0\) if \(x\in \mathbb {Z}\). Note that H(x) is bounded by \(-\tfrac{1}{2}\) and \(\tfrac{1}{2}\). Then for any continuously differentiable function \(f: [1,k] \rightarrow \mathbb {C}\) with \(k\in \mathbb {N}\), we have \(\sum _{j=1}^{k} f(j) \; = \; \int _{1}^{k} f(x) \, dx + \tfrac{1}{2}\cdot (f(1)+f(k)) + \int _{1}^{k} H(x)\cdot f'(x)\, dx\).
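To illustrate the formula with a small instance, take \(f(x) = x\) and \(k = 3\): the right-hand side evaluates to \(\int _{1}^{3} x \, dx + \tfrac{1}{2}\cdot (1+3) + \int _{1}^{3} H(x)\, dx = 4 + 2 + 0 = 6\) (the last integral vanishes since \(H\) integrates to 0 over every unit interval), which indeed equals \(\sum _{j=1}^{3} j = 6\).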
This then leads to the following result.
Lemma 30
(Lower Bound for Sums of Powers). For any \(i\ge 2\) and \(k \ge 1\) we have \(\sum _{j=1}^{k-1} j^{i-1}\ge \tfrac{k^{i}}{i} - k^{i-1}\).
Proof
Consider \(f(x) = x^i\) with the derivative \(f'(x) = i \cdot x^{i-1}\). We get
\(\sum _{j=1}^{k} j^i \; = \; \int _{1}^{k} x^{i} \, dx + \tfrac{1}{2}\cdot (1 + k^i) + \int _{1}^{k} H(x)\cdot i \cdot x^{i-1}\, dx \; = \; \tfrac{k^{i+1}}{i+1} + R, \quad \text {where } R = - \tfrac{1}{i+1} + \tfrac{1 + k^i}{2} + \int _{1}^{k} H(x)\cdot i \cdot x^{i-1}\, dx.\)
Since \(\left| H(x)\right| \le \tfrac{1}{2}\), we have \(\left| \int _{1}^{k} H(x) \cdot i \cdot x^{i-1} \, dx\right| \; \le \; \tfrac{1}{2} \cdot \left| \int _{1}^{k} i \cdot x^{i-1} \, dx\right| \; =\; \tfrac{1}{2} \cdot i \cdot \left| \tfrac{k^i}{i} - \tfrac{1}{i}\right| \; = \; \tfrac{k^i - 1}{2}\). Thus, we obtain
\(- \tfrac{1}{i+1} + \tfrac{1 + k^i}{2} - \tfrac{k^i - 1}{2} \; \le \; R \; \le \; - \tfrac{1}{i+1} + \tfrac{1 + k^i}{2} + \tfrac{k^i - 1}{2}\)
or, equivalently \(- \tfrac{1}{i+1} + k^i \ge R \ge - \tfrac{1}{i+1} +1\). This implies \(k^i> R > 0\). Hence, we get \(\sum _{j=1}^{k} j^i \; = \; \tfrac{k^{i+1}}{i+1} + R \; \ge \; \tfrac{k^{i+1}}{i+1}\) and thus, \(\sum _{j=1}^{k-1} j^i \; = \; \sum _{j=1}^{k} j^i - k^i \; \ge \; \tfrac{k^{i+1}}{i+1} - k^i\). With the index shift \(i\rightarrow i - 1\) we finally obtain the lower bound \(\sum _{j=1}^{k-1} j^{i-1}\ge \tfrac{k^{i}}{i} - k^{i-1}\). \(\square \)
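As a quick check of Lemma 30, take \(i = 2\) and \(k = 4\): the left-hand side is \(\sum _{j=1}^{3} j = 6\) and the right-hand side is \(\tfrac{4^{2}}{2} - 4 = 4\), so the bound holds.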
Proof of Lemma 18. To ease notation, in this proof \(\ell _0\) does not denote the initial location of the program \(\mathcal {T}\), but an arbitrary location from \(\mathcal {L}\). Then we can write \((\ell _0,\sigma _0)\) instead of \((\ell ,\sigma )\), \((\ell _n,\sigma _n)\) instead of \((\ell ',\sigma ')\), and consider an evaluation
\((\ell _0,\sigma _0) \; (\rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} \circ \rightarrow _{\mathcal {T}'_{>}})^n \; (\ell _n,\sigma _n).\)
Let \(M = \max \{0,\sigma _0\left( f_1(\ell _0)\right) , \dots , \sigma _0\left( f_d(\ell _0)\right) \}\). We first prove that for all \(1 \le i \le d\) and all \(0 \le k \le n\), we have
\(\sigma _k\left( f_i(\ell _k)\right) \le -k \;\text { if } M = 0, \quad \text {and} \quad \sigma _k\left( f_i(\ell _k)\right) \le \gamma _i \cdot M \cdot k^{i-1} - \tfrac{k^i}{i!} \;\text { if } M > 0. \qquad (2)\)
The proof is done by induction on i. So in the base case, we have \(i = 1\). Since \(\gamma _1 = 1\), we have to show that \(\sigma _k\left( f_1(\ell _k)\right) \le M \cdot k^{0} - \tfrac{k^1}{1!} = M - k\).
For all \(0 \le j \le k-1\), the step from \((\ell _j,\sigma _j)\) to \((\ell _{j+1},\sigma _{j+1})\) corresponds to the evaluation of transitions from \(\mathcal {T}'\setminus \mathcal {T}'_{>}\) followed by a transition from \(\mathcal {T}'_{>}\), i.e., we have \((\ell _j,\sigma _j) \rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} (\ell '_j,\sigma '_j) \rightarrow _{\mathcal {T}'_{>}} (\ell _{j+1},\sigma _{j+1})\) for some configuration \((\ell '_j,\sigma '_j)\). Since f is an \(\text {M}\varPhi \text {RF}\) and all transitions in \(\mathcal {T}'\setminus \mathcal {T}'_{>}\) are non-increasing, we obtain \(\sigma _j(f_1(\ell _j)) \ge \sigma '_{j}(f_1(\ell '_{j}))\). Moreover, since the transitions in \(\mathcal {T}'_{>}\) are decreasing, we have \(\sigma '_j(f_{0}(\ell '_j)) + \sigma '_j(f_{1}(\ell '_j)) = \sigma '_j(f_{1}(\ell '_j)) \ge \sigma _{j+1}(f_{1}(\ell _{j+1})) + 1\). So together, this implies \(\sigma _j(f_{1}(\ell _j)) \ge \sigma _{j+1}(f_{1}(\ell _{j+1})) + 1\) and thus, \(\sigma _0\left( f_1(\ell _0)\right) \ge \sigma _1\left( f_1(\ell _1)\right) + 1\ge \ldots \ge \sigma _k\left( f_1(\ell _k)\right) + k\) or equivalently, \(\sigma _0\left( f_1(\ell _0)\right) - k\ge \sigma _k\left( f_1(\ell _k)\right) \). Furthermore, we have \(\sigma _0\left( f_1(\ell _0)\right) \le \max \{0,\sigma _0\left( f_1(\ell _0)\right) ,\dots ,\sigma _0\left( f_d(\ell _0)\right) \} = M\). Hence, we obtain \(\sigma _k\left( f_1(\ell _k)\right) \le \sigma _0\left( f_1(\ell _0)\right) - k \le M - k\). So in particular, if \(M = 0\), then we have \(\sigma _k(f_1(\ell _k)) \le -k\).
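For instance, if \(M = 3\), then the above chain yields \(\sigma _4\left( f_1(\ell _4)\right) \le 3 - 4 = -1 < 0\) after the fourth step with a transition from \(\mathcal {T}'_{>}\), i.e., \(f_1\) has already become negative.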
In the induction step, we assume that for all \(0 \le k \le n\), we have \(\sigma _k(f_{i-1}(\ell _k)) \, \le \, -k\) if \(M = 0\) and \(\sigma _k(f_{i-1}(\ell _k)) \; \le \; \gamma _{i-1} \cdot M \cdot k^{i-2} - \tfrac{k^{i-1}}{(i-1)!}\) if \(M > 0\). To show that the inequations also hold for i, we first transform \(\sigma _k(f_{i}(\ell _k))\) into a telescoping sum.
For all \(0 \le j \le k-1\), the step from \((\ell _j,\sigma _j)\) to \((\ell _{j+1},\sigma _{j+1})\) again has the form \((\ell _j,\sigma _j) \rightarrow ^*_{\mathcal {T}'\setminus \mathcal {T}'_{>}} (\ell '_j,\sigma '_j) \rightarrow _{\mathcal {T}'_{>}} (\ell _{j+1},\sigma _{j+1})\) for some configuration \((\ell '_j,\sigma '_j)\). Since f is an \(\text {M}\varPhi \text {RF}\) and all transitions in \(\mathcal {T}'\setminus \mathcal {T}'_{>}\) are non-increasing, we obtain \(\sigma _j(f_{i-1}(\ell _j)) \ge \sigma '_{j}(f_{i-1}(\ell '_{j}))\) and \(\sigma _j(f_i(\ell _j)) \ge \sigma '_{j}(f_i(\ell '_{j}))\). Moreover, since the transitions in \(\mathcal {T}'_{>}\) are decreasing, we have \(\sigma '_j(f_{i-1}(\ell '_j)) + \sigma '_j(f_{i}(\ell '_j)) \ge \sigma _{j+1}(f_{i}(\ell _{j+1})) + 1\). So together, this implies \(\sigma _j(f_{i-1}(\ell _j)) + \sigma _j(f_{i}(\ell _j)) \ge \sigma _{j+1}(f_{i}(\ell _{j+1})) + 1\) or equivalently, \(\sigma _{j + 1}\left( f_i (\ell _{j+1})\right) - \sigma _{j}\left( f_i (\ell _j)\right) < \sigma _{j}\left( f_{i-1} (\ell _{j})\right) \). Hence, for \(k \ge 1\) we obtain
\(\sigma _k\left( f_i(\ell _k)\right) \; = \; \sigma _0\left( f_i(\ell _0)\right) + \sum _{j=0}^{k-1} \left( \sigma _{j+1}\left( f_i(\ell _{j+1})\right) - \sigma _{j}\left( f_i(\ell _{j})\right) \right) \; < \; \sigma _0\left( f_i(\ell _0)\right) + \sum _{j=0}^{k-1} \sigma _{j}\left( f_{i-1}(\ell _{j})\right) .\)
If \(M = 0\), then we obviously have \(\sigma _{0}(f_i (\ell _{0})) \le 0\) for all \(1\le i \le d\). For \(k \ge 1\), we obtain
\(\sigma _k\left( f_i(\ell _k)\right) \; < \; \sigma _0\left( f_i(\ell _0)\right) + \sum _{j=0}^{k-1} \sigma _{j}\left( f_{i-1}(\ell _{j})\right) \; \le \; \sum _{j=0}^{k-1} (-j) \; = \; - \tfrac{k \cdot (k-1)}{2} \; \le \; -(k-1) \; = \; -k + 1.\)
Hence, we have \(\sigma _k\left( f_i(\ell _k)\right) < - k + 1\) and thus, \(\sigma _k\left( f_i(\ell _k)\right) \le -k\).
If \(M > 0\), then we obtain

Hence, (2) is proved.
In the case \(M = 0\), (2) implies \(\sigma _n(f_i(\ell _n)) \le -n \le -\beta = -1 < 0\) for all \(1\le i \le d\) which proves the lemma.
Hence, it remains to regard the case \(M > 0\). Now (2) implies
\(\sigma _n\left( f_i(\ell _n)\right) \; \le \; \gamma _i \cdot M \cdot n^{i-1} - \tfrac{n^i}{i!} \; = \; n^{i-1} \cdot \left( \gamma _i \cdot M - \tfrac{n}{i!}\right) \quad \text {for all } 1 \le i \le d.\)
We now prove that for \(i > 1\) we always have \(i! \cdot \gamma _i \ge (i-1)! \cdot \gamma _{i-1}\).
Thus, \(d! \cdot \gamma _d \ge i! \cdot \gamma _i\) holds for all \(1 \le i \le d\).
Hence, for \(n \ge \beta = 1 + d! \cdot \gamma _d \cdot M\) we obtain:
\(\sigma _n\left( f_i(\ell _n)\right) \; \le \; n^{i-1} \cdot \left( \gamma _i \cdot M - \tfrac{n}{i!}\right) \; \le \; n^{i-1} \cdot \left( \gamma _i \cdot M - \tfrac{1 + d! \cdot \gamma _d \cdot M}{i!}\right) \; \le \; n^{i-1} \cdot \left( \gamma _i \cdot M - \tfrac{1 + i! \cdot \gamma _i \cdot M}{i!}\right) \; = \; - \tfrac{n^{i-1}}{i!} \; < \; 0\)
for all \(1 \le i \le d\), which proves the lemma in the case \(M > 0\).
Finally, to show that \(\beta \in \mathbb {N}\), note that by induction on i, one can easily prove that \((i-1)! \cdot \gamma _i \in \mathbb {N}\) holds for all \(i \ge 1\). Hence, in contrast to \(\gamma _i\), the number \(i! \cdot \gamma _i\) is a natural number for all \(i \in \mathbb {N}\). This implies \(\beta \in \mathbb {N}\). \(\square \)
A.2 Proof of Theorem 20
Proof
We prove Theorem 20 by showing that for all \(t\in \mathcal {T}\) and all \(\sigma _0 \in \varSigma \) we have
\(\left| \sigma _0\right| \left( {\mathcal {RB}}'(t)\right) \; \ge \; \sup \{ k \in \mathbb {N}\mid \exists \, \ell , \sigma . \; (\ell _0, \sigma _0) \, (\rightarrow ^* \circ \rightarrow _{t})^k \, (\ell , \sigma )\}. \qquad (5)\)
The case \(t \notin \mathcal {T}'_{>}\) is trivial, since \({\mathcal {RB}}'(t) = {\mathcal {RB}}(t)\) and \({\mathcal {RB}}\) is a runtime bound.
Now we prove (5) for a transition \(t_>\in \mathcal {T}'_{>}\), i.e., we show that for all \(\sigma _0 \in \varSigma \) we have
\(\left| \sigma _0\right| \left( {\mathcal {RB}}'(t_>)\right) \; \ge \; \sup \{ k \in \mathbb {N}\mid \exists \, \ell , \sigma . \; (\ell _0, \sigma _0) \, (\rightarrow ^* \circ \rightarrow _{t_>})^k \, (\ell , \sigma )\}.\)
So let \((\ell _0, \sigma _0) \, (\rightarrow ^* \circ \rightarrow _{t_>})^k \, (\ell , \sigma )\); we have to show \(\left| \sigma _0\right| \left( {\mathcal {RB}}'(t_>)\right) \ge k\). If \(k = 0\), then we clearly have \(\left| \sigma _0\right| \left( {\mathcal {RB}}'\left( t_{>}\right) \right) \ge 0 = k\). Hence, we consider \(k > 0\). We represent the evaluation as follows:
\((\ell _0, \sigma _0) \rightarrow ^*_{\mathcal {T}\setminus \mathcal {T}'} ({\tilde{\ell }}_1, {\tilde{\sigma }}_1) \rightarrow ^*_{\mathcal {T}'} (\ell _1, \sigma _1) \rightarrow ^*_{\mathcal {T}\setminus \mathcal {T}'} ({\tilde{\ell }}_2, {\tilde{\sigma }}_2) \rightarrow ^*_{\mathcal {T}'} (\ell _2, \sigma _2) \rightarrow ^*_{\mathcal {T}\setminus \mathcal {T}'} \cdots \rightarrow ^*_{\mathcal {T}'} (\ell _m, \sigma _m) = (\ell , \sigma ).\)
So for the evaluation from \((\ell _i, \sigma _i)\) to \(({\tilde{\ell }}_{i+1}, {\tilde{\sigma }}_{i+1})\) we only use transitions from \(\mathcal {T}\setminus \mathcal {T}'\), and for the evaluation from \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\) to \((\ell _i, \sigma _i)\) we only use transitions from \(\mathcal {T}'\). Thus, \(t_>\) can only occur in the following finite sequences of evaluation steps:
\(({\tilde{\ell }}_i, {\tilde{\sigma }}_i) \rightarrow ^{k_i'}_{\mathcal {T}'} (\ell _i, \sigma _i) \quad \text {for } 1 \le i \le m. \qquad (6)\)
For every \(1 \le i \le m\), let \(k_i \le k_i'\) be the number of times that \(t_>\) is used in the evaluation (6). Clearly, we have
\(k = \sum _{i=1}^{m} k_i.\)
By Lemma 18, all functions \(f_1,\dots ,f_d\) are negative after executing \(t_>\) at least \(1 + d! \cdot \gamma _d \cdot \max \{0, {\tilde{\sigma }}_i(f_1({\tilde{\ell }}_i)),\dots , {\tilde{\sigma }}_i(f_d({\tilde{\ell }}_i))\}\) times in an evaluation with \(\mathcal {T}'\). If all the functions \(f_1, \dots , f_d\) are negative, then \(t_>\) cannot be executed anymore as f is an \(\text {M}\varPhi \text {RF}\) for \(\mathcal {T}'_{>}\) with \(t_> \in \mathcal {T}'_{>}\) and \(\mathcal {T}'\). Thus, for all \(1 \le i \le m\) we have
\(k_i \; \le \; 1 + d! \cdot \gamma _d \cdot \max \{0, {\tilde{\sigma }}_i(f_1({\tilde{\ell }}_i)),\dots , {\tilde{\sigma }}_i(f_d({\tilde{\ell }}_i))\}.\)
Let \(t_i\) be the entry transition reaching \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\), i.e., \({\tilde{\ell }}_i \in \mathcal {E}_{\mathcal {T}'}\) and \(t_i \in \mathcal {T}_{{\tilde{\ell }}_i}\). As \((\ell _0, \sigma _0) \rightarrow ^*_\mathcal {T}\circ \rightarrow _{t_i} ({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\), by Definition 12 we have \(\left| \sigma _0\right| \left( {\mathcal {SB}}(t_i, v)\right) \ge |{\tilde{\sigma }}_i(v)|\) for all \(v \in \mathcal {PV}\) and thus,

In the last part of this proof we need to analyze how often such evaluations \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i) \rightarrow ^*_{\mathcal {T}'} (\ell _i, \sigma _i)\) can occur. Again, let \(t_i\) be the entry transition reaching \(({\tilde{\ell }}_i, {\tilde{\sigma }}_i)\). Every entry transition \(t_i\) can occur at most \(\left| \sigma _0\right| \left( {\mathcal {RB}}(t_i)\right) \) times in the complete evaluation, as \({\mathcal {RB}}\) is a runtime bound. Thus, we have

\(\square \)
A.3 Proof of Theorem 24
Let \(\mathcal {P}' = (\mathcal {PV},\mathcal {L}',\ell _0,\mathcal {T}')\). First note that for every evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}'} (\ell ',\sigma )\) there is obviously also a corresponding evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}} (\ell ,\sigma )\). To obtain the evaluation with \(\mathcal {T}\) one simply has to remove the labels from the locations. Then the claim follows because the guards of the transitions in \(\mathcal {T}'\) always imply the guards of the respective original transitions in \(\mathcal {T}\) and the updates of the transitions have not been modified in the transformation from \(\mathcal {T}\) to \(\mathcal {T}'\).
For the other direction, we show by induction on \(k \in \mathbb {N}\) that for every evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}} (\ell ,\sigma )\) there is a corresponding evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}'} (\ell ',\sigma )\) where either \(\ell ' = \ell \) or \(\ell ' = \langle \ell , \varphi \rangle \) for some constraint \(\varphi \) with \(\sigma (\varphi ) = \texttt {true}\).
In the induction base, we have \(k = 0\) and the claim is trivial. In the induction step, we have \(k > 0\) and the evaluation has the form
\((\ell _0,\sigma _0)\rightarrow _{t_1}(\ell _1,\sigma _1) \rightarrow _{t_2} \cdots \rightarrow _{t_{k-1}}(\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\)
with \(t_1, \ldots , t_k \in \mathcal {T}\). By the induction hypothesis, there is a corresponding evaluation
\((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\ell _{k-1}',\sigma _{k-1})\)
with \(t_1', \ldots , t_{k-1}' \in \mathcal {T}'\) where \(\ell _{k-1}' = \ell _{k-1}\) or \(\ell _{k-1}' = \langle \ell _{k-1}, \varphi \rangle \) for some constraint \(\varphi \) with \(\sigma _{k-1}(\varphi ) = \texttt {true}\). We distinguish two cases:
- Case 1: \(t_{k}\not \in \mathcal {T}_{ SCC }\). If \(\ell _{k-1}' = \ell _{k-1}\) and \(\ell _k \notin \mathcal {E}_{\mathcal {T}_{ SCC }}\), then \(t_k\) has not been modified in the transformation from \(\mathcal {P}\) to \(\mathcal {P}'\). Thus, we have the evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\ell _{k-1}',\sigma _{k-1}) = (\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\) with \(t_k \in \mathcal {T}'\). If \(\ell _{k-1}' = \ell _{k-1}\) and \(\ell _k \in \mathcal {E}_{\mathcal {T}_{ SCC }}\), then for \(t_k = (\ell _{k-1},\tau ,\eta ,\ell _{k})\), we set \(\ell _k' = \langle \ell _k, \texttt {true}\rangle \) and obtain that \(t_k' = (\ell _{k-1},\tau ,\eta ,\ell _{k}') \in \mathcal {T}'\). So we get the evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\ell _{k-1}',\sigma _{k-1}) = (\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k'} (\ell _{k}',\sigma _{k})\). Finally, we regard the case \(\ell _{k-1}' = \langle \ell _{k-1}, \varphi \rangle \) where \(\sigma _{k-1}(\varphi ) = \texttt {true}\). As \(t_k = (\ell _{k-1},\tau ,\eta ,\ell _{k})\in \mathcal {T}\setminus \mathcal {T}_{ SCC }\), and \(\mathcal {T}_{ SCC }\) is an SCC, there is a \(t_k' = (\langle \ell _{k-1}, \varphi \rangle , \varphi \wedge \tau , \eta , \ell _{k}) \in \mathcal {T}'\). Then \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'} (\ell _{k-1}',\sigma _{k-1}) = (\langle \ell _{k-1}, \varphi \rangle ,\sigma _{k-1}) \rightarrow _{t_{k}'}(\ell _{k},\sigma _{k})\) is an evaluation with \(\mathcal {T}'\). The evaluation step with \(t_k'\) is possible, since \(\sigma _{k-1}(\varphi ) = \texttt {true}\) and \(\sigma _{k-1}(\tau ) = \texttt {true}\) (due to the evaluation step \((\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\)). Note that the step with \(t_k'\) also results in the state \(\sigma _{k}\), because both \(t_k\) and \(t_k'\) have the same update \(\eta \).
- Case 2: \(t_{k}\in \mathcal {T}_{ SCC }\). Here, \(\ell _{k-1}'\) has the form \(\langle \ell _{k-1}, \varphi \rangle \) where \(\sigma _{k-1}(\varphi ) = \texttt {true}\). As \(\ell _k\) is part of the SCC and hence has an incoming transition from \(\mathcal {T}_{ SCC }\), at some point it is refined by Algorithm 2. Thus, for \(t_k = (\ell _{k-1}, \tau ,\eta , \ell _{k})\), there is some \(t_{k}' = \left( \langle \ell _{k-1}, \varphi \rangle ,\varphi \wedge \tau ,\eta , \left\langle \ell _{k},\alpha _{\ell _{k}}(\varphi _{ new })\right\rangle \right) \in \mathcal {T}'\) where \(\alpha _{\ell _{k}}(\varphi _{ new })\) is constructed as in Line 8. This leads to the corresponding evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1'}(\ell _1',\sigma _1) \rightarrow _{t_2'} \cdots \rightarrow _{t_{k-1}'}(\langle \ell _{k-1}, \varphi \rangle ,\sigma _{k-1})\rightarrow _{t_{k}'}(\langle \ell _{k}, \alpha _{\ell _{k}}(\varphi _{ new })\rangle ,\sigma _{k})\). Again, the evaluation step with \(t_k'\) is possible, since \(\sigma _{k-1}(\varphi ) = \texttt {true}\) and \(\sigma _{k-1}(\tau ) = \texttt {true}\) (due to the evaluation step \((\ell _{k-1},\sigma _{k-1}) \rightarrow _{t_k} (\ell _{k},\sigma _{k})\)). And again, the step with \(t_k'\) also results in the state \(\sigma _{k}\), because both \(t_k\) and \(t_k'\) have the same update \(\eta \). Finally, note that we have \(\sigma _k(\alpha _{\ell _{k}}(\varphi _{ new })) = \texttt {true}\). The reason is that \(\models (\varphi \wedge \tau ) \rightarrow \eta (\varphi _{ new })\) and \(\sigma _{k-1}(\varphi \wedge \tau )= \texttt {true}\) implies \(\sigma _{k-1}(\eta (\varphi _{ new })) = \texttt {true}\). Hence, we also have \(\sigma _{k}(\varphi _{ new }) = \sigma _{k-1}(\eta (\varphi _{ new })) = \texttt {true}\). Therefore, \(\models \varphi _{ new } \rightarrow \alpha _{\ell _{k}}(\varphi _{ new })\) implies \(\sigma _{k}(\alpha _{\ell _{k}}(\varphi _{ new }))= \texttt {true}\). \(\square \)
A.4 Proof of Theorem 25
Let \(\mathcal {P}' = (\mathcal {PV},\mathcal {L}',\ell _0,\mathcal {T}')\) result from \(\mathcal {P}\) by Algorithm 3. As in the proof of Theorem 24, for every evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}'} (\ell ',\sigma )\) there is also a corresponding evaluation \((\ell _0,\sigma _0) \rightarrow ^{k}_{\mathcal {T}} (\ell ,\sigma )\), which is obtained by removing the labels from the locations.
For the other direction, we show that for each evaluation \((\ell _0,\sigma _0)\rightarrow _{t_1}(\ell _1,\sigma _1)\rightarrow _{t_2} \cdots \rightarrow _{t_k}(\ell _k,\sigma _k)\) with \(t_1,\ldots , t_k \in \mathcal {T}\) there is a corresponding evaluation \((\ell _0,\sigma _0)\rightarrow _{\mathcal {T}'}^k (\ell _k',\sigma _k)\) in \(\mathcal {P}'\). To obtain this evaluation, we separately handle all evaluation fragments that use programs \(\mathcal {Q}\) from \(\mathcal {S}\). This is possible, since different programs in \(\mathcal {S}\) do not share locations, i.e., entry and outgoing transitions of \(\mathcal {Q}\) cannot be part of another \(\mathcal {Q}'\) from \(\mathcal {S}\). Such an evaluation fragment has the form
\((\ell _i,\sigma _i)\rightarrow _{t_{i+1}}(\ell _{i+1},\sigma _{i+1})\rightarrow _{t_{i+2}} \cdots \rightarrow _{t_{n-1}}(\ell _{n-1},\sigma _{n-1})\rightarrow _{t_{n}}(\ell _{n},\sigma _{n}) \qquad (9)\)
where \(t_{i+1}\) is an entry transition to \(\mathcal {Q}\), \(t_n\) is an outgoing transition from \(\mathcal {Q}\), and the transitions \(t_{i+2}, \ldots , t_{n-1}\) belong to \(\mathcal {Q}\). By Theorem 24 it follows that there is a corresponding evaluation using the transitions \(t_{i+2}', \ldots , t_{n-1}'\) from the refined version of \(\mathcal {Q}\), such that with the new redirected entry transition \(t_{i+1}'\) and the new redirected outgoing transition \(t_n'\) we have
\((\ell _i,\sigma _i)\rightarrow _{t_{i+1}'}(\ell _{i+1}',\sigma _{i+1})\rightarrow _{t_{i+2}'} \cdots \rightarrow _{t_{n-1}'}(\ell _{n-1}',\sigma _{n-1})\rightarrow _{t_{n}'}(\ell _{n},\sigma _{n}). \qquad (10)\)
Thus, by substituting each evaluation fragment (9) in an evaluation of \(\mathcal {P}\) by its refinement (10), we get a corresponding evaluation in \(\mathcal {P}'\). \(\square \)
Copyright information
© 2022 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Giesl, J., Lommen, N., Hark, M., Meyer, F. (2022). Improving Automatic Complexity Analysis of Integer Programs. In: Ahrendt, W., Beckert, B., Bubel, R., Johnsen, E.B. (eds) The Logic of Software. A Tasting Menu of Formal Methods. Lecture Notes in Computer Science, vol 13360. Springer, Cham. https://doi.org/10.1007/978-3-031-08166-8_10
DOI: https://doi.org/10.1007/978-3-031-08166-8_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-08165-1
Online ISBN: 978-3-031-08166-8