Convergence Analysis of Spatially Adaptive Rothe Methods

Abstract

This paper is concerned with the convergence analysis of the horizontal method of lines for evolution equations of parabolic type. Following a semidiscretization in time by \(S\)-stage one-step methods, the resulting elliptic stage equations per time step are solved with adaptive space discretization schemes. We investigate how the tolerances in each time step must be tuned in order to preserve the asymptotic temporal convergence order of the time stepping even in the presence of spatial discretization errors. In particular, we discuss the case of linearly implicit time integrators and adaptive wavelet discretizations in space. Using concepts from regularity theory for partial differential equations and from nonlinear approximation theory, we derive an upper bound on the number of degrees of freedom that the overall scheme needs in order to adaptively approximate the solution up to a prescribed tolerance.

References

  1. I. Babuška, Advances in the p and h-p versions of the finite element method. A survey, Numerical Mathematics, Proc. Int. Conf., Singapore 1988, Int. Ser. Numer. Math. 86 (1988), 31–46.

  2. I. Babuška and W.C. Rheinboldt, A survey of a posteriori error estimators and adaptive approaches in the finite element method, Finite element methods. Proc. China-France Symp., Beijing/China (1983), 1–56.

  3. R.E. Bank and A. Weiser, Some a posteriori error estimators for elliptic partial differential equations, Math. Comput. 44 (1985), 283–301.

  4. J. Bergh and J. Löfström, Interpolation Spaces. An Introduction, Springer, Berlin, 1976.

  5. P. Binev, W. Dahmen, and R. DeVore, Adaptive finite element methods with convergence rates, Numer. Math. 97 (2004), 219–268.

  6. F.A. Bornemann, B. Erdmann, and R. Kornhuber, A posteriori error estimates for elliptic problems in two and three space dimensions, SIAM J. Numer. Anal. 33 (1996), 1188–1204.

  7. C. Canuto, A. Tabacco, and K. Urban, The wavelet element method. II: Realization and additional features in 2D and 3D, Appl. Comput. Harmon. Anal. 8 (2000), 123–165.

  8. A. Cohen, Wavelet Methods in Numerical Analysis, North-Holland/Elsevier, Amsterdam, 2000.

  9. A. Cohen, W. Dahmen, and R.A. DeVore, Adaptive wavelet methods for elliptic operator equations: Convergence rates, Math. Comput. 70 (2001), 27–75.

  10. A. Cohen, W. Dahmen, and R.A. DeVore, Adaptive wavelet methods. II: Beyond the elliptic case, Found. Comput. Math. 2 (2002), 203–245.

  11. M. Crouzeix and V. Thomée, On the discretization in time of semilinear parabolic equations with nonsmooth initial data, Math. Comput. 49 (1987), 359–377.

  12. S. Dahlke, Besov regularity for elliptic boundary value problems in polygonal domains, Appl. Math. Lett. 12 (1999), 31–36.

  13. S. Dahlke, W. Dahmen, and R.A. DeVore, Nonlinear approximation and adaptive techniques for solving elliptic operator equations, Multiscale wavelet methods for partial differential equations (W. Dahmen, A. Kurdila, and P. Oswald, eds.), Academic Press, San Diego, 1997, pp. 237–284.

  14. S. Dahlke, W. Dahmen, R. Hochmuth, and R. Schneider, Stable multiscale bases and local error estimation for elliptic problems, Appl. Numer. Math. 23 (1997), 21–47.

  15. S. Dahlke and R.A. DeVore, Besov regularity for elliptic boundary value problems, Commun. Partial Differ. Equations 22 (1997), 1–16.

  16. S. Dahlke, M. Fornasier, T. Raasch, R. Stevenson, and M. Werner, Adaptive frame methods for elliptic operator equations: The steepest descent approach, IMA J. Numer. Anal. 27 (2007), 717–740.

  17. S. Dahlke, E. Novak, and W. Sickel, Optimal approximation of elliptic problems by linear and nonlinear mappings. I, J. Complexity 22 (2006), 29–49.

  18. S. Dahlke and W. Sickel, On Besov regularity of solutions to nonlinear elliptic partial differential equations, Rev. Mat. Complut. 26 (2013), 115–145.

  19. W. Dahmen and R. Schneider, Wavelets with complementary boundary conditions – function spaces on the cube, Result. Math. 34 (1998), 255–293.

  20. W. Dahmen and R. Schneider, Composite wavelet bases for operator equations, Math. Comput. 68 (1999), 1533–1567.

  21. W. Dahmen and R. Schneider, Wavelets on manifolds. I: Construction and domain decomposition, SIAM J. Math. Anal. 31 (1999), 184–230.

  22. R.A. DeVore, Nonlinear approximation, Acta Numerica 7 (1998), 51–150.

  23. W. Dörfler, A convergent adaptive algorithm for Poisson’s equation, SIAM J. Numer. Anal. 33 (1996), 737–785.

  24. K. Eriksson, An adaptive finite element method with efficient maximum norm error control for elliptic problems, Math. Models Methods Appl. Sci. 4 (1994), 313–329.

  25. K. Eriksson and C. Johnson, Adaptive finite element methods for parabolic problems. I: A linear model problem, SIAM J. Numer. Anal. 28 (1991), 43–77.

  26. K. Eriksson and C. Johnson, Adaptive finite element methods for parabolic problems. II: Optimal error estimates in \(L_\infty L_2\) and \(L_\infty L_\infty \), SIAM J. Numer. Anal. 32 (1995), 706–740.

  27. K. Eriksson, C. Johnson, and S. Larsson, Adaptive finite element methods for parabolic problems. VI: Analytic semigroups, SIAM J. Numer. Anal. 35 (1998), 1315–1325.

  28. M. Hanke-Bourgeois, Foundations of Numerical Mathematics and Scientific Computing, Vieweg+Teubner, Wiesbaden, 2009.

  29. P. Hansbo and C. Johnson, Adaptive finite element methods in computational mechanics, Comput. Methods Appl. Mech. Eng. 101 (1992), 143–181.

  30. D. Jerison and C.E. Kenig, The inhomogeneous Dirichlet problem in Lipschitz domains, J. Funct. Anal. 130 (1995), 161–219.

  31. C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method, Dover Publications, Mineola, 2009.

  32. T. Kato, Perturbation Theory for Linear Operators. 2nd corr. print. of the 2nd ed., Springer, Berlin, 1984.

  33. J. Lang, Adaptive Multilevel Solution of Nonlinear Parabolic PDE Systems. Theory, Algorithm, and Applications, Springer, Berlin, 2001.

  34. C. Lubich and A. Ostermann, Linearly implicit time discretization of nonlinear parabolic equations, IMA J. Numer. Anal. 15 (1995), 555–583.

  35. A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Applied Mathematical Sciences 44, Springer, New York, 1983.

  36. R. Stevenson, Optimality of a standard adaptive finite element method, Found. Comput. Math. 7 (2007), 245–269.

  37. V. Thomée, Galerkin Finite Element Methods for Parabolic Problems, Springer, Berlin, 2006.

  38. R. Verfürth, A posteriori error estimation and adaptive mesh-refinement techniques, J. Comput. Appl. Math. 50 (1994), 67–83.

  39. R. Verfürth, A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques, Wiley–Teubner Series Advances in Numerical Mathematics, Wiley, Chichester / B. G. Teubner, Stuttgart, 1996.

  40. J.G. Verwer, E.J. Spee, J.G. Blom, and W. Hundsdorfer, A second-order Rosenbrock method applied to photochemical dispersion problems, SIAM J. Sci. Comput. 20 (1999), 1456–1480.

Acknowledgments

This work was supported by the Deutsche Forschungsgemeinschaft (DFG, Grants DA 360/12-2, DA 360/13-2, RI 599/4-2, SCHI 419/5-2), a doctoral scholarship of the Philipps-Universität Marburg, and the LOEWE Center for Synthetic Microbiology (Synmikro), Marburg.

Author information

Correspondence to Stefan Kinzel.

Additional information

Communicated by Andrew Stuart.

Appendices

Appendix 1: Variational Operators

In the preceding sections, we frequently considered the same problem on different spaces; e.g., we switched from an operator equation defined on \(V\) to the same equation defined on \(U\). In this appendix, we clarify in more detail why this is justified.

Let \((V,\langle \cdot ,\cdot \rangle _{V})\) be a separable real Hilbert space. Furthermore, let

$$\begin{aligned} a(\cdot ,\cdot ): V\times V\rightarrow \mathbb {R}\end{aligned}$$

be a continuous, symmetric, and elliptic bilinear form. This means that there exist two constants \(c_{\text {ell}}, C_{\text {ell}}>0\) such that for arbitrary \(u,v\in V\) the bilinear form satisfies the following conditions:

$$\begin{aligned} c_{\text {ell}}\, \Vert u\Vert _{V}^2\le a(u,u),\quad a(u,v)=a(v,u),\quad |a(u,v)|\le C_{\text {ell}}\, \Vert u\Vert _{V} \Vert v\Vert _{V}. \end{aligned}$$
(70)
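
A prototypical example, recalled here for orientation only: for a bounded Lipschitz domain \(\mathcal O\subset \mathbb {R}^d\) and \(V=H^1_0(\mathcal O)\), the Dirichlet form

$$\begin{aligned} a(u,v)=\int _{\mathcal O}\nabla u\cdot \nabla v\,\mathrm {d}x \end{aligned}$$

satisfies (70): symmetry is evident, continuity follows from the Cauchy–Schwarz inequality, and the ellipticity estimate \(a(u,u)\ge c_{\text {ell}}\,\Vert u\Vert _{H^1(\mathcal O)}^2\) follows from the Poincaré inequality.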

Then, by the Lax–Milgram theorem, the operator

$$\begin{aligned} A:V&\rightarrow V^*\nonumber \\ v&\mapsto Av:= -a(v,\cdot ) \end{aligned}$$
(71)

is boundedly invertible. Let us now assume that \(V\) is densely embedded into a real Hilbert space \((U,\langle \cdot ,\cdot \rangle _{U})\) via a linear embedding \(j\). We write

$$\begin{aligned} V\overset{j}{\hookrightarrow } U. \end{aligned}$$

Furthermore, we identify the Hilbert space \(U\) with its topological dual space \(U^*\) via the Riesz isomorphism \( U\ni u\mapsto \varPhi u:=\langle u,\cdot \rangle _U\in U^*\). The adjoint map \(j^*:U^*\rightarrow V^*\) of \(j\) embeds \(U^*\) densely into the topological dual \(V^*\) of \(V\). All in all we have a so-called Gelfand triple \((V,U,V^*)\):

$$\begin{aligned} V\overset{j}{\hookrightarrow } U \overset{\varPhi }{\equiv } U^*\overset{\,\,\,j^*}{\hookrightarrow }V^*. \end{aligned}$$
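
In the example above, the canonical choice is \(U=L_2(\mathcal O)\) with \(j\) the identity embedding, which yields the familiar Gelfand triple

$$\begin{aligned} H^1_0(\mathcal O)\hookrightarrow L_2(\mathcal O)\equiv L_2(\mathcal O)^*\hookrightarrow H^{-1}(\mathcal O). \end{aligned}$$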

Using \(\langle \cdot ,\cdot \rangle _{V^*\times V}\) to denote the duality pairing between \(V^*\) and \(V\), we have

$$\begin{aligned} \langle j(v_1),j(v_2)\rangle _U = \langle j^* \, \varPhi \, j(v_1),v_2\rangle _{V^*\times V} \qquad \text {for all} \quad v_1,v_2\in V. \end{aligned}$$
(72)

In this setting, we can consider the operator \(A:V\rightarrow V^*\) as an unbounded operator on the intermediate space \(U\). More precisely, set

$$\begin{aligned} D(A):=D(A;U):=\{u\in V\,:\, Au \in j^*\varPhi (U)\}, \end{aligned}$$

and define the operator

$$\begin{aligned} \tilde{A}:D(\tilde{A}):=j(D(A;U))\subseteq U&\rightarrow U\\ u&\mapsto \tilde{A}u:= \varPhi ^{-1} {j^*}^{-1} A \, j^{-1} u. \end{aligned}$$

Such an (unbounded) linear operator is sometimes called variational. It is densely defined since \(U^*\) is densely embedded in \(V^*\). Furthermore, the symmetry of the bilinear form \(a(\cdot ,\cdot )\) implies that \(\tilde{A}\) is self-adjoint. At the same time, it is strictly negative definite because of the ellipticity of \(a\). Moreover, since \(A:V\rightarrow V^*\) is boundedly invertible, the operator \(\tilde{A}^{-1}:U\rightarrow U\), defined by \(\tilde{A}^{-1}:= j A^{-1} j^*\varPhi \), is the bounded inverse of \(\tilde{A}\). It is compact if the embedding \(j\) of \(V\) in \(U\) is compact.
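
Continuing the example above with \(V=H^1_0(\mathcal O)\), \(U=L_2(\mathcal O)\), and the Dirichlet form, \(\tilde{A}\) is the \(L_2(\mathcal O)\)-realization of the Dirichlet Laplacian, i.e., \(\tilde{A}u=\Delta u\) on \(D(\tilde{A})=\{u\in H^1_0(\mathcal O)\,:\,\Delta u\in L_2(\mathcal O)\}\). For convex or \(C^{1,1}\) domains this domain coincides with \(H^2(\mathcal O)\cap H^1_0(\mathcal O)\), whereas on general Lipschitz domains it can be strictly larger; see [30] for the precise regularity theory.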

Let us now fix \(\tau >0\) and consider the bilinear form

$$\begin{aligned} a_\tau : V\times V&\rightarrow \mathbb {R}\\* (u,v)&\mapsto a_\tau (u,v):=\tau \langle j(u),j(v)\rangle _U + a(u,v), \end{aligned}$$

which is also continuous, symmetric, and elliptic in the sense of (70). Obviously, for \(u,v\in V\) we have the identity

$$\begin{aligned} a_\tau (u,v)&= \tau \langle j^* \varPhi j (u),v\rangle _{V^*\times V}- \langle A u,v\rangle _{V^*\times V}\\&=\langle (\tau j^*\varPhi j - A)u,v\rangle _{V^*\times V}, \end{aligned}$$

so that applying again the Lax–Milgram theorem, we can conclude that \((\tau j^* \varPhi j - A):V\rightarrow V^*\) is boundedly invertible. Therefore, the operator

$$\begin{aligned} (\tau I- \tilde{A}):D(\tilde{A})\subseteq U&\rightarrow U\\ u&\mapsto (\tau I-\tilde{A})u:=\tau u- \tilde{A}u, \end{aligned}$$

which coincides with \(\varPhi ^{-1}{j^*}^{-1} (\tau j^*\varPhi j- A)j^{-1}\) on \(D(\tilde{A})\), possesses a bounded inverse \((\tau I- \tilde{A})^{-1}=j (\tau j^*\varPhi j - A)^{-1} j^*\varPhi : U \rightarrow U\). Thus, the resolvent set \(\varrho (\tilde{A})\) of \(\tilde{A}\) contains all \(\tau \ge 0\). In particular, for any \(\tau >0\) the range of the operator \((\tau I-\tilde{A})\) is all of \(U\). Since, furthermore, \(\tilde{A}\) is dissipative, the Lumer–Phillips theorem implies that \(\tilde{A}\) generates a strongly continuous semigroup \(\{e^{t\tilde{A}}\}_{t\ge 0}\) of contractions on \(U\) (e.g., [35, Theorem 1.4.3]). Thus, an application of the Hille–Yosida theorem (e.g., [35, Theorem 1.3.1]) shows that the operator \(L_{\tau }^{-1}:= (I-\tau \tilde{A})^{-1} = \tau (\tau I-\tilde{A})^{-1}:U\rightarrow U\) is a contraction for each \(\tau >0\).
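
Since \(\tilde{A}\) is self-adjoint with \(\sigma (\tilde{A})\subset (-\infty ,0)\), this contraction property can also be read off directly from the spectral theorem; we record the one-line computation for the reader's convenience:

$$\begin{aligned} \Vert L_{\tau }^{-1}\Vert _{\mathcal {L}(U)} = \big \Vert \tau (\tau I-\tilde{A})^{-1}\big \Vert _{\mathcal {L}(U)} = \sup _{\lambda \in \sigma (\tilde{A})}\frac{\tau }{\tau -\lambda } \le 1 \qquad \text {for all}\quad \tau >0. \end{aligned}$$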

By an abuse of notation, we sometimes write \(A\) instead of \(\tilde{A}\).

Appendix 2: Proofs of Lemmas 3.9 and 4.6

Proof of Lemma 3.9

By (38) and (39) the stage equations (30) read as

$$\begin{aligned} (I-\tau \gamma _{1,1} A) w_{k,1} =&\ A u_k+f(t_k), \\ (I-\tau \gamma _{2,2} A) w_{k,2} =&\ A (u_k+\tau a_{2,1}w_{k,1})+f(t_k+a_2\tau )+c_{2,1}w_{k,1}. \end{aligned}$$

We begin with an application of the basic observation that

$$\begin{aligned} I=(I-CA)^{-1}(I-CA)=(I-CA)^{-1}-C\,(I-CA)^{-1}A \end{aligned}$$

implies, after rearranging for \((I-CA)^{-1}A\),

$$\begin{aligned} (I-C A)^{-1} A = -\frac{1}{C} I + \frac{1}{C} (I-C A)^{-1}. \end{aligned}$$

It follows that

$$\begin{aligned} w_{k,1}&=\big (-{\textstyle \frac{1}{\tau \gamma _{1,1}}} I + {\textstyle \frac{1}{\tau \gamma _{1,1}}} (I-\tau \gamma _{1,1}A)^{-1}\big ) u_k +(I-\tau \gamma _{1,1}A)^{-1}f(t_k)\\&= -{\textstyle \frac{1}{\tau \gamma _{1,1}}} u_k +L_{\tau ,1}^{-1} \big ( {\textstyle \frac{1}{\tau \gamma _{1,1}}} u_k + f(t_k)\big ). \end{aligned}$$

We denote

$$\begin{aligned} v_{k,1} = L_{\tau ,1}^{-1} \big ({\textstyle \frac{1}{\tau \gamma _{1,1}}} u_k + f(t_k)\big ). \end{aligned}$$

A similar computation for the second-stage equation yields

$$\begin{aligned} w_{k,2}&= \big (-{\textstyle \frac{1}{\tau \gamma _{2,2}}} I + {\textstyle \frac{1}{\tau \gamma _{2,2}}} (I-\tau \gamma _{2,2}A)^{-1}\big ) (u_k+\tau a_{2,1}w_{k,1})\\&\quad + (I-\tau \gamma _{2,2}A)^{-1} \big (f(t_k+a_2\tau ) +c_{2,1}w_{k,1} \big )\\&= -{\textstyle \frac{1}{\tau \gamma _{2,2}}} \big ( (1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}}) u_k +\tau a_{2,1} v_{k,1} \big )\\&\quad + L_{\tau ,2}^{-1} \Big ( {\textstyle \frac{1}{\tau \gamma _{2,2}}} \big ( (1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}}) u_k +\tau a_{2,1} v_{k,1} \big ) \\&\quad + f(t_k+a_2\tau ) +c_{2,1}(-{\textstyle \frac{1}{\tau \gamma _{1,1}}}u_k+v_{k,1}) \Big ) \\&= -{\textstyle \frac{1}{\tau \gamma _{2,2}}} (1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}}) u_k - {\textstyle \frac{a_{2,1}}{\gamma _{2,2}} }v_{k,1}\\&\qquad + L_{\tau ,2}^{-1} \Big ( \big ( {\textstyle \frac{1}{\tau \gamma _{2,2}}} (1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}}) - {\textstyle \frac{c_{2,1}}{\tau \gamma _{1,1}}} \big ) u_k \\&\qquad + ( {\textstyle \frac{a_{2,1}}{\gamma _{2,2}}} +c_{2,1}) v_{k,1} +f(t_k+a_2\tau ) \Big ). \end{aligned}$$

We denote

$$\begin{aligned} v_{k,2} = L_{\tau ,2}^{-1} \Big ( \big ( {\textstyle \frac{1}{\tau \gamma _{2,2}}} (1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}}) - {\textstyle \frac{c_{2,1}}{\tau \gamma _{1,1}}} \big ) u_k + ({\textstyle \frac{a_{2,1}}{\gamma _{2,2}}} +c_{2,1}) v_{k,1} +f(t_k+a_2\tau ) \Big ) \end{aligned}$$

and arrive at

$$\begin{aligned} u_{k+1}&= u_k + \tau m_1(-{\textstyle \frac{1}{\tau \gamma _{1,1}}}u_k+v_{k,1}) \\&\quad + \tau m_2\big ( -{\textstyle \frac{1}{\tau \gamma _{2,2}}} (1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}}) u_k - {\textstyle \frac{a_{2,1}}{\gamma _{2,2}}} v_{k,1} +v_{k,2} \big )\\&= \big (1-{\textstyle \frac{m_1}{\gamma _{1,1}}}-{\textstyle \frac{m_2}{\gamma _{2,2}}}(1-{\textstyle \frac{a_{2,1}}{\gamma _{1,1}}})\big )u_k +(\tau m_1-\tau m_2{\textstyle \frac{a_{2,1}}{\gamma _{2,2}}})v_{k,1} + \tau m_2 v_{k,2}. \end{aligned}$$

\(\square \)
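
To illustrate the structure just derived, the following minimal sketch (not part of the paper; all names and parameter values are illustrative) implements the one-stage special case \(S=1\), \(\gamma _{1,1}=m_1=1\), i.e., the linearly implicit Euler step \(u_{k+1}=L_{\tau }^{-1}(u_k+\tau f(t_k))\), for the 1D heat equation. A standard finite-difference Laplacian and a sparse direct solver stand in for the adaptive wavelet solver of the paper:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Model problem: u_t = u_xx + f on (0, 1), homogeneous Dirichlet boundary conditions.
n = 199                                   # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2   # discrete Laplacian
Id = identity(n)

def f(t, x):
    return np.sin(np.pi * x) * np.exp(-t)  # illustrative right-hand side

tau = 1e-3                                 # time step
u = x * (1.0 - x)                          # initial value u_0
t = 0.0
for k in range(100):
    # Stage equation (I - tau * gamma_11 * A) w_{k,1} = A u_k + f(t_k) with
    # gamma_11 = m_1 = 1; equivalently u_{k+1} = L_tau^{-1} (u_k + tau * f(t_k)).
    u = spsolve((Id - tau * A).tocsc(), u + tau * f(t, x))
    t += tau
```

In the adaptive Rothe setting, the exact solve spsolve would be replaced by an inexact elliptic solver that is only required to meet the stage tolerance \(\varepsilon _{k,1}\); this is precisely the perturbation analyzed in Lemma 4.6.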

Proof of Lemma 4.6

We start with the estimate

$$\begin{aligned} \Vert \hat{w}_{k,i}\Vert _{B^s_{q}(L_{q}(\mathcal O))}&= \big \Vert L_{\tau ,i}^{-1}R_{\tau ,k,i}(\tilde{u}_k,\tilde{w}_{k,1},...,\tilde{w}_{k,i-1})\big \Vert _{B^s_{q}(L_{q}(\mathcal O))} \\&\le \Vert L_{\tau ,i}^{-1}\Vert _{\mathcal {L}(L_2(\mathcal O),B^s_{q}(L_{q}(\mathcal O)))}\, \big \Vert R_{\tau ,k,i}(\tilde{u}_k,\tilde{w}_{k,1},...,\tilde{w}_{k,i-1})\big \Vert _{L_2(\mathcal O)}. \end{aligned}$$

The Lipschitz continuity of \(R_{\tau ,k,i}\) implies the linear growth property

$$\begin{aligned}&\big \Vert R_{\tau ,k,i}(\tilde{u}_k,\tilde{w}_{k,1},...,\tilde{w}_{k,i-1})\big \Vert _{L_2(\mathcal O)}\\&\quad \le C^{\text {Lip,R}}_{\tau ,k,(i)} \Big ( \Vert \tilde{u}_k\Vert _{L_2(\mathcal O)} + \sum _{j=1}^{i-1}\Vert \tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \Big ) + \big \Vert R_{\tau ,k,i}(0,...,0)\big \Vert _{L_2(\mathcal O)}\\&\quad \le \max \! \Big \{ C^{\text {Lip,R}}_{\tau ,k,(i)}, \big \Vert R_{\tau ,k,i}(0,...,0)\big \Vert _{L_2(\mathcal O)} \Big \} \times \Big (1\! +\! \Vert \tilde{u}_k\Vert _{L_2(\mathcal O)}\! + \!\sum _{j=1}^{i-1}\!\Vert \tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \Big )\\&\quad \le \max \! \Big \{ C^{\text {Lip,R}}_{\tau ,k,(i)}, \big \Vert R_{\tau ,k,i}(0,...,0)\big \Vert _{L_2(\mathcal O)} \Big \} \times \Big ( 1\! +\! \Vert u_k\Vert _{L_2(\mathcal O)}\! + \!\sum _{j=1}^{i-1}\! \Vert w_{k,j}\Vert _{L_2(\mathcal O)}\\&\qquad + \Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)} + \sum _{j=1}^{i-1}\Vert w_{k,j}-\tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \Big ). \end{aligned}$$

As before, the Lipschitz continuity of \(L_{\tau ,i}^{-1}R_{\tau ,k,i}\) implies

$$\begin{aligned}&\Vert w_{k,i}\Vert _{L_2(\mathcal O)}= \big \Vert L_{\tau ,i}^{-1}R_{\tau ,k,i}(u_k,w_{k,1},\ldots ,w_{k,i-1})\big \Vert _{L_2(\mathcal O)}\\&\quad \le \max \Big \{ C^{\text {Lip}}_{\tau , {k},({i})}, \big \Vert L_{\tau ,i}^{-1}R_{\tau ,k,i}(0,...,0)\big \Vert _{L_2(\mathcal O)} \Big \} \Big (1\! + \!\Vert u_k\Vert _{L_2(\mathcal O)}\! + \! \sum _{j=1}^{i-1}\! \Vert w_{k,j}\Vert _{L_2(\mathcal O)}\Big ). \end{aligned}$$

By induction, we estimate

$$\begin{aligned}&1 + \Vert u_k\Vert _{L_2(\mathcal O)} +\sum _{j=1}^{i-1} \Vert w_{k,j}\Vert _{L_2(\mathcal O)}\\&\quad \le \prod _{l=1}^{i-1} \Big (1+\max \Big \{ C^{\text {Lip}}_{\tau , {k},({l})}, \big \Vert L_{\tau ,l}^{-1}R_{\tau ,k,l}(0,\ldots ,0)\big \Vert _{L_2(\mathcal O)} \Big \} \Big ) \big (1+\Vert u_k\Vert _{L_2(\mathcal O)}\big ). \end{aligned}$$

Note that

$$\begin{aligned} \Vert \tilde{w}_{k,i}-\hat{w}_{k,i}\Vert _{L_2(\mathcal O)} \le \Vert \tilde{w}_{k,i}-\hat{w}_{k,i}\Vert _{H^{\nu }(\mathcal O)} \le \varepsilon _{k,i}. \end{aligned}$$

This enables us to argue along the same lines as in the proof of Theorem 2.21. We estimate

$$\begin{aligned}&\Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)} + \sum _{j=1}^{i-1}\Vert w_{k,j}-\tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \\&\quad \le (1+C^{\text {Lip}}_{\tau , {k},({i-1})}) \Big (\Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)}+\sum _{j=1}^{i-2}\Vert w_{k,j}-\tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \Big ) \\&\qquad + \Big \Vert L_{\tau ,{i-1}}^{-1}R_{\tau ,k,{i-1}}(\tilde{u}_k, \tilde{w}_{k,1}, \ldots , \tilde{w}_{k,i-2}) \\&\qquad - \big [L_{\tau ,{i-1}}^{-1}R_{\tau ,k,{i-1}}(\tilde{u}_k, \tilde{w}_{k,1}, \ldots , \tilde{w}_{k,i-2})\big ]_{\varepsilon _{k,{i-1}}} \Big \Vert _{L_2(\mathcal O)}\\&\quad \le (1+C^{\text {Lip}}_{\tau , {k},({i-1})}) \Big ( \Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)}+\sum _{j=1}^{i-2}\Vert w_{k,j}-\tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \Big ) + \varepsilon _{k,{i-1}} \end{aligned}$$

and conclude, by induction,

$$\begin{aligned}&\Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)} + \sum _{j=1}^{i-1}\Vert w_{k,j}-\tilde{w}_{k,j}\Vert _{L_2(\mathcal O)} \\&\quad \le \left( \,\prod _{l=1}^{i-1}(1+C^{\text {Lip}}_{\tau , {k},({l})})\right) \Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)} + \sum _{j=1}^{i-1}\varepsilon _{k,j}\prod _{l=j+1}^{i-1}(1+C^{\text {Lip}}_{\tau , {k},({l})}). \end{aligned}$$

The proof is completed by

$$\begin{aligned} \Vert u_k-\tilde{u}_k\Vert _{L_2(\mathcal O)} \le \sum _{j=0}^{k-1} \Big (\prod _{l=j+1}^{k-1} (C^\prime _{\tau ,l,(0)}-1)\Big ) \sum _{i=1}^S C^\prime _{\tau ,j,(i)} \varepsilon _{j,i}, \end{aligned}$$

which is shown as in Theorem 2.21. \(\square \)

About this article

Cite this article

Cioica, P.A., Dahlke, S., Döhring, N. et al. Convergence Analysis of Spatially Adaptive Rothe Methods. Found Comput Math 14, 863–912 (2014). https://doi.org/10.1007/s10208-013-9183-7
