Exponential fitting BDF algorithms and their properties

https://doi.org/10.1016/j.amc.2007.01.008

Abstract

We present two families of explicit and implicit BDF formulas capable of exact integration (up to round-off errors) of differential equations whose solutions belong to the space generated by linear combinations of exponentials of matrices, products of those exponentials with polynomials, and ordinary polynomials. These methods are suitable for stiff and highly oscillatory problems, so we study their local truncation error and stability properties. Plots of their 0-stability regions in terms of Ah are provided, as well as plots of their regions of absolute stability, which in the implicit case include the whole negative real axis. Exponential fitting algorithms depend on a parameter Ah; how to choose this parameter is a central question, and we show several ways to find a good value. Finally, numerical examples show the efficiency of the proposed codes, especially when integrating stiff problems where the Jacobian of the function has complex eigenvalues, or problems where the Jacobian has positive eigenvalues but the solutions contain no growing exponentials.

Introduction

In this paper, we are going to derive explicit and implicit BDF methods that solve the IVP
$$y'(x)=G(x,y(x)),\qquad y(x_0)=y_0,\tag{1}$$
where $y=[y_1,\dots,y_m]^t$ and $G=[g_1,\dots,g_m]^t$.

The most important improvement offered by these new methods is that they are able to integrate problems where the Jacobian has positive eigenvalues but the solutions contain no growing exponentials. We will see that these problems are very hard, and that most classical methods (including classical methods for stiff problems) are not stable on them. We will show very good results on problems that appear in the recent bibliography (see [11] or [25]).
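As a concrete illustration of this class of problems (our own toy example, not one of the test problems from [11] or [25]), consider
$$y'(x)=\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix}y(x),\qquad y(0)=\begin{pmatrix}1\\-1\end{pmatrix},$$
whose Jacobian has eigenvalues $\pm 1$, while the exact solution $y(x)=e^{-x}(1,-1)^{t}$ contains no growing exponential; any perturbation off the stable eigendirection is amplified like $e^{x}$, which is what makes such problems delicate for classical integrators.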

The new methods are also very efficient on two other kinds of numerical examples: stiff and highly oscillatory problems. The most common definition of stiffness (see Definition 2.3 in [39], and, for example, [11] or [25]) is: stiffness occurs if, for most classical explicit methods, the largest step size $h_n$ guaranteeing numerical stability is much smaller than the largest step $h_n^{*}$ for which the local discretization error is still sufficiently small (in norm), i.e., $h_n \ll h_n^{*}$. With this definition stiff problems include highly oscillatory problems; however, many scientists do not consider highly oscillatory problems to be stiff.
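To make the definition concrete, here is a minimal sketch (our own, not from the paper) showing the phenomenon with the forward Euler method on the scalar problem $y'=-1000(y-\cos x)-\sin x$, $y(0)=1$, whose exact solution is $y(x)=\cos x$: accuracy alone would allow steps of order $10^{-1}$, but stability of the explicit method requires $h<2/1000$.

```python
import numpy as np

# Stiff scalar test problem (illustrative): y' = -1000*(y - cos x) - sin x,
# exact solution y(x) = cos(x).  Accuracy only needs h ~ 0.1, but forward
# Euler is stable only for h < 2/1000.
def f(x, y):
    return -1000.0 * (y - np.cos(x)) - np.sin(x)

def forward_euler(h, x_end=1.0):
    n = int(round(x_end / h))
    x, y = 0.0, 1.0
    for _ in range(n):
        y = y + h * f(x, y)
        x += h
    return abs(y - np.cos(x))          # global error at x_end

for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"h = {h:.0e}   error = {forward_euler(h):.3e}")
# The error explodes for h = 1e-1 and 1e-2 (instability) and only becomes
# small once h is below the stability limit 2/1000, i.e. h_n << h_n*.
```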

Stiff problems (and highly oscillatory problems) are very common in many fields of the applied sciences (see [1]): atmosphere, biology, combustion, control theory, dynamics of missile guidance, dispersed phases, electronic circuit theory, fluids, heat transfer, chemical kinetics (the most common area), lasers, mechanics, molecular dynamics, nuclear, process industries, process vessels, reactor kinetics, etc. Stiff computation is therefore a very active area, and it is impossible to cite all the contributions.

Many methods have been considered to integrate stiff problems, but implicit Runge–Kutta (Radau, see [17]; STRIDE, see [2], [10] or [32]; DIRK, [2] or [7]; SDIRK, Gauss, Rosenbrock, …) and implicit BDF (MEBDF, see [5]; DASSL, see [34] or [35]; LSODE, see [18], …) codes have frequently been used.

The idea of using exponential fitting/adapted formulae for the numerical integration of stiff systems has recently received considerable attention (see [6], [22], [26], …).

The basic idea behind exponential fitting is to integrate (1) exactly when y(x) belongs to a large space that includes linear combinations of exponentials of matrices; in this paper we denote exponential fitting algorithms by the capital letters EF.

We use the name adapted methods for the methods that solve the IVP
$$y'(x)-Ay(x)=F(x,y(x)),\qquad y(x_0)=y_0$$
(where $y=[y_1,\dots,y_m]^t$, $A$ is an $m \times m$ constant matrix and $F=[f_1,\dots,f_m]^t$) exactly when $y(x)$ belongs to a space that includes linear combinations of exponentials of matrices; we denote these algorithms by the capital letter A.

That is, exponentially fitted methods depend on $G(x, y(x))$, while adapted methods (see [27], [42]) depend on $F(x,y(x))=G(x,y(x))-Ay(x)$. In the latter case, implicit adapted methods behave as explicit methods on linear problems.
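The paper's formulas are of BDF type; purely to illustrate the distinction in the simplest (first-order, scalar) setting, the sketch below compares an exponentially fitted explicit Euler step, whose $Ah$-dependent weight multiplies $G$ itself, with the corresponding adapted (exponential) Euler step, whose weight multiplies only $F=G-Ay$ while the linear part is propagated exactly. The step functions and the weight $\varphi(z)=(e^{z}-1)/z$ are our own illustrative choices, not the methods derived in Section 2.

```python
import numpy as np

# One step of size h for y' = G(x, y) (scalar case), with fitting parameter a.
# phi(z) = (exp(z) - 1)/z is the usual exponential-fitting weight (illustrative).
def phi(z):
    return np.expm1(z) / z if z != 0.0 else 1.0

def ef_euler_step(x, y, h, a, G):
    # Exponentially fitted explicit Euler: the Ah-dependent weight multiplies G itself.
    return y + h * phi(a * h) * G(x, y)

def adapted_euler_step(x, y, h, a, G):
    # Adapted (exponential) Euler: the linear part a*y is propagated exactly,
    # and the weight multiplies only F = G - a*y.
    F = G(x, y) - a * y
    return np.exp(a * h) * y + h * phi(a * h) * F

# Both steps reproduce the purely linear problem y' = a*y without truncation error:
a, h, y0 = -3.0, 0.25, 1.0
G = lambda x, y: a * y
print(ef_euler_step(0.0, y0, h, a, G),
      adapted_euler_step(0.0, y0, h, a, G),
      np.exp(a * h) * y0)        # all three values coincide
```

For $G(x,y)=ay$ the adapted step sees $F\equiv 0$, so only the exact propagator $e^{ah}$ acts; this is the sense in which adapted methods treat linear problems trivially.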

Exponential fitting and adapted methods are very similar, and we will show a relation between them in the following sections. In Section 2 we derive two families of exponential fitting methods, and in Section 3 we provide their local truncation errors. In Section 4 we provide plots of the 0-stability of the methods, and in Section 5 we give the definition of absolute stability and plots of the corresponding regions. In Sections 4 and 5 we will see that adapted and exponential fitting algorithms have different stability properties, and we will explain this apparent contradiction. Finally, in Section 6 we compare our algorithms with well-known ODE solvers on many different kinds of problems.

Section snippets

Derivation of the exponential fitting/adapted methods

Through the basic theory of difference equations (see [41]), we are going to build two families of exponential fitting/adapted methods. The first family integrates problem (1) with no truncation error when $y(x)$ is a vector of functions belonging to the space generated by the linear combinations of $e^{Ax},1,x,\dots,x^{r-1}$ (we denote these methods EF-I-r or A-I-r), while the second family integrates (1) exactly when $y(x)$ is a vector of functions belonging to the space generated by the linear combinations of $e^{Ax},\dots,x^{k-1}e^{Ax},1$.
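The derivation via difference equations is not reproduced in this snippet. Purely as an illustration of what exactness on the fitting space means in the scalar case, the following sketch (our own construction, with the hypothetical normalisation $a_r=1$ and fitting parameter $z=\lambda h$; the function name ef_bdf_coefficients is ours) determines the coefficients of an $r$-step implicit formula $\sum_{j=0}^{r}a_j y_{n+j}=h\,b\,f_{n+r}$ by imposing that it reproduces every function in $\{1,x,\dots,x^{r-1},e^{\lambda x}\}$ exactly.

```python
import numpy as np

def ef_bdf_coefficients(r, z):
    """Coefficients a_0..a_r, b of an r-step implicit formula
           sum_{j=0}^{r} a_j * y_{n+j} = h * b * f_{n+r}
    that is exact on span{1, x, ..., x^{r-1}, exp(lam*x)}, with z = lam*h.
    Illustrative construction only: normalisation a_r = 1, reference grid h = 1.
    The linear system degenerates as z -> 0, where exp(z*x) collapses onto 1."""
    # (function, derivative) pairs spanning the fitting space on the reference grid.
    funcs = [(lambda x, p=p: float(x) ** p,
              lambda x, p=p: 0.0 if p == 0 else p * float(x) ** (p - 1))
             for p in range(r)]
    funcs.append((lambda x: np.exp(z * x), lambda x: z * np.exp(z * x)))

    # Unknowns a_0, ..., a_{r-1}, b; the fixed a_r = 1 moves to the right-hand side.
    M = np.zeros((r + 1, r + 1))
    rhs = np.zeros(r + 1)
    for i, (u, du) in enumerate(funcs):
        M[i, :r] = [u(j) for j in range(r)]   # columns for a_0 .. a_{r-1}
        M[i, r] = -du(r)                      # column for b
        rhs[i] = -u(r)                        # contribution of a_r * u(r)
    sol = np.linalg.solve(M, rhs)
    return np.append(sol[:r], 1.0), sol[r]    # (a_0, ..., a_r), b

# For small z the coefficients of this construction approach the classical
# BDF2 values a = (1/3, -4/3, 1), b = 2/3 (exactness on {1, x, x^2} in the limit).
print(ef_bdf_coefficients(2, 1e-4))
```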

Local truncation error

 

Theorem 3

The multistep implicit adapted method of type I, $\mathcal{P}\sum_{j=0}^{r-1}\beta_{j0}\nabla^{j}y_{n}=hf_{n}$, of $r$ steps is convergent of order $r$. Its local truncation error can be expressed in the form
$$h^{r+1}\,\beta_{r0}\,(D-A)\,D^{r}y(x)+O(h^{r+2}).$$

Proof

As we have seen in [27], the operator providing the local truncation error of (21) can be written in the form $\mathcal{P}\,\beta_{r0}\,\nabla^{r}y(x)+O(h^{r+2})$. The differences satisfy $\nabla^{r}y(x)=h^{r}D^{r}y(x)+O(h^{r+1})$ and $\mathcal{P}y(x)=h(D-A)y(x)+O(h^{2})$, so the local truncation error of (21) can be written in the form
$$h^{r+1}\,\beta_{r0}\,(D-A)\,D^{r}y(x)+O(h^{r+2}).$$
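For instance, in the lowest-order case $r=1$ the theorem gives a local truncation error $h^{2}\beta_{10}(D-A)Dy(x)+O(h^{3})=h^{2}\beta_{10}\bigl(y''(x)-Ay'(x)\bigr)+O(h^{3})$, which vanishes precisely when $y'$ solves $u'=Au$, i.e. when $y(x)$ lies in the space spanned by $e^{Ax}$ and constants, in agreement with the exactness of the A-I-1 method on that space.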

Theorem 4

The

0-Stability

It is common to assume that if a method is exponentially fitted or adapted then it is 0-stable, but this is not true for BDF-type formulas. There are regions of Ah where the methods are not 0-stable, although it is easy to select a step size that avoids those regions.

In attempting to establish zero-stability, we consider the linear constant-coefficient homogeneous system
$$y'(x)=Ay(x),$$
where the eigenvalues $\lambda_i,\ i=1,\dots,m$, of the $m\times m$ matrix $A$ satisfy $\mathrm{Re}(\lambda_i)<0$.

It follows that solutions of the
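Numerically, 0-stability of an $r$-step formula with $Ah$-dependent coefficients can be checked through the root condition on the first characteristic polynomial $\rho(\zeta;Ah)=\sum_{j}a_j(Ah)\zeta^{j}$. A minimal sketch (a crude check of our own, reusing the hypothetical ef_bdf_coefficients construction shown earlier, with scalar parameter $z=\lambda h$):

```python
import numpy as np

def satisfies_root_condition(a, tol=1e-10):
    """Crude root-condition check for rho(zeta) = sum_j a_j * zeta**j:
    every root must lie in the closed unit disc (multiplicity of roots on the
    unit circle is not examined here; illustration only)."""
    roots = np.roots(a[::-1])            # np.roots expects highest degree first
    return bool(np.max(np.abs(roots)) <= 1.0 + tol)

# Scan some values of the fitting parameter z = lam*h and report whether the
# two-step construction from ef_bdf_coefficients keeps the root condition.
for z in np.linspace(-20.0, 20.0, 9):
    if abs(z) < 1e-8:
        continue                          # the construction is singular at z = 0
    a, _ = ef_bdf_coefficients(2, z)
    print(f"z = {z:6.1f}   root condition satisfied: {satisfies_root_condition(a)}")
```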

Absolute stability regions

The classical definitions of regions of absolute stability and A-stability were designed for linear multistep methods with constant coefficients. In this section those definitions are modified so as to provide a basis for the linear stability analysis of exponential fitting methods for ordinary differential equations of type (1). The stability properties of the proposed methods are analyzed to demonstrate their relevance to stiff oscillatory problems. The approach is very similar to that used in [8] to
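For a constant-coefficient linear multistep method the boundary of the region of absolute stability can be traced with the classical boundary-locus technique, $h\lambda=\rho(e^{i\theta})/\sigma(e^{i\theta})$; the definitions in this section adapt this picture to coefficients that depend on $Ah$. The following sketch (our own, for the classical BDF2 formula only, as a point of comparison) traces that boundary and confirms that the whole negative real axis stays inside the region, the property the abstract states is retained by the implicit EF/A formulas.

```python
import numpy as np

# Boundary locus of classical BDF2:  (3/2) y_{n+2} - 2 y_{n+1} + (1/2) y_n = h f_{n+2}.
# Points h*lambda on the boundary of the region of absolute stability satisfy
#     h*lambda = rho(zeta) / sigma(zeta)   with   zeta = exp(i*theta).
theta = np.linspace(0.0, 2.0 * np.pi, 400)
zeta = np.exp(1j * theta)
rho = 1.5 * zeta**2 - 2.0 * zeta + 0.5
sigma = zeta**2
hl = rho / sigma
print(f"Re(h*lambda) on the boundary lies in [{hl.real.min():.3f}, {hl.real.max():.3f}]")
# The real part equals (1 - cos(theta))**2 >= 0, so the locus never enters the
# left half-plane and the whole negative real axis is inside the stability region.
```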

Numerical examples

We have explained that the improvement offered by these methods is not only that one family is explicit, but also their behavior on problems where the Jacobian of the function has a positive eigenvalue (while the solution of the ODE contains no growing exponentials) and on stiff oscillatory problems.

We will say that an ODE is stiff oscillatory if $y'=f(x,y)$ and the Jacobian $f_y$ has eigenvalues $\lambda_i,\lambda_j$ with $\mathrm{Re}(\lambda_i)\ll 0$ compared with the interval of integration and $\left|\mathrm{Im}(\lambda_j)/\mathrm{Re}(\lambda_j)\right|\gg 1$.

In this kind of problem we

Conclusions

We have shown different types of exponential fitting methods that integrate (1) exactly when $y(x)$ (the solution of the problem) belongs to the space generated by the linear combinations of $e^{Ax},\dots,x^{k-1}e^{Ax},1$ or the space generated by $e^{Ax},1,x,\dots,x^{r-1}$.

The A-I-r algorithms were presented in [28]. The A-I-r methods and the EF-I-r algorithms are really the same methods; the only difference is the presentation of the ODE solver.

We have shown that the absolute stability regions of the EF-I-r and the A-I-r

References (43)

  • J.R. Cash et al.

    An MEBDF code for stiff initial value problems

    ACM Trans. Math. Soft.

    (1992)
  • J.R. Cash

    On the exponential fitting of composite multiderivative linear multistep methods

    SIAM J. Numer. Anal.

    (1981)
  • J.R. Cash

    Diagonally implicit Runge–Kutta formulae with error estimates

    J. IMA

    (1979)
  • J.P. Coleman et al.

P-stability and exponential-fitting methods for y″ = f(x, y)

    IMA J. Numer. Anal.

    (1996)
  • G.F. Corliss et al.

High-order ODE solvers via automatic differentiation and rational prediction

    (1997)
  • M. Crouzeix, Sur l’approximation des équations différentielles opérationnelles linéaires par des méthodes de Runge–Kutta,...
  • K. Dekker et al.

    Stability of Runge–Kutta methods for stiff nonlinear differential equations

    (1984)
  • W.H. Enright et al.

    Comparing numerical methods for stiff systems of ODE’s

    BIT

    (1975)
  • R.A. Friesner et al.

    A method for exponential propagation of large systems of stiff nonlinear differential equations

    J. Sci. Comput.

    (1989)
  • E. Gallopoulos et al.

    Efficient solution of parabolic equations by Krylov approximation methods

    SIAM J. Sci. Stat. Comput.

    (1992)
  • E. Hairer et al.

    Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems

    (1991)