Published by De Gruyter, October 23, 2018

A second-order weak approximation of SDEs using a Markov chain without Lévy area simulation

  • Toshihiro Yamada and Kenta Yamamoto

Abstract

This paper proposes a new Markov chain approach to second-order weak approximations of stochastic differential equations (SDEs) driven by d-dimensional Brownian motion. The scheme is explicitly constructed from polynomials of Brownian motion up to second order; neither discrete moment-matched random variables nor Lévy area simulation is used. Only d random variables are required in each one-step simulation of the scheme. In the Markov chain, a correction term involving Lie brackets of the vector fields associated with the SDE appears as the cost of not using moment-matched random variables.

Award Identifier / Grant number: 16K13773

Funding statement: This work is supported by JSPS KAKENHI (Grant Number 16K13773) from MEXT, Japan and a research fund from Tokio Marine Kagami Memorial Foundation.

A Appendix

Before we show the proofs of Lemma 2.1, Proposition 2.2, Theorem 2.4 and Corollary 2.6, we prepare some notation and results on stochastic calculus.

A.1 Stochastic calculus

We summarize the analysis of Wiener functionals, which will be a key tool for the proofs; see [13, 17] for details. Let $\mathcal{S}$ be the set of random variables of the form

$$F = f\Bigl(\int_0^T h_1(s)\,dW_s,\dots,\int_0^T h_n(s)\,dW_s\Bigr),$$

where $h_i \in L^2([0,T];\mathbb{R}^d)$, $i=1,\dots,n$, $n \geq 1$, and $f:\mathbb{R}^n \to \mathbb{R}$ is an infinitely continuously differentiable function such that $f$ and all of its derivatives have polynomial growth. The derivative of $F \in \mathcal{S}$ is defined as the $\mathbb{R}^d$-valued stochastic process $\{D_t F\}_{t \geq 0}$ given by

$$D_t F = \sum_{j=1}^{n} \partial_j f\Bigl(\int_0^T h_1(s)\,dW_s,\dots,\int_0^T h_n(s)\,dW_s\Bigr)\,h_j(t)$$

or, componentwise,

$$D_{i,t} F = \sum_{j=1}^{n} \partial_j f\Bigl(\int_0^T h_1(s)\,dW_s,\dots,\int_0^T h_n(s)\,dW_s\Bigr)\,h^i_j(t),\quad 1 \leq i \leq d.$$

The iterated derivatives are defined as $D^k_{t_1,\dots,t_k}F = D_{t_1}\cdots D_{t_k}F$, $k \in \mathbb{N}$. We can regard $D^k F$ as a square-integrable stochastic process indexed by $[0,T]^k$. For any $p \geq 1$, $D^k$ is closable on $\mathcal{S}$. Let $\mathbb{D}^{k,p}$, $k \in \mathbb{N}$, $p \geq 1$, be the closure of $\mathcal{S}$ with respect to the norm

$$\|F\|_{k,p} = \Bigl( E[|F|^p] + \sum_{j=1}^{k} E\bigl[\|D^j F\|^p_{L^2([0,T]^j)}\bigr] \Bigr)^{1/p}.$$

For $F \in \mathbb{D}^{k,p}$, $D^j F$ is called the Malliavin derivative of order $j$, $1 \leq j \leq k$. Let

$$\mathbb{L}^2 = \Bigl\{ u = \{(u^1_t,\dots,u^d_t)\}_{0 \leq t \leq T};\ E\Bigl[\int_0^T |u_t|^2\,dt\Bigr] < \infty \Bigr\}.$$

The adjoint operator $\delta : \mathbb{L}^2 \to L^2(\Omega)$ of $D$ is densely defined through the duality formula

$$E\Bigl[\int_0^T D_s F \cdot u_s\,ds\Bigr] = E[F\,\delta(u)],\quad F \in \mathbb{D}^{1,2}.$$

In particular, for $F = (F^1,\dots,F^m) \in (\mathbb{D}^{1,2})^m$, a square-integrable adapted process $u : [0,T] \times \Omega \to \mathbb{R}$ and $\varphi \in C_b^\infty(\mathbb{R}^m)$, we have

$$(\mathrm{A.1})\qquad E\Bigl[\int_0^t D_{i,s}\varphi(F)\,u_s\,ds\Bigr] = E\Bigl[\varphi(F)\int_0^t u_s\,dW^i_s\Bigr],\quad t \in [0,T],\ i=1,\dots,d.$$

Here we note that the stochastic integral on the right-hand side is the Itô integral, and also

$$D_{i,s}\varphi(F) = \sum_{j=1}^{m} \partial_j \varphi(F)\,D_{i,s}F^j,\quad s \geq 0,\ i=1,\dots,d,$$

holds by the chain rule of the Malliavin derivative.
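As a sanity check, the duality formula (A.1) can be verified numerically in the simplest scalar case $m = d = 1$, $F = W_t$, $u \equiv 1$, where the left-hand side reduces to $t\,E[\varphi'(W_t)]$. The following sketch is an illustration we add here, not part of the original argument; a test function of polynomial growth is used so that the Gauss–Hermite quadrature is exact.

```python
import numpy as np

def gauss_hermite_expectation(f, t, deg=40):
    # E[f(Z)] for Z ~ N(0, t), via the substitution z = sqrt(2 t) x
    x, w = np.polynomial.hermite.hermgauss(deg)
    return float(np.sum(w * f(np.sqrt(2.0 * t) * x)) / np.sqrt(np.pi))

t = 0.7
phi  = lambda z: z**3 - z       # test function of polynomial growth
dphi = lambda z: 3*z**2 - 1.0   # its derivative

# Left-hand side of (A.1) with u = 1: E[∫_0^t D_s φ(W_t) ds] = t E[φ'(W_t)],
# since D_s φ(W_t) = φ'(W_t) for s <= t.
lhs = t * gauss_hermite_expectation(dphi, t)

# Right-hand side: E[φ(W_t) ∫_0^t 1 dW_s] = E[φ(W_t) W_t].
rhs = gauss_hermite_expectation(lambda z: phi(z) * z, t)
print(lhs, rhs)
```

Both sides equal $3t^2 - t$ here; this is the one-dimensional Gaussian integration by parts identity underlying (A.1).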

Define the space of smooth Wiener functionals $\mathbb{D}^\infty = \bigcap_{k \in \mathbb{N},\,p \geq 1} \mathbb{D}^{k,p}$. We call $F \in (\mathbb{D}^\infty)^m$ a nondegenerate functional if the Malliavin covariance matrix

$$\sigma^F_{i,j} = \sum_{k=1}^{d} \int_0^T D_{k,t}F^i\,D_{k,t}F^j\,dt,\quad 1 \leq i,j \leq m,$$

is invertible a.s. and $\|\det(\sigma^F)^{-1}\|_p < \infty$ for all $p > 1$. For $\varphi \in C_b^\infty(\mathbb{R}^m)$ and a nondegenerate functional $F = (F^1,\dots,F^m) \in (\mathbb{D}^\infty)^m$, we have

$$(\mathrm{A.2})\qquad E[\partial_i \varphi(F)\,G] = E[\varphi(F)\,H_{(i)}(F,G)],\quad G \in \mathbb{D}^\infty,\ 1 \leq i \leq m,$$

where

$$H_{(i)}(F,G) = \delta\Bigl(\sum_{j=1}^{m} \gamma^F_{i,j}\,G\,DF^j\Bigr).$$

Here, $(\gamma^F_{i,j})_{1 \leq i,j \leq m}$ is the inverse matrix of $(\sigma^F_{i,j})_{1 \leq i,j \leq m}$.

Recall that the space of Watanabe distributions $\mathbb{D}^{-\infty}$ is given as the dual of the space $\mathbb{D}^{\infty}$. The generalized expectation ${}_{\mathbb{D}^{-\infty}}\langle F, G \rangle_{\mathbb{D}^{\infty}}$ for $F \in \mathbb{D}^{-\infty}$ and $G \in \mathbb{D}^{\infty}$ is defined as this dual pairing. Moreover, we denote by $\mathcal{S}(\mathbb{R}^m)$ and $\mathcal{S}'(\mathbb{R}^m)$ the space of rapidly decreasing Schwartz functions on $\mathbb{R}^m$ and its dual, the space of tempered distributions, respectively. Then the composition $\delta_y(F)$ of the Dirac delta function $\delta_y \in \mathcal{S}'(\mathbb{R}^m)$ with mass at $y \in \mathbb{R}^m$ and a nondegenerate $F \in (\mathbb{D}^\infty)^m$ is well-defined as an element of $\mathbb{D}^{-\infty}$, and one has

$$(\mathrm{A.3})\qquad E[\varphi(F)\,G] = \int_{\mathbb{R}^m} \varphi(y)\,{}_{\mathbb{D}^{-\infty}}\langle \delta_y(F), G \rangle_{\mathbb{D}^{\infty}}\,dy,\quad \varphi \in C_b^\infty(\mathbb{R}^m).$$

Furthermore,

$$(\mathrm{A.4})\qquad {}_{\mathbb{D}^{-\infty}}\langle \partial_i \delta_y(F), G \rangle_{\mathbb{D}^{\infty}} = {}_{\mathbb{D}^{-\infty}}\langle \delta_y(F), H_{(i)}(F,G) \rangle_{\mathbb{D}^{\infty}},\quad i=1,\dots,m,\ G \in \mathbb{D}^\infty.$$

In particular, let $F = W_t$ for $t > 0$. Then we have

$$(\mathrm{A.5})\qquad {}_{\mathbb{D}^{-\infty}}\langle \partial_i \delta_y(W_t), G \rangle_{\mathbb{D}^{\infty}} = {}_{\mathbb{D}^{-\infty}}\Bigl\langle \delta_y(W_t),\ \frac{1}{t} G W^i_t - \frac{1}{t}\int_0^t D_{i,s}G\,ds \Bigr\rangle_{\mathbb{D}^{\infty}}.$$

See, for example, [8, 14, 15, 19, 20, 21] for details on computations for Wiener functionals and their applications.

A.2 Key lemma

In the proofs of Lemma 2.1, Proposition 2.2 and Theorem 2.4, the following two lemmas will play an important role.

Lemma A.1.

Let $h$ be a bounded adapted process. Then, for $p \geq 2$, there exists $C > 0$ such that

$$\Bigl\| \int_{0 < t_1 < \cdots < t_k < t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k} \Bigr\|_p \leq C\,t^{\frac{1}{2}(\#\{l;\,j_l \neq 0\} + 2\#\{l;\,j_l = 0\})}$$

for all $t \in (0,1]$.

See [2, 3] for the proof.
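To illustrate the scaling in Lemma A.1, take $h \equiv 1$, $k = 2$ and $j_1 = j_2 = 1$, so that the iterated integral is $I_{(1,1)}(t) = (W_t^2 - t)/2$ and, since both indices are nonzero, the predicted bound is $C t$. A short numerical check of the $p = 2$ case, added here for illustration:

```python
import numpy as np

def second_moment_I11(t, deg=40):
    # E[((W_t^2 - t)/2)^2] with W_t ~ N(0, t), via Gauss-Hermite quadrature;
    # the integrand is a polynomial, so the quadrature is exact.
    x, w = np.polynomial.hermite.hermgauss(deg)
    z = np.sqrt(2.0 * t) * x
    return float(np.sum(w * ((z**2 - t) / 2.0)**2) / np.sqrt(np.pi))

for t in (0.1, 0.5, 1.0):
    m2 = second_moment_I11(t)
    # Gaussian moments give E[I_(1,1)(t)^2] = t^2/2 exactly,
    # so ||I_(1,1)(t)||_2 = t / sqrt(2) <= C t with C = 1.
    print(t, np.sqrt(m2) / t)
```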

Lemma A.2.

Let $k = 3,4,5$ and let $J = (j_1,\dots,j_k)$ be a multi-index with $\|J\| := \#\{l;\,j_l \neq 0\} + 2\#\{l;\,j_l = 0\} \leq 5$. Let $h$ be a bounded adapted process and $g \in C_b^\infty(\mathbb{R}^N)$. Then there exists $C > 0$ such that

$$\sup_{x \in \mathbb{R}^N}\Bigl| E\Bigl[ g(\bar{X}^{EM}_t(x)) \int_{0 < t_1 < \cdots < t_k < t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k} \Bigr] \Bigr| \leq C\,\|\nabla^{6-\|J\|} g\|_\infty\,t^3$$

for all $t \in (0,1]$.

Proof.

Let $t \in (0,1]$ and $x \in \mathbb{R}^N$. We mainly use the integration by parts formula (A.1) to get the assertion. If $j_k \neq 0$, then, since $D_{j_k,s}\bar{X}^{EM}_t(x) = V_{j_k}(x)$ for $s \leq t$,

$$E\Bigl[g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k}\Bigr] = E\Bigl[\int_0^t D_{j_k,t_k} g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_{k-1}<t_k} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_{k-1}}_{t_{k-1}}\,dt_k\Bigr] = E\Bigl[\int_0^t \sum_{i_1=1}^{N} \partial_{i_1} g(\bar{X}^{EM}_t(x))\,D_{j_k,t_k}\bar{X}^{EM,i_1}_t(x) \int_{0<t_1<\cdots<t_{k-1}<t_k} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_{k-1}}_{t_{k-1}}\,dt_k\Bigr] = \sum_{i_1=1}^{N} V^{i_1}_{j_k}(x) \int_0^t E\Bigl[\partial_{i_1} g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_{k-1}<t_k} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_{k-1}}_{t_{k-1}}\Bigr]\,dt_k.$$

Also, if $j_k = 0$, we have

$$E\Bigl[g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k}\Bigr] = \int_0^t E\Bigl[g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_{k-1}<t_k} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_{k-1}}_{t_{k-1}}\Bigr]\,dt_k.$$

Define the multi-index $J_{-1} := (j_1,\dots,j_{k-1})$ and apply the above computations according to whether $j_{k-1} = 0$ or $j_{k-1} \neq 0$. More generally, define $J_{-p} := (j_1,\dots,j_{k-p})$, $p = 1,\dots,k$, and iterate this procedure $p$ times until $p = n(J) := 6 - \|J\|$. Then there exists a multi-index $\alpha = (\alpha_1,\dots,\alpha_k) \in \{0,1,\dots,d\}^k$ with $\|\alpha\| = 6$ such that

$$E\Bigl[g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k}\Bigr] = \sum_{i_1,\dots,i_{n(J)}=1}^{N} E\Bigl[\partial_{i_1}\cdots\partial_{i_{n(J)}} g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_k<t} \prod_{r=1}^{n(J)} D_{j_{l_r},t_{l_r}}\bar{X}^{EM,i_r}_t(x)\,h(t_1)\,dW^{\alpha_1}_{t_1}\cdots dW^{\alpha_k}_{t_k}\Bigr],$$

where $j_{l_r} \neq 0$ and $\alpha_{l_r} = 0$ for pairwise distinct indices $l_1,\dots,l_{n(J)}$. Since

$$\int_{0<t_1<\cdots<t_k<t} \prod_{r=1}^{n(J)} D_{j_{l_r},t_{l_r}}\bar{X}^{EM,i_r}_t(x)\,h(t_1)\,dW^{\alpha_1}_{t_1}\cdots dW^{\alpha_k}_{t_k} = \prod_{r=1}^{n(J)} V^{i_r}_{j_{l_r}}(x) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{\alpha_1}_{t_1}\cdots dW^{\alpha_k}_{t_k},$$

it holds that

$$E\Bigl[g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k}\Bigr] = \sum_{i_1,\dots,i_{n(J)}=1}^{N} E\Bigl[\partial_{i_1}\cdots\partial_{i_{n(J)}} g(\bar{X}^{EM}_t(x)) \prod_{r=1}^{n(J)} V^{i_r}_{j_{l_r}}(x) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{\alpha_1}_{t_1}\cdots dW^{\alpha_k}_{t_k}\Bigr].$$

Therefore, by Lemma A.1, we have

$$\sup_{x \in \mathbb{R}^N}\Bigl| E\Bigl[g(\bar{X}^{EM}_t(x)) \int_{0<t_1<\cdots<t_k<t} h(t_1)\,dW^{j_1}_{t_1}\cdots dW^{j_k}_{t_k}\Bigr] \Bigr| \leq \|\nabla^{n(J)} g\|_\infty\,C\,t^{\|\alpha\|/2} = \|\nabla^{6-\|J\|} g\|_\infty\,C\,t^3$$

for some $C > 0$ independent of $t \in (0,1]$. ∎

Lemma A.2 is a slight generalization of [20, Lemma 1]. In the following, we typically take the function $h$ as $h(s) = \hat{V}_{j_1}\cdots\hat{V}_{j_{k-1}}V^i_{j_k}(X_s(x))$ or $h(s) = \hat{V}_{j_1}\cdots\hat{V}_{j_{k-1}}V^i_{j_k}(x)$.

A.3 Proof of Lemma 2.1

  1. Let $t \in (0,1]$ and $x \in \mathbb{R}^N$. In the following, the generic constant $C > 0$ might depend on $V_i$, $i=0,1,\dots,d$, is independent of $t \in (0,1]$, and its value might change from line to line. Using the Itô–Taylor expansion, $X_t(x)$ is expanded as

    $$X_t(x) = \bar{X}^{EM}_t(x) + \sum_{j_1,j_2=0}^{d} \hat{V}_{j_1}V_{j_2}(x)\,I_{(j_1,j_2)}(t) + \sum_{j_1,j_2,j_3=0}^{d} \hat{V}_{j_1}\hat{V}_{j_2}V_{j_3}(x)\,I_{(j_1,j_2,j_3)}(t) + \mathcal{E}(t,x),$$

    where $\mathcal{E}(t,x) = (\mathcal{E}^1(t,x),\dots,\mathcal{E}^N(t,x))$ is the residual of the expansion given by

    $$\mathcal{E}^i(t,x) = \sum_{j_1,\dots,j_4=0}^{d} \int_{0<t_1<\cdots<t_4<t} \hat{V}_{j_1}\hat{V}_{j_2}\hat{V}_{j_3}V^i_{j_4}(X_{t_1}(x))\,dW^{j_1}_{t_1}dW^{j_2}_{t_2}dW^{j_3}_{t_3}dW^{j_4}_{t_4},\quad i=1,\dots,N.$$

    We expand $E[\varphi(X_t(x))]$ around $E[\varphi(\bar{X}^{EM}_t(x))]$ as

    $$(\mathrm{A.6})\qquad E[\varphi(X_t(x))] = E[\varphi(\bar{X}^{EM}_t(x))] + \sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} E[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,I_{(j_1,j_2)}(t)] + \frac{1}{2}\sum_{i_1,i_2=1}^{N}\sum_{j_1,\dots,j_4=0}^{d} E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,I_{(j_1,j_2)}(t)\,I_{(j_3,j_4)}(t)] + r_\varphi(t,x),$$

    where

    $$r_\varphi(t,x) = \sum_{i=1}^{N} E\Bigl[\partial_i\varphi(\bar{X}^{EM}_t(x))\Bigl\{\sum_{j_1,j_2,j_3=0}^{d} \hat{V}_{j_1}\hat{V}_{j_2}V^i_{j_3}(x)\,I_{(j_1,j_2,j_3)}(t) + \mathcal{E}^i(t,x)\Bigr\}\Bigr] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,\dots,j_5=0}^{d} E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}\hat{V}_{j_4}V^{i_2}_{j_5}(x)\,I_{(j_1,j_2)}(t)\,I_{(j_3,j_4,j_5)}(t)] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=0}^{d} E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,I_{(j_1,j_2)}(t)\,\mathcal{E}^{i_2}(t,x)] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2,j_3=0}^{d} E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}\hat{V}_{j_2}V^{i_1}_{j_3}(x)\,I_{(j_1,j_2,j_3)}(t)\,\mathcal{E}^{i_2}(t,x)] + \frac{1}{2}\sum_{i_1,i_2=1}^{N}\sum_{j_1,\dots,j_6=0}^{d} E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}\hat{V}_{j_2}V^{i_1}_{j_3}(x)\,\hat{V}_{j_4}\hat{V}_{j_5}V^{i_2}_{j_6}(x)\,I_{(j_1,j_2,j_3)}(t)\,I_{(j_4,j_5,j_6)}(t)] + \frac{1}{2}\sum_{i_1,i_2=1}^{N} E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\mathcal{E}^{i_1}(t,x)\,\mathcal{E}^{i_2}(t,x)] =: r_{\varphi,1}(t,x) + r_{\varphi,2}(t,x) + r_{\varphi,3}(t,x) + r_{\varphi,4}(t,x) + r_{\varphi,5}(t,x) + r_{\varphi,6}(t,x).$$

    We immediately obtain

    $$\sup_{x \in \mathbb{R}^N} |r_{\varphi,k}(t,x)| \leq C\,\|\nabla^2\varphi\|_\infty\,t^3,\quad k = 3,4,5,6,$$

    since, for $p \geq 2$, $1 \leq i_1,i_2 \leq N$, $0 \leq j_1,\dots,j_6 \leq d$ and $t \in (0,1]$, we have

    $$\|\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,I_{(j_1,j_2)}(t)\,\mathcal{E}^{i_2}(t,x)\|_p \leq C t^3,$$
    $$\|\hat{V}_{j_1}\hat{V}_{j_2}V^{i_1}_{j_3}(x)\,I_{(j_1,j_2,j_3)}(t)\,\mathcal{E}^{i_2}(t,x)\|_p \leq C t^{7/2},$$
    $$\|\hat{V}_{j_1}\hat{V}_{j_2}V^{i_1}_{j_3}(x)\,\hat{V}_{j_4}\hat{V}_{j_5}V^{i_2}_{j_6}(x)\,I_{(j_1,j_2,j_3)}(t)\,I_{(j_4,j_5,j_6)}(t)\|_p \leq C t^3,$$
    $$\|\mathcal{E}^{i_1}(t,x)\,\mathcal{E}^{i_2}(t,x)\|_p \leq C t^4.$$

    Here, we used Lemma A.1; in particular, for $p \geq 2$, $1 \leq j_1,j_2,j_3 \leq d$ and $1 \leq i \leq N$,

    $$\|I_{(j_1,j_2)}(t)\|_p \leq C t,\qquad \|I_{(j_1,j_2,j_3)}(t)\|_p \leq C t^{3/2},\qquad \|\mathcal{E}^i(t,x)\|_p \leq C t^2.$$

    To obtain $\sup_{x \in \mathbb{R}^N} |r_\varphi(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3$ for $t \in (0,1]$, it therefore suffices to prove

    $$\sup_{x \in \mathbb{R}^N} |r_{\varphi,1}(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3 \quad\text{and}\quad \sup_{x \in \mathbb{R}^N} |r_{\varphi,2}(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3$$

    for $t \in (0,1]$. Using Lemma A.2, for $t \in (0,1]$, we have

    $$\Bigl| E\Bigl[\partial_i\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,j_2,j_3=0}^{d} \hat{V}_{j_1}\hat{V}_{j_2}V^i_{j_3}(x)\,I_{(j_1,j_2,j_3)}(t)\Bigr] \Bigr| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3,$$
    $$\Bigl| E\Bigl[\partial_i\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,\dots,j_4=0}^{d} \int_{0<t_1<\cdots<t_4<t} \hat{V}_{j_1}\hat{V}_{j_2}\hat{V}_{j_3}V^i_{j_4}(X_{t_1}(x))\,dW^{j_1}_{t_1}\cdots dW^{j_4}_{t_4}\Bigr] \Bigr| \leq C \sum_{l=1}^{3} \|\nabla^l\varphi\|_\infty\,t^3.$$

    Then we get $\sup_{x \in \mathbb{R}^N} |r_{\varphi,1}(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3$ for $t \in (0,1]$. Next, we deal with the term $r_{\varphi,2}(t,x)$. By an easy computation with the Itô formula, for $0 \leq j_1,\dots,j_5 \leq d$, we have

    $$I_{(j_1,j_2)}(t)\,I_{(j_3,j_4,j_5)}(t) = I_{(j_3,j_4,j_5,j_1,j_2)}(t) + I_{(j_3,j_4,j_1,j_5,j_2)}(t) + I_{(j_1,j_3,j_4,j_5,j_2)}(t) + I_{(j_3,j_1,j_4,j_5,j_2)}(t) + I_{(0,j_4,j_5,j_2)}(t)\,\mathbf{1}_{j_1=j_3\neq0} + I_{(j_3,0,j_5,j_2)}(t)\,\mathbf{1}_{j_1=j_4\neq0} + I_{(j_3,j_4,0,j_2)}(t)\,\mathbf{1}_{j_1=j_5\neq0} + I_{(j_3,j_4,j_1,j_2,j_5)}(t) + I_{(j_3,j_1,j_4,j_2,j_5)}(t) + I_{(j_1,j_3,j_4,j_2,j_5)}(t) + I_{(j_1,j_2,j_3,j_4,j_5)}(t) + I_{(j_1,j_3,j_2,j_4,j_5)}(t) + I_{(j_3,j_1,j_2,j_4,j_5)}(t) + I_{(0,j_2,j_4,j_5)}(t)\,\mathbf{1}_{j_1=j_3\neq0} + I_{(j_1,0,j_4,j_5)}(t)\,\mathbf{1}_{j_2=j_3\neq0} + I_{(0,j_4,j_2,j_5)}(t)\,\mathbf{1}_{j_1=j_3\neq0} + I_{(j_3,0,j_2,j_5)}(t)\,\mathbf{1}_{j_1=j_4\neq0} + I_{(j_3,j_1,0,j_5)}(t)\,\mathbf{1}_{j_2=j_4\neq0} + I_{(j_1,j_3,0,j_5)}(t)\,\mathbf{1}_{j_2=j_4\neq0} + I_{(0,0,j_5)}(t)\,\mathbf{1}_{j_1=j_3\neq0,\,j_2=j_4\neq0} + I_{(j_3,j_4,j_1,0)}(t)\,\mathbf{1}_{j_2=j_5\neq0} + I_{(j_3,j_1,j_4,0)}(t)\,\mathbf{1}_{j_2=j_5\neq0} + I_{(j_1,j_3,j_4,0)}(t)\,\mathbf{1}_{j_2=j_5\neq0} + I_{(j_3,0,0)}(t)\,\mathbf{1}_{j_1=j_4\neq0}\,\mathbf{1}_{j_2=j_5\neq0} + I_{(0,j_4,0)}(t)\,\mathbf{1}_{j_1=j_3\neq0}\,\mathbf{1}_{j_2=j_5\neq0}.$$

    We use Lemma A.2 again to attain $\sup_{x \in \mathbb{R}^N} |r_{\varphi,2}(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3$. Therefore we get

    $$\sup_{x \in \mathbb{R}^N} |r_\varphi(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3.$$

    We also remark that the product of iterated Itô integrals in the third term on the right-hand side of (A.6) is represented through the Itô formula as

    $$(\mathrm{A.7})\qquad I_{(j_1,j_2)}(t)\,I_{(j_3,j_4)}(t) = I_{(j_3,j_4,j_1,j_2)}(t) + I_{(j_3,j_1,j_4,j_2)}(t) + I_{(j_1,j_3,j_4,j_2)}(t) + I_{(j_3,0,j_2)}(t)\,\mathbf{1}_{j_1=j_4\neq0} + I_{(0,j_4,j_2)}(t)\,\mathbf{1}_{j_1=j_3\neq0} + I_{(j_1,j_2,j_3,j_4)}(t) + I_{(j_1,j_3,j_2,j_4)}(t) + I_{(j_3,j_1,j_2,j_4)}(t) + I_{(j_1,0,j_4)}(t)\,\mathbf{1}_{j_2=j_3\neq0} + I_{(0,j_2,j_4)}(t)\,\mathbf{1}_{j_1=j_3\neq0} + I_{(j_3,j_1,0)}(t)\,\mathbf{1}_{j_2=j_4\neq0} + I_{(j_1,j_3,0)}(t)\,\mathbf{1}_{j_2=j_4\neq0} + I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_3\neq0,\,j_2=j_4\neq0}$$

    for $0 \leq j_1,\dots,j_4 \leq d$. Then, from Lemma A.2, for $i_1,i_2 = 1,\dots,N$, we can see that

    $$E\Bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,\dots,j_4=0}^{d} \hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,I_{(j_1,j_2)}(t)\,I_{(j_3,j_4)}(t)\Bigr] = E\Bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,\dots,j_4=0}^{d} \hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_3\neq0,\,j_2=j_4\neq0}\Bigr] + \hat{r}_\varphi(t,x),$$

    where $\hat{r}_\varphi(t,x)$ satisfies $\sup_{x \in \mathbb{R}^N} |\hat{r}_\varphi(t,x)| \leq C \sum_{l=2}^{4} \|\nabla^l\varphi\|_\infty\,t^3$. Therefore we have

    $$(\mathrm{A.8})\qquad E[\varphi(X_t(x))] = E[\varphi(\bar{X}^{EM}_t(x))] + \sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} E[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,I_{(j_1,j_2)}(t)] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} E\Bigl[\frac{1}{2}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_1}V^{i_2}_{j_2}(x)\,I_{(0,0)}(t)\Bigr] + R_\varphi(t,x),$$

    where $R_\varphi(t,x)$ is given by $R_\varphi(t,x) = r_\varphi(t,x) + \hat{r}_\varphi(t,x)$, which satisfies $\sup_{x \in \mathbb{R}^N} |R_\varphi(t,x)| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3$.

    We finally compute the second and third terms on the right-hand side of (A.8). Using equation (A.3), for $i = 1,\dots,N$ and $j_1,j_2 = 0,1,\dots,d$,

    $$E[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,I_{(j_1,j_2)}(t)] = \int_{\mathbb{R}^d} \partial_i\varphi\Bigl(x + V_0(x)t + \sum_{j=1}^{d} V_j(x)y^j\Bigr)\,\Bigl\langle \delta_y(W_t),\ \hat{V}_{j_1}V^i_{j_2}(x)\int_0^t\!\!\int_0^{t_2} dW^{j_1}_{t_1}dW^{j_2}_{t_2} \Bigr\rangle\,dy.$$

    By the integration by parts for Watanabe distributions (A.4), together with the computation (A.5), we have

    $$\Bigl\langle \delta_y(W_t),\ \hat{V}_{j_1}V^i_{j_2}(x)\int_0^t\!\!\int_0^{t_2} dW^{j_1}_{t_1}dW^{j_2}_{t_2} \Bigr\rangle = \Bigl\langle \partial_{j_2}\delta_y(W_t),\ \hat{V}_{j_1}V^i_{j_2}(x)\int_0^t\!\!\int_0^{t_2} dW^{j_1}_{t_1}\,dt_2 \Bigr\rangle = \Bigl\langle \partial_{j_2}\delta_y(W_t),\ \hat{V}_{j_1}V^i_{j_2}(x)\int_0^t (t-t_1)\,dW^{j_1}_{t_1} \Bigr\rangle = \Bigl\langle \partial_{j_1}\partial_{j_2}\delta_y(W_t),\ \hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}t^2 \Bigr\rangle = \Bigl\langle \delta_y(W_t),\ \hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\} \Bigr\rangle$$

    and get

    $$E[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,I_{(j_1,j_2)}(t)] = E\Bigl[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\Bigr].$$

    We obviously have

    $$E\Bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,\dots,j_4=0}^{d} \hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_3\neq0,\,j_2=j_4\neq0}\Bigr] = E\Bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,j_2=1}^{d} \hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_1}V^{i_2}_{j_2}(x)\,\frac{1}{2}t^2\Bigr].$$

    Therefore, we obtain the assertion. ∎
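The moment-matching step just performed — replacing $I_{(j_1,j_2)}(t)$ by $\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}$ inside the expectation — can be checked numerically in a scalar setting. The sketch below is our illustration only; $a$ and $b$ stand for hypothetical Euler coefficients $x + V_0(x)t$ and $V_1(x)$. It compares $E[\partial\varphi(\bar{X}^{EM}_t)\,I_{(0,1)}(t)]$, evaluated via the duality formula, with $E[\partial\varphi(\bar{X}^{EM}_t)\,\frac{1}{2}W^0_t W^1_t]$, where $W^0_t = t$:

```python
import numpy as np

def expect(f, t, deg=40):
    # E[f(W_t)] for W_t ~ N(0, t) via Gauss-Hermite quadrature
    x, w = np.polynomial.hermite.hermgauss(deg)
    return float(np.sum(w * f(np.sqrt(2.0 * t) * x)) / np.sqrt(np.pi))

t, a, b = 0.3, 0.1, 0.8          # a = x + V0(x) t, b = V1(x): illustrative values
dphi  = lambda y: y**3           # ∂φ (polynomial, so the quadrature is exact)
ddphi = lambda y: 3 * y**2       # ∂²φ

# E[∂φ(a + b W_t) I_(0,1)(t)] with I_(0,1)(t) = ∫_0^t s dW_s; by the duality
# formula (A.1) this equals b E[∂²φ(a + b W_t)] ∫_0^t s ds = b (t²/2) E[∂²φ].
lhs = b * (t**2 / 2.0) * expect(lambda z: ddphi(a + b * z), t)

# E[∂φ(a + b W_t) · (1/2) W^0_t W^1_t] = E[∂φ(a + b W_t) · (t/2) W_t]
rhs = expect(lambda z: dphi(a + b * z) * 0.5 * t * z, t)
print(lhs, rhs)
```

Both sides agree, as the Gaussian integration by parts behind (A.4)–(A.5) predicts.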

A.4 Proof of Proposition 2.2

  1. Let $t \in (0,1]$ and $x \in \mathbb{R}^N$. In the proof, the generic constant $C > 0$ depends on $V_i$, $i = 0,1,\dots,d$, and is independent of $t \in (0,1]$. First, we define

    $$\hat{X}_t(x) = \bar{X}^{EM}_t(x) + \sum_{(j_1,j_2) \in \{0,1,\dots,d\}^2} \hat{V}_{j_1}V_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}$$

    and expand $E[\varphi(\hat{X}_t(x))]$ around $E[\varphi(\bar{X}^{EM}_t(x))]$.

    Then we get

    $$(\mathrm{A.9})\qquad E[\varphi(\hat{X}_t(x))] = E[\varphi(\bar{X}^{EM}_t(x))] + E\Bigl[\sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} \partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\Bigr] + E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,\dots,j_4=0}^{d} \frac{1}{2}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\,\frac{1}{2}\{W^{j_3}_t W^{j_4}_t - t\,\mathbf{1}_{j_3=j_4\neq0}\}\Bigr] + \tilde{r}_\varphi(t,x),$$

    where $\tilde{r}_\varphi(t,x)$ is the residual given by

    $$\tilde{r}_\varphi(t,x) = \int_0^1 \frac{1}{2}(1-\xi)^2 \sum_{i_1,i_2,i_3=1}^{N} E\Bigl[\partial_{i_1}\partial_{i_2}\partial_{i_3}\varphi\bigl(\bar{X}^{EM}_t(x) + \xi(\hat{X}_t(x) - \bar{X}^{EM}_t(x))\bigr) \prod_{k=1}^{3}\Bigl\{\sum_{j_{k,1},j_{k,2}=0}^{d} \hat{V}_{j_{k,1}}V^{i_k}_{j_{k,2}}(x)\,\frac{1}{2}\bigl(W^{j_{k,1}}_t W^{j_{k,2}}_t - t\,\mathbf{1}_{j_{k,1}=j_{k,2}\neq0}\bigr)\Bigr\}\Bigr]\,d\xi.$$

    Then we can immediately observe that $\sup_{x \in \mathbb{R}^N} |\tilde{r}_\varphi(t,x)| \leq C\,\|\nabla^3\varphi\|_\infty\,t^3$ holds. We note that one has

    $$(\mathrm{A.10})\qquad \frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\,\frac{1}{2}\{W^{j_3}_t W^{j_4}_t - t\,\mathbf{1}_{j_3=j_4\neq0}\} = \frac{1}{2}\bigl(I_{(j_1,j_2)}(t) + I_{(j_2,j_1)}(t)\bigr)\,\frac{1}{2}\bigl(I_{(j_3,j_4)}(t) + I_{(j_4,j_3)}(t)\bigr) = \frac{1}{4}\bigl(I_{(j_1,j_2)}(t)I_{(j_3,j_4)}(t) + I_{(j_1,j_2)}(t)I_{(j_4,j_3)}(t) + I_{(j_2,j_1)}(t)I_{(j_3,j_4)}(t) + I_{(j_2,j_1)}(t)I_{(j_4,j_3)}(t)\bigr).$$

    Applying formula (A.7) and Lemma A.2 to (A.10), we obtain the following expansion of the third term on the right-hand side of (A.9):

    $$E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,\dots,j_4=0}^{d} \frac{1}{2}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\,\frac{1}{2}\{W^{j_3}_t W^{j_4}_t - t\,\mathbf{1}_{j_3=j_4\neq0}\}\Bigr] = E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,\dots,j_4=0}^{d} \frac{1}{2}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,\frac{1}{4}\bigl\{I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_3\neq0,\,j_2=j_4\neq0} + I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_4\neq0,\,j_2=j_3\neq0} + I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_3\neq0,\,j_2=j_4\neq0} + I_{(0,0)}(t)\,\mathbf{1}_{j_1=j_4\neq0,\,j_2=j_3\neq0}\bigr\}\Bigr] + \bar{r}_\varphi(t,x),$$

    where $\bar{r}_\varphi(t,x)$ satisfies $\sup_{x \in \mathbb{R}^N} |\bar{r}_\varphi(t,x)| \leq C \sum_{l=2}^{4} \|\nabla^l\varphi\|_\infty\,t^3$.

    Then we have

    $$(\mathrm{A.11})\qquad E[\varphi(\hat{X}_t(x))] = E[\varphi(\bar{X}^{EM}_t(x))] + E\Bigl[\sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} \partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\Bigr] + E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} \frac{1}{8}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_1}V^{i_2}_{j_2}(x)\,t^2\Bigr] + E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} \frac{1}{8}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_2}V^{i_2}_{j_1}(x)\,t^2\Bigr] + \tilde{R}_\varphi(t,x),$$

    where $\tilde{R}_\varphi(t,x)$ is the residual given by $\tilde{R}_\varphi(t,x) = \tilde{r}_\varphi(t,x) + \bar{r}_\varphi(t,x)$, so that

    $$\sup_{x \in \mathbb{R}^N} |\tilde{R}_\varphi(t,x)| \leq C \sum_{l=2}^{4} \|\nabla^l\varphi\|_\infty\,t^3.$$

    To obtain the same expansion as in Lemma 2.1, we need to adjust the third and fourth terms on the right-hand side of (A.11) by adding a correction term to $\hat{X}_t(x)$. We then introduce the new process

    $$(\mathrm{A.12})\qquad \bar{X}_t(x) = \bar{X}^{EM}_t(x) + \sum_{j_1,j_2=0}^{d} \hat{V}_{j_1}V_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\} + \sum_{j_1,j_2=1}^{d}\sum_{k_1=1}^{N} \frac{1}{8}\hat{V}_{j_1}V_{j_2}(x)\,\{\hat{V}_{j_1}V^{k_1}_{j_2}(x) - \hat{V}_{j_2}V^{k_1}_{j_1}(x)\}\,t^2\,H_{(k_1)}(\bar{X}^{EM}_t(x),1).$$

    Here, the Malliavin weights $H_{(i)}(\bar{X}^{EM}_t(x),1)$, $i = 1,\dots,N$, are explicitly given by

    $$(\mathrm{A.13})\qquad H_{(i)}(\bar{X}^{EM}_t(x),1) = \sum_{k=1}^{d}\sum_{j=1}^{N} \frac{1}{t}\,A_{ij}(x)\,V^j_k(x)\,W^k_t.$$

    We will use the relation

    $$E[\partial_i g(\bar{X}^{EM}_t(x))] = E[g(\bar{X}^{EM}_t(x))\,H_{(i)}(\bar{X}^{EM}_t(x),1)],\quad g \in C_b^\infty(\mathbb{R}^N),$$

    in the following. The expectation $E[\varphi(\bar{X}_t(x))]$ can be expanded around $E[\varphi(\bar{X}^{EM}_t(x))]$ as

    $$(\mathrm{A.14})\qquad E[\varphi(\bar{X}_t(x))] = E[\varphi(\bar{X}^{EM}_t(x))] + E\Bigl[\sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} \partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\Bigr] + E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} \frac{1}{8}\partial_{i_1}\varphi(\bar{X}^{EM}_t(x))\,H_{(i_2)}(\bar{X}^{EM}_t(x),1)\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\{\hat{V}_{j_1}V^{i_2}_{j_2}(x) - \hat{V}_{j_2}V^{i_2}_{j_1}(x)\}\,t^2\Bigr] + E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} \frac{1}{8}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_1}V^{i_2}_{j_2}(x)\,t^2\Bigr] + E\Bigl[\sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} \frac{1}{8}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_2}V^{i_2}_{j_1}(x)\,t^2\Bigr] + \bar{R}_\varphi(t,x),$$

    where $\bar{R}_\varphi(t,x)$ is given by

    $$\bar{R}_\varphi(t,x) = \bar{R}_{\varphi,1}(t,x) + \bar{R}_{\varphi,2}(t,x) + \bar{R}_{\varphi,3}(t,x) + \bar{r}_\varphi(t,x)$$

    with

    $$\bar{R}_{\varphi,1}(t,x) = \int_0^1 \frac{1}{2}(1-\xi)^2 \sum_{i_1,i_2,i_3=1}^{N} E\Bigl[\partial_{i_1}\partial_{i_2}\partial_{i_3}\varphi\bigl(\bar{X}^{EM}_t(x) + \xi(\bar{X}_t(x) - \bar{X}^{EM}_t(x))\bigr) \prod_{k=1}^{3}\bigl(\bar{X}^{i_k}_t(x) - \bar{X}^{EM,i_k}_t(x)\bigr)\Bigr]\,d\xi,$$
    $$\bar{R}_{\varphi,2}(t,x) = \sum_{i_1,i_2=1}^{N} E\Bigl[\frac{1}{2}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\frac{1}{64}t^4 \prod_{k=1}^{2}\sum_{j_{k,1},j_{k,2}=1}^{d}\sum_{l_k=1}^{N} H_{(l_k)}(\bar{X}^{EM}_t(x),1)\,\hat{V}_{j_{k,1}}V^{i_k}_{j_{k,2}}(x)\,\{\hat{V}_{j_{k,1}}V^{l_k}_{j_{k,2}}(x) - \hat{V}_{j_{k,2}}V^{l_k}_{j_{k,1}}(x)\}\Bigr],$$
    $$\bar{R}_{\varphi,3}(t,x) = \sum_{i_1,i_2=1}^{N} E\Bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x)) \sum_{j_1,j_2=1}^{d}\sum_{j_3,j_4=0}^{d}\sum_{l=1}^{N} \frac{1}{8}H_{(l)}(\bar{X}^{EM}_t(x),1)\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\{\hat{V}_{j_1}V^{l}_{j_2}(x) - \hat{V}_{j_2}V^{l}_{j_1}(x)\}\,t^2\,\hat{V}_{j_3}V^{i_2}_{j_4}(x)\,\frac{1}{2}\{W^{j_3}_t W^{j_4}_t - t\,\mathbf{1}_{j_3=j_4\neq0}\}\Bigr].$$

    From the definition (A.12) of $\bar{X}_t(x)$ together with the representation (A.13), we easily see that

    $$\|\bar{X}^k_t(x) - \bar{X}^{EM,k}_t(x)\|_p \leq C t,\quad p \geq 2,\ k = 1,\dots,N.$$

    Then we have $\sup_{x \in \mathbb{R}^N} |\bar{R}_{\varphi,1}(t,x)| \leq C\,\|\nabla^3\varphi\|_\infty\,t^3$. Also, since

    $$\|H_{(i)}(\bar{X}^{EM}_t(x),1)\|_p \leq C t^{-1/2},\quad p \geq 2,\ i = 1,\dots,N,$$

    holds by (A.13), we have

    $$\|t^4\,H_{(l_1)}(\bar{X}^{EM}_t(x),1)\,H_{(l_2)}(\bar{X}^{EM}_t(x),1)\|_p \leq C t^3,\quad p \geq 2,\ 1 \leq l_1,l_2 \leq N.$$

    Then we obtain $\sup_{x \in \mathbb{R}^N} |\bar{R}_{\varphi,2}(t,x)| \leq C\,\|\nabla^2\varphi\|_\infty\,t^3$.

    In order to show the bound of $\bar{R}_{\varphi,3}(t,x)$, in view of (A.13) we need to estimate terms of the types

    $$E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,t\,W^{j_1}_t W^{j_2}_t W^{j_3}_t] \quad\text{and}\quad E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,t^2\,W^{j_1}_t].$$

    Note that it holds that

    $$t\,W^{j_1}_t W^{j_2}_t W^{j_3}_t = t\,\{I_{(j_2,j_3,j_1)}(t) + I_{(j_2,j_1,j_3)}(t) + I_{(j_1,j_2,j_3)}(t) - I_{(0,j_3)}(t)\,\mathbf{1}_{j_1=j_2\neq0} - I_{(0,j_1)}(t)\,\mathbf{1}_{j_2=j_3\neq0} - I_{(j_1,0)}(t)\,\mathbf{1}_{j_2=j_3\neq0}\},\quad 0 \leq j_1,j_2,j_3 \leq d.$$

    By applying Lemma A.2, we have

    $$|E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,t\,W^{j_1}_t W^{j_2}_t W^{j_3}_t]| \leq C \sum_{l=2}^{3} \|\nabla^l\varphi\|_\infty\,t^3,\quad 1 \leq i_1,i_2 \leq N,\ 1 \leq j_1,j_2,j_3 \leq d.$$

    Also, for $1 \leq i_1,i_2 \leq N$ and $0 \leq j_1 \leq d$,

    $$|E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,t^2\,W^{j_1}_t]| = \begin{cases} \bigl|\sum_{i_3=1}^{N} E[\partial_{i_1}\partial_{i_2}\partial_{i_3}\varphi(\bar{X}^{EM}_t(x))\,V^{i_3}_{j_1}(x)\,t^3]\bigr| & \text{if } j_1 \neq 0,\\ |E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,t^3]| & \text{if } j_1 = 0 \end{cases} \ \leq\ C \sum_{l=2}^{3} \|\nabla^l\varphi\|_\infty\,t^3.$$

    Then we obtain

    $$\sup_{x \in \mathbb{R}^N} |\bar{R}_{\varphi,3}(t,x)| \leq C \sum_{l=2}^{3} \|\nabla^l\varphi\|_\infty\,t^3,$$

    and therefore the error $\bar{R}_\varphi(t,x) = \bar{R}_{\varphi,1}(t,x) + \bar{R}_{\varphi,2}(t,x) + \bar{R}_{\varphi,3}(t,x) + \bar{r}_\varphi(t,x)$ satisfies

    $$\sup_{x \in \mathbb{R}^N} |\bar{R}_\varphi(t,x)| \leq C \sum_{l=2}^{4} \|\nabla^l\varphi\|_\infty\,t^3.$$

    The final and crucial step is as follows. We apply the integration by parts formula (A.2), in the form

    $$E[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))] = E[\partial_{i_1}\varphi(\bar{X}^{EM}_t(x))\,H_{(i_2)}(\bar{X}^{EM}_t(x),1)],\quad 1 \leq i_1,i_2 \leq N,$$

    to the third term on the right-hand side of (A.14), and get the following representation of $E[\varphi(\bar{X}_t(x))] - \bar{R}_\varphi(t,x)$:

    $$E[\varphi(\bar{X}^{EM}_t(x))] + \sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} E\Bigl[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\Bigr] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} \frac{1}{8} E\bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,t^2\,\{\hat{V}_{j_1}V^{i_2}_{j_2}(x) - \hat{V}_{j_2}V^{i_2}_{j_1}(x)\}\bigr] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} E\Bigl[\frac{1}{8}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_1}V^{i_2}_{j_2}(x)\,t^2\Bigr] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} E\Bigl[\frac{1}{8}\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_2}V^{i_2}_{j_1}(x)\,t^2\Bigr] = E[\varphi(\bar{X}^{EM}_t(x))] + \sum_{i=1}^{N}\sum_{j_1,j_2=0}^{d} E\Bigl[\partial_i\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^i_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\}\Bigr] + \sum_{i_1,i_2=1}^{N}\sum_{j_1,j_2=1}^{d} E\Bigl[\partial_{i_1}\partial_{i_2}\varphi(\bar{X}^{EM}_t(x))\,\hat{V}_{j_1}V^{i_1}_{j_2}(x)\,\hat{V}_{j_1}V^{i_2}_{j_2}(x)\,\frac{1}{4}t^2\Bigr],$$

    which is exactly the expansion of Lemma 2.1. ∎

A.5 Proof of Theorem 2.4

  1. The one-step small time approximation in Theorem 2.3 is summarized as follows: there is a $C > 0$ such that

    $$(\mathrm{A.15})\qquad \|P_s\varphi - Q_s\varphi\|_\infty \leq C \sum_{i=1}^{4} \|\nabla^i\varphi\|_\infty\,s^3,\quad \varphi \in C_b^\infty(\mathbb{R}^N),\ s \in (0,1].$$

    In the proof, the generic constant $C > 0$ might depend on $V_i$, $i = 0,1,\dots,d$, and on $T$, but is independent of $n \geq 1$; its value might change from line to line. The difference $P_T f(x) - Q_{s_1}Q_{s_2}\cdots Q_{s_n}f(x)$, $x \in \mathbb{R}^N$, can be decomposed as

    $$P_T f(x) - Q_{s_1}Q_{s_2}\cdots Q_{s_n}f(x) = (P_{s_1} - Q_{s_1})P_{T-t_1}f(x) + \sum_{k=2}^{n-1} Q_{s_1}\cdots Q_{s_{k-1}}(P_{s_k} - Q_{s_k})P_{T-t_k}f(x) + Q_{s_1}\cdots Q_{s_{n-1}}(P_{s_n} - Q_{s_n})f(x).$$

    We easily see that $\|Q_{s_1}\varphi\|_\infty \leq \|\varphi\|_\infty$ for every bounded function $\varphi$ on $\mathbb{R}^N$. Then we have

    $$\|P_T f - Q_{s_1}Q_{s_2}\cdots Q_{s_n}f\|_\infty \leq \sum_{k=1}^{n-1} \|P_{s_k}P_{T-t_k}f - Q_{s_k}P_{T-t_k}f\|_\infty + \|P_{s_n}f - Q_{s_n}f\|_\infty.$$

    In order to get the global approximation, it suffices to bound $\|P_s P_{T-t}f - Q_s P_{T-t}f\|_\infty$ for all $s \in (0,1]$ and $t \in (0,T]$. The following estimate (A.16) controls the higher-order derivatives of $P_{T-t}f$ using only the first-order derivative bound $\|\nabla f\|_\infty$:

    $$(\mathrm{A.16})\qquad \|\nabla^i P_{T-t}f\|_\infty \leq C\,\|\nabla f\|_\infty\,\frac{1}{(T-t)^{(i-1)/2}},\quad i \in \mathbb{N},\ t \in (0,T),$$

    by Kusuoka and Stroock [6]. By (A.15) and (A.16), we have

    $$\|P_s P_{T-t}f - Q_s P_{T-t}f\|_\infty \leq C \sum_{i=1}^{4} \|\nabla^i P_{T-t}f\|_\infty\,s^3 \leq C\,\|\nabla f\|_\infty \sum_{i=0}^{3} \frac{s^3}{(T-t)^{i/2}},\quad s \in (0,1],\ t \in (0,T).$$

    For $\gamma > \frac{m-1}{j}$, $m = 5$, $j = 3,4,5$, we have

    $$\sum_{k=1}^{n-1} \frac{s_k^{(m+1)/2}}{(T-t_k)^{(m+1-j)/2}} \leq C\,n^{-(m-1)/2}$$

    by a similar argument as in [4, 19]. Hence, when $\gamma > 4/3$,

    $$\|P_T f - Q_{s_1}\cdots Q_{s_n}f\|_\infty \leq C\,\|\nabla f\|_\infty\,\frac{1}{n^2} + \|P_{s_n}f - Q_{s_n}f\|_\infty.$$

    We finally show the bound of $\|P_{s_n}f - Q_{s_n}f\|_\infty$. Observe that

    $$X_t(x) - \bar{X}^{EM}_t(x) = \sum_{j_1,j_2=0}^{d} \int_{0<t_1<t_2<t} \hat{V}_{j_1}V_{j_2}(X_{t_1}(x))\,dW^{j_1}_{t_1}dW^{j_2}_{t_2},$$
    $$\bar{X}_t(x) - \bar{X}^{EM}_t(x) = \sum_{j_1,j_2=0}^{d} \hat{V}_{j_1}V_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\} + \sum_{k_1,k_2=1}^{N}\sum_{j_1,j_2,j_3=1}^{d} \frac{1}{8}\,[V_{j_1},V_{j_2}]^{k_1}(x)\,\hat{V}_{j_1}V_{j_2}(x)\,A_{k_1,k_2}(x)\,V^{k_2}_{j_3}(x)\,W^{j_3}_t\,t.$$

    Then the following small time approximations around $E[f(\bar{X}^{EM}_t(x))]$ hold, using only the bound $\|\nabla f\|_\infty$:

    $$|E[f(X_t(x))] - E[f(\bar{X}^{EM}_t(x))]| \leq C\,\|\nabla f\|_\infty\,t,$$
    $$|E[f(\bar{X}_t(x))] - E[f(\bar{X}^{EM}_t(x))]| \leq C\,\|\nabla f\|_\infty\,t$$

    for $(t,x) \in (0,1] \times \mathbb{R}^N$. Thus one has $\|P_{s_n}f - Q_{s_n}f\|_\infty \leq C\,\|\nabla f\|_\infty\,s_n$. By choosing $\gamma \geq 2$, we can raise the accuracy near the terminal time so that $\|P_{s_n}f - Q_{s_n}f\|_\infty = O(n^{-2})$, since the final time interval is given by $s_n = t_n - t_{n-1} = T(\frac{1}{n})^\gamma$. Therefore, for $\gamma \geq 2$, we have

    $$\|P_T f - Q_{s_1}Q_{s_2}\cdots Q_{s_n}f\|_\infty \leq C\,\|\nabla f\|_\infty\,\frac{1}{n^2}. \qquad ∎$$
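The time grid in the proof concentrates the mesh near the terminal time so that $s_n = T(1/n)^\gamma$. One standard construction consistent with this final step size is $t_k = T\,(1 - ((n-k)/n)^\gamma)$; the paper fixes its grid in the main text, so the particular formula below is an assumption for illustration:

```python
import numpy as np

def graded_grid(T, n, gamma):
    # t_k = T (1 - ((n - k)/n)^gamma), k = 0, ..., n: a graded grid whose
    # steps shrink toward the terminal time T, with s_n = T (1/n)^gamma.
    k = np.arange(n + 1)
    return T * (1.0 - ((n - k) / n) ** gamma)

T, n, gamma = 1.0, 10, 2.0
t = graded_grid(T, n, gamma)
s = np.diff(t)                     # step sizes s_k = t_k - t_{k-1}
print(s[-1], T * n**(-gamma))      # final step matches T (1/n)^gamma
```

For $\gamma = 1$ this reduces to the uniform grid, while $\gamma \geq 2$ gives the terminal refinement used above.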

A.6 Proof of Corollary 2.6

  1. When $N = d$, we can use the Bismut-type formula

    $$\sum_{i=1}^{d} E[\partial_i g(\bar{X}^{EM}_t(x))\,h_i(x)] = \sum_{i,j=1}^{d} E\Bigl[g(\bar{X}^{EM}_t(x))\,\frac{1}{t}\,[V^{-1}]_{ji}(x)\,h_j(x)\,W^i_t\Bigr],\quad t > 0,\ x \in \mathbb{R}^d,$$

    where $g, h_i \in C_b^\infty(\mathbb{R}^d)$, $i = 1,\dots,d$. Then we introduce

    $$\bar{X}_t(x) = \bar{X}^{EM}_t(x) + \sum_{(j_1,j_2) \in \{0,1,\dots,d\}^2} \hat{V}_{j_1}V_{j_2}(x)\,\frac{1}{2}\{W^{j_1}_t W^{j_2}_t - t\,\mathbf{1}_{j_1=j_2\neq0}\} + \sum_{j_1,j_2=1}^{d} \frac{1}{8}\hat{V}_{j_1}V_{j_2}(x)\,t^2 \sum_{i,l=1}^{d} \frac{1}{t}\,[V^{-1}]_{li}(x)\,\{\hat{V}_{j_1}V^l_{j_2}(x) - \hat{V}_{j_2}V^l_{j_1}(x)\}\,W^i_t$$

    for $t > 0$ and $x \in \mathbb{R}^d$. We obtain the following local approximation: there is a constant $C > 0$ such that

    $$|E[\varphi(X_t(x))] - E[\varphi(\bar{X}_t(x))]| \leq C \sum_{l=1}^{4} \|\nabla^l\varphi\|_\infty\,t^3$$

    for all $\varphi \in C_b^\infty(\mathbb{R}^d)$ and $t \in (0,1]$, and immediately obtain the weak approximation with error $O(1/n^2)$. The proof is essentially the same as the arguments in Appendices A.4 and A.5, and is therefore omitted. ∎

References

[1] L. G. Gyurkó and T. J. Lyons, Efficient and practical implementations of cubature on Wiener space, Stochastic Analysis 2010, Springer, Heidelberg (2011), 73–111. doi: 10.1007/978-3-642-15358-7_5.

[2] Y. Hu and S. Watanabe, Donsker's delta functions and approximation of heat kernels by the time discretization methods, J. Math. Kyoto Univ. 36 (1996), no. 3, 499–518. doi: 10.1215/kjm/1250518506.

[3] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer, Berlin, 1992. doi: 10.1007/978-3-662-12616-5.

[4] S. Kusuoka, Approximation of expectation of diffusion process and mathematical finance, Taniguchi Conference on Mathematics Nara '98, Adv. Stud. Pure Math. 31, The Mathematical Society of Japan, Tokyo (2001), 147–165. doi: 10.2969/aspm/03110147.

[5] S. Kusuoka, Approximation of expectation of diffusion processes based on Lie algebra and Malliavin calculus, Advances in Mathematical Economics. Vol. 6, Adv. Math. Econ. 6, Springer, Tokyo (2004), 69–83. doi: 10.1007/978-4-431-68450-3_4.

[6] S. Kusuoka and D. Stroock, Applications of the Malliavin calculus. I, Stochastic Analysis (Katata/Kyoto 1982), North-Holland Math. Libr. 32, North-Holland, Amsterdam (1984), 271–306. doi: 10.1016/S0924-6509(08)70397-0.

[7] T. Lyons and N. Victoir, Cubature on Wiener space, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 460 (2004), no. 2041, 169–198. doi: 10.1098/rspa.2003.1239.

[8] P. Malliavin, Stochastic Analysis, Grundlehren Math. Wiss. 313, Springer, Berlin, 1997. doi: 10.1007/978-3-642-15074-6.

[9] G. Maruyama, Continuous Markov processes and stochastic equations, Rend. Circ. Mat. Palermo (2) 4 (1955), 48–90. doi: 10.1007/BF02846028.

[10] G. N. Mil'šteĭn, Approximate integration of stochastic differential equations, Teor. Verojatn. Primenen. 19 (1974), 583–588.

[11] G. N. Mil'šteĭn, A method with second order accuracy for the integration of stochastic differential equations, Teor. Verojatn. Primenen. 23 (1978), no. 2, 414–419. doi: 10.1137/1123045.

[12] G. N. Mil'shteĭn, Weak approximation of solutions of systems of stochastic differential equations, Theory Probab. Appl. 30 (1986), 750–766. doi: 10.1137/1130095.

[13] D. Nualart, The Malliavin Calculus and Related Topics, Springer, Berlin, 2006.

[14] A. Takahashi and T. Yamada, An asymptotic expansion with push-down of Malliavin weights, SIAM J. Financial Math. 3 (2012), no. 1, 95–136. doi: 10.1137/100807624.

[15] A. Takahashi and T. Yamada, A weak approximation with asymptotic expansion and multidimensional Malliavin weights, Ann. Appl. Probab. 26 (2016), no. 2, 818–856. doi: 10.1214/15-AAP1105.

[16] D. Talay, Efficient numerical schemes for the approximation of expectations of functionals of the solution of a SDE and applications, Filtering and Control of Random Processes (Paris 1983), Lecture Notes in Control and Inform. Sci. 61, Springer, Berlin (1984), 294–313. doi: 10.1007/BFb0006577.

[17] S. Watanabe, Analysis of Wiener functionals (Malliavin calculus) and its applications to heat kernels, Ann. Probab. 15 (1987), no. 1, 1–39. doi: 10.1214/aop/1176992255.

[18] M. Wiktorsson, Joint characteristic function and simultaneous simulation of iterated Itô integrals for multiple independent Brownian motions, Ann. Appl. Probab. 11 (2001), no. 2, 470–487. doi: 10.1214/aoap/1015345301.

[19] T. Yamada, A higher order weak approximation scheme of multidimensional stochastic differential equations using Malliavin weights, J. Comput. Appl. Math. 321 (2017), 427–447. doi: 10.1016/j.cam.2017.03.001.

[20] T. Yamada and K. Yamamoto, A second order discretization with Malliavin weight and Quasi-Monte Carlo method for option pricing, Quant. Finance (2018). doi: 10.1080/14697688.2018.1430371.

[21] T. Yamada, Weak Milstein scheme without commutativity condition and its error bound, Appl. Numer. Math. 131 (2018), 95–108. doi: 10.1016/j.apnum.2018.04.007.

Received: 2018-04-28
Revised: 2018-09-25
Accepted: 2018-09-28
Published Online: 2018-10-23
Published in Print: 2018-12-01

© 2018 Walter de Gruyter GmbH, Berlin/Boston
