Realized Laplace transforms for pure jump semimartingales with presence of microstructure noise

Abstract

This paper considers the estimation of the integrated Laplace transform of the local ‘volatility’ using noisy high-frequency data. We allow for the presence of microstructure noise when the underlying process is a pure jump semimartingale over a fixed time interval [0, t]. We propose an efficient estimator for the integrated Laplace transform of volatility by applying the pre-averaging method. Under some mild conditions on the Lévy density, the asymptotic properties of the estimator, including consistency and asymptotic normality, are established. Simulation studies further confirm our theoretical results.

References

  • Aït-Sahalia Y, Jacod J (2010) Is Brownian motion necessary to model high frequency data? Ann Stat 38(5):3093–3128

  • Aït-Sahalia Y, Mykland PA (2004) Estimators of diffusions with randomly spaced discrete observations: a general theory. Ann Stat 32(5):2186–2222

  • Aït-Sahalia Y, Mykland P, Zhang L (2005) How often to sample a continuous-time process in the presence of market microstructure noise. Rev Financ Stud 18:351–416

  • Aït-Sahalia Y, Fan J, Xiu D (2010) High frequency covariance estimates with noisy and asynchronous data. J Am Stat Assoc 105:1504–1517

  • Aït-Sahalia Y, Mykland P, Zhang L (2011) Ultra high frequency volatility estimation with dependent microstructure noise. J Econom 160(1):160–175

  • Aldous D, Eagleson G (1978) On mixing and stability of limit theorems. Ann Probab 6(2):325–331

  • Andersen TG, Bollerslev T, Diebold F, Labys P (2003) Modeling and forecasting realized volatility. Econometrica 71(3):579–625

  • Andrews B, Calder M, Davis RA (2009) Maximum likelihood estimation for \(\alpha \)-stable autoregressive processes. Ann Stat 37:1946–1982

  • Bachelier L (1900) Théorie de la spéculation. Gauthier-Villars, Paris

  • Barndorff-Nielsen OE, Shephard N (2007) In: Advances in economics and econometrics: theory and applications, Ninth World Congress. Econometric Society Monographs. Cambridge University Press, Cambridge

  • Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N (2008) Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76(6):1481–1536

  • Barndorff-Nielsen OE, Hansen PR, Lunde A, Shephard N (2011) Multivariate realised kernels: consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading. J Econom 162(2):149–169

  • Bates DS (1996) Jumps and stochastic volatility: exchange rate processes implicit in deutsche mark options. Rev Financ Stud 9(1):69–107

  • Black F, Scholes M (1973) The pricing of options and corporate liabilities. J Polit Econ 81(3):637–654

  • Cont R, Tankov P (2004) Financial Modelling with Jump Processes. CRC Press, Boca Raton

  • Heston SL (1993) A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev Financ Stud 6(2):327–343

  • Jacod J, Protter P (2012) Discretization of Processes. Stochastic Modelling and Applied Probability, vol 67. Springer, Heidelberg

  • Jacod J, Shiryayev AV (2003) Limit Theorems for Stochastic Processes. Springer, New York

  • Jacod J, Li Y, Mykland PA, Podolskij M, Vetter M (2009) Microstructure noise in the continuous case: the pre-averaging approach. Stoch Process Appl 119(7):2249–2276

  • Jing BY, Kong XB, Liu Z (2012) Modeling high-frequency financial data by pure jump processes. Ann Stat 40(2):759–784

  • Jing BY, Liu Z, Kong XB (2014) On the estimation of integrated volatility with jumps and microstructure noise. J Bus Econ Stat 32(3):457–467

  • Karlin S, Taylor HM (1975) A First Course in Stochastic Processes. Academic Press, Cambridge

  • Klebaner FC (1998) Introduction to Stochastic Calculus with Applications. Imperial College Press, London

  • Klüppelberg C, Meyer-Brandis T, Schmidt A (2010) Electricity spot price modelling with a view towards extreme spike risk. Quant Finance 10:963–974

  • Kong XB, Liu Z, Jing BY (2015) Testing for pure-jump processes for high-frequency data. Ann Stat 43(2):847–877

  • Kou SG (2002) A jump-diffusion model for option pricing. Manage Sci 48(8):1086–1101

  • Li J (2013) Robust estimation and inference for jumps in noisy high frequency data: a local-to-continuity theory for the pre-averaging method. Econometrica 81(4):1673–1693

  • McCulloch JH (1996) Financial applications of stable distributions. Handb Stat 14:393–425

  • Merton RC (1973) Theory of rational option pricing. Bell J Econ Manag Sci (RAND Corp) 4(1):141–183

  • Mikosch T, Resnick S, Rootzén H, Stegeman A (2002) Is network traffic approximated by stable Lévy motion or fractional Brownian motion? Ann Appl Probab 12:23–68

  • Mykland P, Zhang L (2009) Inference for continuous semimartingales observed at high frequency: a general approach. Econometrica 77(5):1403–1445

  • Nikias CL, Shao M (1995) Signal Processing with Alpha-Stable Distributions and Applications. Wiley, New York

  • Rényi A (1963) On stable sequences of events. Sankhya: Indian J Stat Ser A 25(3):293–302

  • Sato KI (1999) Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge

  • Todorov V, Tauchen G (2012a) Inverse realized Laplace transforms for nonparametric volatility density estimation in jump-diffusions. J Am Stat Assoc 107(498):622–635

  • Todorov V, Tauchen G (2012b) The realized Laplace transform of volatility. Econometrica 80(3):1105–1127

  • Todorov V, Tauchen G (2012c) Realized Laplace transforms for pure-jump semimartingales. Ann Stat 40:1233–1262

  • Wang L, Liu Z, Xia X (2017) Rate efficient estimation of realized Laplace transform of volatility with microstructure noise. Working paper. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3179378

  • Wu L (2007) Modeling financial security returns using Lévy processes. Handb Oper Res Manag Sci 15:117–162

  • Xiu D (2010) Quasi-maximum likelihood estimation of volatility with high frequency data. J Econom 159:235–250

  • Zhang L (2006) Efficient estimation of stochastic volatility using noisy observations: a multi-scale approach. Bernoulli 12(6):1019–1043

Acknowledgements

Liu’s work is supported by FDCT of Macau (No. 078/2013/A3) and Xia’s work is supported by Hubei Provincial Natural Science Foundation of China (Grant No. 2017CFB141).

Author information

Corresponding author

Correspondence to Li Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Communicated by V. Loia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proofs

In the following, \(C\) denotes a generic constant, and we set \(t_i^n:=i\Delta _n\), \(g_j^n:=g\left( \frac{j}{k_n}\right) \) and \(E_i^n[\cdot ]:=E[\cdot |\mathcal {F}_{i\Delta _n}]\).

Proof of Theorem 1

In the first step, we use a Taylor expansion to separate the latent process \(X_t\) from the noise \(\varepsilon _t\), and then select a suitable \(k_n\) to weaken the effect of the noise.

$$\begin{aligned}&\cos \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n k_n\Delta _n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{U}\right) \\&\quad = \cos \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n k_n\Delta _n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{X}\right) \\&\qquad -\,\sin \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n k_n\Delta _n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{X}\right) \frac{(2u)^{\frac{1}{\beta }}}{\left( \phi _\beta ^n k_n\Delta _n\right) ^{\frac{1}{\beta }}}\Delta _i^n\overline{\varepsilon }\\&\qquad -\,\frac{1}{2}\cos (x^*)\left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n k_n\Delta _n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{\varepsilon }\right) ^2\\&\quad :=\cos \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n k_n\Delta _n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{X}\right) +R_i^{(1)}+R_i^{(2)}. \end{aligned}$$

For the noise \(\varepsilon \), Assumption 4 gives \(E(\varepsilon _t)=0\) and \(\mathrm {Var}(\varepsilon _t)=\omega _t^2\le C\), and the \(\varepsilon _t\) are mutually independent. Hence,

$$\begin{aligned} \Delta _i^n\overline{\varepsilon }= & {} \sum _{j=1}^{k_n-1}g_j^n(\varepsilon _{i+j}-\varepsilon _{i+j-1})\\= & {} \sum _{j=0}^{k_n-1}\left( g_j^n-g_{j+1}^n\right) \varepsilon _{i+j}. \end{aligned}$$

For \(0<r\le 2\), applying the Hölder inequality, we have

$$\begin{aligned} E\left( \left| \Delta _i^n \overline{\varepsilon }\right| ^r\right) \le Ck_n^{-\frac{r}{2}}. \end{aligned}$$

Then it can be easily shown that

$$\begin{aligned}&E\left[ \left( \sum _{i=0}^{n-k_n+1}\Delta _nR_i^{(1)}\right) ^2\right] \\&\quad \le \Delta _n^2k_n\sum _{i=0}^{n-k_n+1}E\left[ \sin ^2\left( (2u)^{\frac{1}{\beta }}\left( \phi _{\beta }^nk_n\Delta _n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{X}\right) \right. \\&\qquad \left. \cdot (2u)^{\frac{2}{\beta }}\left( \phi _{\beta }^nk_n\Delta _n\right) ^{-\frac{2}{\beta }}\left( \Delta _i^n\overline{\varepsilon }\right) ^2\right] \\&\quad \le Ck_n^{-\frac{2}{\beta }}\Delta _n^{1-\frac{2}{\beta }},\\&E\left[ \left| \sum _{i=0}^{n-k_n+1}\Delta _nR_i^{(2)}\right| \right] \le Ck_n^{-\frac{2}{\beta }-1}\Delta _n^{-\frac{2}{\beta }}. \end{aligned}$$

To ensure that the effect of the noise is negligible, we need \(k_n=O(n^{\frac{2}{\beta +2}+\tau })\) with \(0<\tau <\frac{\beta }{\beta +2}\).
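As a quick numerical illustration of the bound \(E\left( \left| \Delta _i^n \overline{\varepsilon }\right| ^2\right) \le Ck_n^{-1}\) (this sketch is our own aid and not part of the proof; the Gaussian noise, the triangular weight \(g(x)=x\wedge (1-x)\) and the sample sizes are illustrative assumptions), one can check by simulation that the variance of a pre-averaged noise increment decays like \(1/k_n\):

```python
import numpy as np

rng = np.random.default_rng(0)
g = lambda x: np.minimum(x, 1 - x)        # an illustrative weight with g(0) = g(1) = 0

def preavg_noise_variance(k_n, n_rep=20000, omega=1.0):
    """Monte Carlo variance of sum_{j=1}^{k_n-1} g(j/k_n) (eps_{i+j} - eps_{i+j-1})."""
    w = g(np.arange(1, k_n) / k_n)                    # weights g_j^n
    eps = omega * rng.standard_normal((n_rep, k_n))   # i.i.d. noise (Gaussian here)
    return np.var(np.diff(eps, axis=1) @ w)

for k_n in (10, 40, 160):
    # k_n * Var stays roughly constant, consistent with E|Delta_i^n eps_bar|^2 <= C / k_n
    print(k_n, k_n * preavg_noise_variance(k_n))
```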

Note that \(X_t\) admits the following integral representation:

$$\begin{aligned} X_t = X_0+\int _0^t {\alpha }_s\mathrm {d}s+\int _0^t\int _\mathbb {R}\sigma _{s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x) +Y_t \end{aligned}$$

where \(\mu \) is a homogeneous Poisson random measure with compensator \(\nu (\mathrm {d}x) = \frac{A}{|x|^{\beta +1}}\mathrm {d}x\), corresponding to the Lévy density \(\nu (x)=\frac{A}{|x|^{\beta +1}}\), and \(\tilde{\mu } = \mu -\nu \) is the compensated measure. We define \(L_t = \int _0^t\int _{\mathbb {R}}x \tilde{\mu }(\mathrm {d}s, \mathrm {d}x)\). Next, we make the following decomposition:

$$\begin{aligned}&V_T(U,\Delta _n, \beta , u, g) - \int _{0}^{T}e^{-u|\sigma _s|^\beta } ds \nonumber \\&\quad = \sum _{i=0}^{n-k_n+1} \Delta _n \cos \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n\Delta _n k_n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{X}\right) \nonumber \\&\qquad - \int _{0}^{T}e^{-u|\sigma _t|^\beta } dt + \sum _{i=0}^{n-k_n+1}\Delta _n\left( R_i^{(1)}+R_i^{(2)}\right) \nonumber \\&\quad = \sum _{i=0}^{n-k_n+1} \Delta _n\left[ \cos \left( \frac{(2u)^{\frac{1}{\beta }}}{\left( \phi _\beta ^n\Delta _n k_n\right) ^{\frac{1}{\beta }}}\sigma _{t_i^n}\Delta _i^n\overline{L}\right) - e^{-u|\sigma _{t_i^n}|^{\beta }}\right] \nonumber \\&\qquad + \sum _{i=0}^{n-k_n+1} \int _{t_i^n}^{t_{i+1}^n} \left( e^{-u|\sigma _{t_i^n}|^\beta }-e^{-u|\sigma _{s}|^\beta }\right) ds \nonumber \\&\qquad + \sum _{i=0}^{n-k_n+1} \Delta _n \left[ \cos \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n\Delta _n k_n\right) ^{-\frac{1}{\beta }}\Delta _i^n\overline{X}\right) \right. \nonumber \\&\left. \qquad - \cos \left( (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n\Delta _n k_n\right) ^{-\frac{1}{\beta }}\sigma _{t_i^n}\Delta _i^n\overline{L}\right) \right] \nonumber \\&\qquad - \int _{(n-k_n)\Delta _n}^{T} e^{-u|\sigma _s|^\beta } ds + \sum _{i=0}^{n-k_n+1}\Delta _n \left( R_i^{(1)}+R_i^{(2)}\right) \nonumber \\&\quad {:=} \sum _{i=0}^{n-k_n+1} \xi _{i,u}^{(1)} + \sum _{i=0}^{n-k_n+1} \xi _{i,u}^{(2)} + \sum _{i=0}^{n-k_n+1} \xi _{i,u}^{(3)} + Re_u \nonumber \\&\qquad + \sum _{i=0}^{n-k_n+1} \Delta _n \left( R_i^{(1)}+R_i^{(2)}\right) , \end{aligned}$$
(7)

where the definitions of \(\xi _{i,u}^{(1)}\), \(\xi _{i,u}^{(2)}\), \(\xi _{i,u}^{(3)}\) and \(Re_u\) are clear.
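Before bounding each term in (7), it may help to make the statistic \(V_T(U,\Delta _n, \beta , u, g)\) concrete. The following minimal Python sketch (our own illustration; the function name, the triangular weight \(g(x)=x\wedge (1-x)\) and the way \(k_n\) is supplied are assumptions, not notation from the paper) computes the pre-averaged increments \(\Delta _i^n\overline{U}\) and the cosine transform entering the left-hand side of (7):

```python
import numpy as np

def realized_laplace_transform(U, delta_n, beta, u, k_n,
                               g=lambda x: np.minimum(x, 1 - x)):
    """Pre-averaged statistic V_T(U, Delta_n, beta, u, g) from noisy observations.

    U       : array of noisy observations U_0, U_{Delta_n}, ..., U_{n Delta_n}
    delta_n : sampling interval Delta_n
    beta    : activity index of the driving stable process
    u       : argument of the Laplace transform
    k_n     : pre-averaging window length
    g       : weight function on [0, 1] with g(0) = g(1) = 0
    """
    n = len(U) - 1
    w = g(np.arange(1, k_n) / k_n)                             # g_j^n, j = 1, ..., k_n - 1
    phi_beta = np.sum(g(np.arange(k_n) / k_n) ** beta) / k_n   # phi_beta^n
    dU = np.diff(U)
    # pre-averaged increments: Delta_i^n Ubar = sum_j g_j^n (U_{i+j} - U_{i+j-1})
    pre_avg = np.array([w @ dU[i:i + k_n - 1] for i in range(n - k_n + 2)])
    scale = (2 * u) ** (1 / beta) / (phi_beta * k_n * delta_n) ** (1 / beta)
    return delta_n * np.sum(np.cos(scale * pre_avg))
```

With \(k_n\) of order \(n^{\frac{2}{\beta +2}+\tau }\), as required above, this quantity is the estimator of \(\int _0^{T}e^{-u|\sigma _s|^{\beta }}\mathrm {d}s\) whose error terms are bounded below.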

For the term \(\xi _{i,u}^{(1)}\), by the self-similarity of the stable process \(L\), we have \(\Delta _i^nL=\Delta _i^nZ=\Delta _n^{\frac{1}{\beta }}S\) in distribution, where \(S\) has characteristic function \(E(e^{iuS})=e^{-\frac{|u|^{\beta }}{2}}\). Then we can conclude that

$$\begin{aligned} \left\{ \begin{array}{ll} &{} E_i^n\left( \cos \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _\beta ^n\Delta _n k_n)^{\frac{1}{\beta }}}\sigma _{t_i^n} \Delta _i^n\overline{L}\right) -e^{-u|\sigma _{t_i^n}|^\beta }\right) = 0,\\ &{} E_i^n\left( \cos \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _\beta ^n\Delta _n k_n)^{\frac{1}{\beta }}}\sigma _{t_i^n} \Delta _i^n\overline{L}\right) -e^{-u|\sigma _{t_i^n}|^\beta }\right) ^2\\ &{}~~= G_\beta (u^{\frac{1}{\beta }}|\sigma _{t_i^n}|)_0^n, \\ &{} E_i^n\left( \cos \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _\beta ^n\Delta _n k_n)^{\frac{1}{\beta }}}\sigma _{t_i^n} \Delta _i^n\overline{L}\right) -e^{-u|\sigma _{t_i^n}|^\beta }\right) ^4\le C, \end{array} \right. \end{aligned}$$
(8)

where \(G_\beta (x)_0^n=\frac{e^{-2^\beta x^\beta }-2e^{-2x^\beta }+1}{2}\).

For the term \(\xi _{i,u}^{(2)}\), we first subtract and add the intermediate term \(e^{-u|\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r|^{\beta }}\) and then take a Taylor expansion of \(e^{-u|\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r|^{\beta }}\) at the point \(\sigma _{t_i^n}\). It follows that

$$\begin{aligned} \xi _{i,u}^{(2)}= & {} \int _{t_i^n}^{t_{i+1}^n}e^{-u|\sigma _{t_i^n}|^{\beta }}u\cdot \text {sign}(\sigma _{t_i^n}) \beta |\sigma _{t_i^n}|^{\beta -1}\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r\mathrm {d}s\\&+\,\int _{t_i^n}^{t_{i+1}^n}\left[ \left( e^{-u|\sigma _s^*|^{\beta }}u\cdot \text {sign}(\sigma ^*) \beta |\sigma _s^*|^{\beta -1}\right. \right. \\&\left. \left. -\,e^{-u|\sigma _{t_i^n}|^{\beta }}u\cdot \text {sign}(\sigma _{t_i^n}) \beta |\sigma _{t_i^n}|^{\beta -1}\right) \int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r\right] \mathrm {d}s\\&+\,\int _{t_i^n}^{t_{i+1}^n}(e^{-u|\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r|^{\beta }}-e^{-u|\sigma _s|^{\beta }})\mathrm {d}s\\:= & {} \xi _{i,u}^{(2)}(1)+\xi _{i,u}^{(2)}(2)+\xi _{i,u}^{(2)}(3), \end{aligned}$$

where \(\sigma _s^*\) is between \(\sigma _{t_i^n}\) and \(\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r\). Since \(E_{i}^n\left( \xi _{i,u}^{(2)}(1)\right) = 0\), using the Cauchy–Schwarz inequality, the Itô isometry and Assumption 3, we get

$$\begin{aligned} E_i^n\left( \xi _{i,u}^{(2)}(1)^2\right)\le & {} CE_i^n\left( \int _{t_i^n}^{t_{i+1}^n}\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r\mathrm {d}s\right) ^2\le C\Delta _n^3. \end{aligned}$$
(9)

To bound \(\xi _{i,u}^{(2)}(2)\), we use arguments similar to those in Todorov and Tauchen (2012c). By the Cauchy–Schwarz inequality, the Itô isometry, Assumption 3 and the Burkholder–Davis–Gundy inequality, we have

$$\begin{aligned}&E|\xi _{i,u}^{(2)}(2)|\nonumber \\&\quad \le C\Delta _n^{\frac{3}{2}}\left[ E\left( \sup _{s\in [t_i^n,t_{i+1}^n]}\left( e^{-u|\sigma _s^*|^{\beta }}u\cdot \text {sign}(\sigma ^*) \beta |\sigma _s^*|^{\beta -1}\right. \right. \right. \nonumber \\&\qquad \left. \left. \left. -e^{-u|\sigma _{t_i^n}|^{\beta }}u\cdot \text {sign}(\sigma _{t_i^n}) \beta |\sigma _{t_i^n}|^{\beta -1}\right) ^2\right) \right] ^{\frac{1}{2}}\nonumber \\&\quad \le C\Delta _n^{1+\frac{\beta }{2}}. \end{aligned}$$
(10)

By a Taylor expansion at \(|\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r|^{\beta }\), together with the inequality \(||x-y|^p-|x|^p|\le Cp(|y|^p+|x|^{p-1}|y|)\) for \(p>1\), we obtain

$$\begin{aligned}&E_i^n\left( \left| \xi _{i,u}^{(2)}(3)\right| \right) \nonumber \\&\quad =E_i^n\left| \int _{t_i^n}^{t_{i+1}^n}ue^{-u|\sigma _s^{**}|^{\beta }}\left( |\sigma _s|^{\beta }-|\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma _r}\mathrm {d}W_r|^{\beta }\right) \mathrm {d}s\right| \nonumber \\&\quad \le C\Delta _n^2, \end{aligned}$$
(11)

where \(\sigma _s^{**}\) is between \(\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r\) and \(\sigma _s=\sigma _{t_i^n}+\int _{t_i^n}^s\tilde{\alpha }_r\mathrm {d}r+\int _{t_i^n}^s\tilde{\sigma }_r\mathrm {d}W_r\).

For the term \(\xi _{i,u}^{(3)}\), we first subtract and add the same term \(\cos \left( (2u)^{\frac{1}{\beta }}(\phi _{\beta }^n\Delta _nk_n)^{-\frac{1}{\beta }}\left( \sum _{j=1}^{k_n-1}g_j^n(\int _{(i+j-1)\Delta _n}^{(i+j)\Delta _n}\alpha _s ds +\int _{(i+j-1)\Delta _n}^{(i+j)\Delta _n}\sigma _{s} d{L_s})\right) \right) \). Then, applying the trigonometric identity \(\cos (x)-\cos (y)=-2\sin (\frac{1}{2}(x+y))\sin (\frac{1}{2}(x-y))\) and a second-order Taylor expansion separately, we have

$$\begin{aligned}&\sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(3)}\\= & {} \sum _{i=0}^{n-k_n+1}\left[ -2\Delta _n\sin \left( \frac{1}{2}\frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\right. \right. \\&~~\left. \cdot \left( 2\int _{t_{i+j-1}^n}^{t_{i+j}^n}\alpha _sds+2\int _{t_{i+j-1}^n}^{t_{i+j}^n}\sigma _{s}dL_s+\Delta _{i+j}^nY\right) \right) \\&~~\cdot \sin \left( \frac{1}{2}\frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\Delta _{i+j}^nY\right) \\&-\Delta _n\sin \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n \sigma _{t_i^n} \Delta _{i+j}^nL \right) \\&~~\cdot \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\alpha _sds\\&-\Delta _n\sin \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n \sigma _{t_i^n} \Delta _{i+j}^nL \right) \\&~~\cdot \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}(\sigma _s-\sigma _{t_i^n})dL_s\\&-\Delta _n\cos (\tilde{\chi })\left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\left( \sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\alpha _sds\right. \right. \\&\left. \left. \left. +\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}(\sigma _s-\sigma _{t_i^n})dL_s\right) \right) ^2\right] \\:= & {} \sum _{i=0}^{n-k_n+1}\left[ \xi _{i,u}^{(3)}(1)+\xi _{i,u}^{(3)}(2)+\xi _{i,u}^{(3)}(3)+\xi _{i,u}^{(3)}(4)\right] , \end{aligned}$$

where \(\tilde{\chi }\) denotes some value between \((2u)^{\frac{1}{\beta }}(\phi _{\beta }^n\Delta _nk_n)^{-\frac{1}{\beta }} \sum _{j=1}^{k_n-1}g_j^n\sigma _{t_i^n}\Delta _{i+j}^nL\) and \((2u)^{\frac{1}{\beta }}(\phi _{\beta }^n\Delta _nk_n)^{-\frac{1}{\beta }} \sum _{j=1}^{k_n-1}g_j^n\cdot (\int _{t_{i+j-1}^n}^{t_{i+j}^n}\alpha _s\mathrm {d}s+\int _{t_{i+j-1}^n}^{t_{i+j}^n}\sigma _{s-} d{L_s})\). Using the basic inequality \(|\sin (x)|\le |x|\) and Assumption 2, we have

$$\begin{aligned}&E\left| \xi _{i,u}^{(3)}(1)\right| \le C\Delta _nE\left| (2u)^{\frac{1}{\beta }}\left( \phi _\beta ^n\Delta _nk_n\right) ^{-\frac{1}{\beta }}\sum _{j=1}^{k_n-1}g_j^n\Delta _{i+j}^nY\right| \nonumber \\&\quad \le C\Delta _n^{1+\frac{1}{2\beta '}} k_n^{1-\frac{1}{\beta }}. \end{aligned}$$
(12)

We next divide \(\xi _{i,u}^{(3)}(2)\) into two parts,

$$\begin{aligned}&\xi _{i,u}^{(3)}(2)\\= & {} -\frac{\Delta _n(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sin \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n \sigma _{t_i^n} \Delta _{i+j}^nL \right) \\&~~\cdot \left( \sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\alpha _{i\Delta _n}ds\right) \\&-\frac{\Delta _n(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sin \left( \frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n \sigma _{t_i^n} \Delta _{i+j}^nL \right) \\&~~\cdot \left( \sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}(\alpha _s-\alpha _{i\Delta _n})ds\right) \\:= & {} \xi _{i,u}^{(3)}(2,a)+\xi _{i,u}^{(3)}(2,b). \end{aligned}$$

Noting that \(E_i^n(\Delta _i^nL)=0\), and using Assumption 3 and the Cauchy–Schwarz inequality, we have

$$\begin{aligned}&E_i^n\left[ \big |\xi _{i,u}^{(3)}(2,a)\big |^2\right] \le C(\Delta _nk_n)^{3-\frac{2}{\beta }},\\&E_i^n\left[ \big |\xi _{i,u}^{(3)}(2,b)\big |\right] \le C(\Delta _nk_n)^{\frac{3}{2}-\frac{1}{\beta }}. \end{aligned}$$

Then we can obtain

$$\begin{aligned} E_i^n\left[ \left| \sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(3)}(2)\right| \right] \le C(\Delta _nk_n)^{\frac{3}{2}-\frac{1}{\beta }}. \end{aligned}$$
(13)

For \(\xi _{i,u}^{(3)}(3)\), we first split

$$\begin{aligned} \sigma _s-\sigma _{i\Delta _n}=\sigma _{1s}+\sigma _{2s}, \end{aligned}$$

where

$$\begin{aligned}&\sigma _{1s}=\int _{t_i^n}^s\tilde{\alpha }_udu+\int _{t_i^n}^s(\tilde{\sigma }_u-\tilde{\sigma }_{i\Delta _n})dW_u,\\&\sigma _{2s}=\int _{t_i^n}^s\tilde{\sigma }_{i\Delta _n}dW_u. \end{aligned}$$

Then we have the inequality

$$\begin{aligned}&E_i^n\left| \xi _{i,u}^{(3)}(3)\right| \\&\quad \le CE_i^n\left| \frac{\Delta _n}{(\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|\ge \Delta _n^{\frac{1}{\beta }}}\sigma _{1s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\qquad +\,CE_i^n\left| \frac{\Delta _n}{(\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|<\Delta _n^{\frac{1}{\beta }}}\sigma _{1s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\qquad +\,CE_i^n\left| \frac{\Delta _n}{(\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|\ge \Delta _n^{\frac{1}{\beta }}}\sigma _{2s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\qquad +\,CE_i^n\left| \frac{\Delta _n}{(\Delta _nk_n)^{\frac{1}{\beta }}}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|<\Delta _n^{\frac{1}{\beta }}}\sigma _{2s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| . \end{aligned}$$

Because \(L_s\) is a martingale, using the Cauchy–Schwarz inequality and Assumption 3, we get

$$\begin{aligned}&E_i^n\left| \Delta _n^{1-\frac{1}{\beta }}k_n^{-\frac{1}{\beta }}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|\ge \Delta _n^{\frac{1}{\beta }}}\sigma _{1s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\quad \le C\Delta _n^2k_n^{\frac{3}{2}-\frac{1}{\beta }}. \end{aligned}$$

For the second term, the Cauchy–Schwarz inequality, the Burkholder–Davis–Gundy inequality and Assumption 3 imply

$$\begin{aligned}&E_i^n\left| \Delta _n^{1-\frac{1}{\beta }}k_n^{-\frac{1}{\beta }}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|<\Delta _n^{\frac{1}{\beta }}}\sigma _{1s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\quad \le \left[ C\sum _{j=1}^{k_n-1}(g_j^n)^2E_i^n\left\langle \int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|<\Delta _n^{\frac{1}{\beta }}}\sigma _{1s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right\rangle \right. \\&\qquad \left. \cdot \Delta _n^{2-\frac{2}{\beta }}k_n^{-\frac{2}{\beta }}\right] ^\frac{1}{2} \le C\Delta _n^2k_n^{\frac{3}{2}-\frac{1}{\beta }}. \end{aligned}$$

Similarly, we have

$$\begin{aligned}&E_i^n\left| \Delta _n^{1-\frac{1}{\beta }}k_n^{-\frac{1}{\beta }}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|\ge \Delta _n^{\frac{1}{\beta }}}\sigma _{2s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\quad \le C\Delta _n^{\frac{3}{2}}k_n^{1-\frac{1}{\beta }},\\&E_i^n\left| \Delta _n^{1-\frac{1}{\beta }}k_n^{-\frac{1}{\beta }}\sum _{j=1}^{k_n-1}g_j^n\int _{t_{i+j-1}^n}^{t_{i+j}^n}\int _{|x|<\Delta _n^{\frac{1}{\beta }}}\sigma _{2s}x\tilde{\mu }(\mathrm {d}s,\mathrm {d}x)\right| \\&\quad \le C\Delta _n^{\frac{3}{2}}k_n^{1-\frac{1}{\beta }}. \end{aligned}$$

The martingale property of \(L_s\) yields

$$\begin{aligned} E_i^n\left| \sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(3)}(3)\right| \le C\Delta _nk_n^{\frac{3}{2}-\frac{1}{\beta }}. \end{aligned}$$
(14)

By Assumption 3, we obtain

$$\begin{aligned} E_i^n\left| \xi _{i,u}^{(3)}(4)\right| \le C\Delta _n^{3-\frac{2}{\beta }}k_n^{2-\frac{2}{\beta }}. \end{aligned}$$
(15)

For the fourth term \(Re_u\), it can be easily shown that

$$\begin{aligned} E|\hbox {Re}_u|\le Ck_n\Delta _n \end{aligned}$$
(16)

uniformly in u. Combining (7) with (8)–(16) and the bounds on \(R_i^{(1)}\) and \(R_i^{(2)}\), we have

$$\begin{aligned}&E|V_T(U,\Delta _n, \beta , u, g) - \int _{0}^{T}e^{-u|\sigma _s|^\beta } ds|\nonumber \\\le & {} C\left( (\Delta _nk_n)^{\frac{1}{2}}+(\Delta _nk_n)^{2-\frac{2}{\beta }}+\frac{1}{(\Delta _nk_n)^{\frac{2}{\beta }}{k_n}}\right) , \end{aligned}$$
(17)

which gives the desired result. \(\square \)

Proof of Theorem 2

First, we separately deal with the main term and the noise term of

$$\begin{aligned}&\frac{1}{\sqrt{\Delta _nk_n}}\left( V(U,\Delta _n,u,g)-\int _0^Te^{-u|\sigma _s|^{\beta }}\mathrm {d}s\right) \\&\quad =\frac{1}{\sqrt{\Delta _nk_n}}\left( \sum _{i=0}^{n-k_n+1}(\xi _{i,u}^{(1)}+\xi _{i,u}^{(2)}+\xi _{i,u}^{(3)})+Re(u)\right. \\&\qquad \left. +\sum _{i=0}^{n-k_n+1}\Delta _n(R_i^{(1)}+R_i^{(2)})\right) . \end{aligned}$$

To bound the noise term, we have

$$\begin{aligned}&E\left[ \frac{1}{\sqrt{\Delta _nk_n}}\left| \sum _{i=0}^{n-k_n+1}\Delta _n\left( R_i^{(1)}+R_i^{(2)}\right) \right| \right] \\&\quad \le Ck_n^{-\frac{1}{\beta }-\frac{1}{2}}\Delta _n^{-\frac{1}{\beta }}+Ck_n^{-\frac{2}{\beta }-\frac{3}{2}}\Delta _n^{-\frac{2}{\beta }-\frac{1}{2}}. \end{aligned}$$

By selecting \(k_n=O\left( n^{\frac{2}{2+\beta }+\tau }\right) , ~ \tau \in \left( \frac{\beta ^2}{(\beta +2)(3\beta +4)},\frac{\beta }{\beta +2}\right) \), we can show that the noise term converges in probability to zero.

For the second term, invoking Theorem 1 and (9)–(11), we get

$$\begin{aligned} E\left( \frac{1}{\sqrt{\Delta _nk_n}}\sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(2)}\right) ^2\le C\Delta _n^{-1+\beta }k_n^{-1}\rightarrow 0. \end{aligned}$$

For the third term, from (12)–(14), we have

$$\begin{aligned}&E\left| \frac{1}{\sqrt{\Delta _nk_n}}\sum _{i=0}^{n-k_n+1}\left( \xi _{i,u}^{(3)}(1)+\xi _{i,u}^{(3)}(2)+\xi _{i,u}^{(3)}(3)\right) \right| \\&\quad \le C\Delta _n^{-\frac{1}{2}+\frac{1}{2\beta '}}k_n^{\frac{1}{2}-\frac{1}{\beta }}+C(\Delta _nk_n)^{1-\frac{1}{\beta }}+C(\Delta _nk_n)^{\frac{1}{2}}k_n^{\frac{1}{2}-\frac{1}{\beta }} \end{aligned}$$

which converges to 0 as \(n\rightarrow \infty \). It follows from (15) that

$$\begin{aligned} E\left| \frac{1}{\sqrt{\Delta _nk_n}}\sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(3)}(4)\right| \le C(\Delta _nk_n)^{\frac{3}{2}-\frac{2}{\beta }}. \end{aligned}$$

For this term to converge to 0 as \(n\rightarrow \infty \), we need \(\beta >\frac{4}{3}\). Moreover, as \(n\rightarrow \infty \),

$$\begin{aligned} E\left| \frac{1}{\sqrt{\Delta _nk_n}}\hbox {Re}(u)\right| \le C\sqrt{\Delta _nk_n} \rightarrow 0 . \end{aligned}$$

So under the conditions of Theorem 2,

$$\begin{aligned}&E\Bigg |\frac{1}{\sqrt{\Delta _nk_n}}\Bigg [\sum _{i=0}^{n-k_n+1}\left( \xi _{i,u}^{(2)}+\xi _{i,u}^{(3)}+\Delta _nR_i^{(1)}+\Delta _nR_i^{(2)}\right) \nonumber \\&\quad +\hbox {Re}(u)\Bigg ]\Bigg |\rightarrow 0,\quad n\rightarrow \infty . \end{aligned}$$

To prove the central limit theorem, it remains to show that, as \(\Delta _n \rightarrow 0\),

$$\begin{aligned} \frac{1}{\sqrt{\Delta _nk_n}}\sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(1)}{\mathop {\longrightarrow }\limits ^{\mathcal {L}_s}} \sqrt{\int _0^T \int _0^1G_\beta \left( u^{\frac{1}{\beta }}|\sigma _s|\right) _r \mathrm{d}r \mathrm{d}s}\times \mathcal{N}. \end{aligned}$$

To this end, we apply the “big blocks, small blocks” technique introduced in Jacod et al. (2009). The limiting distribution of the proposed estimator stems from the “big” blocks, whereas the “small” blocks, which are eventually shown to be asymptotically negligible, serve only to ensure conditional independence between the “big” blocks. For a given integer p, we let \(i_n(p)=\left[ \frac{[T/\Delta _n]-k_n+1}{(p+1)k_n} \right] - 1\) and, for \(i=0,\ldots , i_n(p)\), write \(a_i(p)=i(p+1)k_n+1\) and \(b_i(p)=i(p+1)k_n +pk_n\); we denote the i-th “big” block by \(\mathrm {A}_i=\{k: a_i(p)\le k \le b_i(p), k\in N^{+}\}\) and the i-th “small” block by \(\mathrm {B}_i =\{k: b_i(p)< k < a_{i+1}(p), k\in N^{+}\} \). We define

$$\begin{aligned}&\zeta _i(p,1) = \sum _{j\in \mathrm {A}_i} \xi _{j,u}^{(1)},\quad \zeta '_i(p,1) = E_{a_i(p)-1}^n\left( \zeta _i(p,1) \right) , \\&\zeta _i(p,2)=\sum _{j\in \mathrm {B}_i} \xi _{j,u}^{(1)},\quad \zeta '_i(p,2) = E_{a_i(p)-1}^n\left( \zeta _i(p,2) \right) , \end{aligned}$$

where \(\xi _{j,u}^{(1)}\) is defined in (7). Then, we denote

$$\begin{aligned}&M(p) = \sum _{i=0}^{i_n(p)} \left( \zeta _i(p,1)-\zeta '_i(p,1)\right) ,\quad M'(p) =\sum _{i=0}^{i_n(p)} \zeta '_i(p,1), \\&N(p) = \sum _{i=0}^{i_n(p)} \left( \zeta _i(p,2) - \zeta '_i(p,2) \right) ,\quad N'(p) = \sum _{i=0}^{i_n(p)}\zeta '_i(p,2), \\&C(p) = \sum _{i=i_n(p)+1}^{[T/\Delta _n]-k_n+1} \xi _{i,u}^{(1)} . \end{aligned}$$
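As an aside, the block construction above can be made explicit in a few lines of code; the following sketch is purely illustrative (the helper name and its arguments are our own) and simply enumerates the index sets \(\mathrm {A}_i\) and \(\mathrm {B}_i\):

```python
import numpy as np

def big_small_blocks(n_obs, k_n, p):
    """Enumerate the 'big' blocks A_i and 'small' blocks B_i (illustrative only).

    n_obs : the integer [T / Delta_n]
    k_n   : pre-averaging window length
    p     : number of pre-averaging windows per big block
    """
    i_n = (n_obs - k_n + 1) // ((p + 1) * k_n) - 1            # i_n(p)
    blocks = []
    for i in range(i_n + 1):
        a_i = i * (p + 1) * k_n + 1                           # a_i(p)
        b_i = i * (p + 1) * k_n + p * k_n                     # b_i(p)
        big = np.arange(a_i, b_i + 1)                         # A_i: p * k_n indices
        small = np.arange(b_i + 1, (i + 1) * (p + 1) * k_n + 1)  # B_i: k_n indices
        blocks.append((big, small))
    return blocks
```

Each big block contributes \(pk_n\) summands and each small block only \(k_n\), so for large \(p\) the small blocks carry a vanishing fraction of the terms while still separating consecutive big blocks by a full pre-averaging window.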

Since the convergence rate of \(V\) is \((\Delta _nk_n)^{\frac{1}{2}}\), we use the following decomposition:

$$\begin{aligned}&\frac{1}{\sqrt{\Delta _n k_n}} \sum _{i=0}^{n-k_n+1}\xi _{i,u}^{(1)} \\&\quad = \frac{1}{\sqrt{\Delta _n k_n}}\left[ M(p) + M'(p) + N(p) + N'(p) + C(p)\right] .\nonumber \end{aligned}$$
(18)

Following steps similar to those in Jacod et al. (2009), we obtain that for any \(\epsilon >0\),

$$\begin{aligned}&P\left( |M'(p)| + |N(p)| + | N'(p)| + |C(p)| >(\Delta _n k_n)^{\frac{1}{2}} \epsilon \right) \\&\quad \rightarrow 0. \end{aligned}$$

Hence, it suffices to prove

$$\begin{aligned} (\Delta _n k_n)^{-\frac{1}{2}} M(p) {\mathop {\longrightarrow }\limits ^{\mathcal {L}_s}} \sqrt{\int _0^T\int _0^1G_{\beta }\left( u^{\frac{1}{\beta }}|\sigma _s|\right) _r \mathrm{d}r\mathrm{d}s}\times \mathcal {N}. \nonumber \\ \end{aligned}$$
(19)

To this end, we apply the martingale central limit theorem presented in Theorem IX.7.28 of Jacod and Shiryayev (2003), for which we need to verify the following properties:

$$\begin{aligned}&\frac{1}{\Delta _nk_n}\sum _{i=0}^{i_n(p)} E_{a_i(p)-1}^n\left( \left( \zeta _i(p,1) - \zeta '_{i}(p,1)\right) ^2 \right) \nonumber \\&\quad {\mathop {\longrightarrow }\limits ^{P}} \int _{0}^{T}\int _{0}^{1}G_{\beta }\left( {u}^{\frac{1}{\beta }}|\sigma _{s}|\right) _r \mathrm{d}r \mathrm{d}s, \end{aligned}$$
(20)
$$\begin{aligned}&(\Delta _nk_n)^{-2}\sum _{i=0}^{i_n(p)} E_{a_i(p)-1}^n\left( \zeta _i^4(p,1) \right) {\mathop {\longrightarrow }\limits ^{P}} 0, \end{aligned}$$
(21)
$$\begin{aligned}&(\Delta _nk_n)^{-\frac{1}{2}}\sum _{i=0}^{i_n(p)}E_{a_i(p)-1}^n\left( \zeta _i(p,1)\Delta H(p)_i\right) {\mathop {\longrightarrow }\limits ^{P}} 0, \end{aligned}$$
(22)

where \(\Delta H(p)_i=H_{b_i(p)\Delta _n} - H_{a_i(p)\Delta _n}\) and \(H\) is a bounded martingale defined on the original probability space. We remark that in deriving formula (20), we first keep \(p\) fixed and then let it tend to infinity.

From (8), we get \({\zeta '_i}(p,1) = 0\). From the self-similarity of the stable process, we know that \(\Delta _i^nL=\Delta _n^{\frac{1}{\beta }}S_1\) in distribution, where \(S_1\) is a stable random variable with characteristic function \(E(e^{iuS_1})=e^{-|u|^\beta /2}\). Applying these arguments and using the elementary identities \(\cos (a+b)=\cos (a)\cos (b)-\sin (a)\sin (b)\), \(\sin (a)\cos (b)=\left( \sin (a+b)+\sin (a-b)\right) /2\) and \(\cos (a)=(e^{ia}+e^{-ia})/2\), we obtain

$$\begin{aligned}&\frac{1}{\Delta _nk_n}\sum _{i=0}^{i_n(p)}E_{a_i(p)-1}^n\left( \left( \zeta _i(p,1) - \zeta '_{i}(p,1)\right) ^2 \right) \nonumber \\&\quad =\frac{1}{\Delta _nk_n}\sum _{i=0}^{i_n(p)}E_{a_i(p)-1}^n\left[ \left( \sum _{j\in A_i}\Delta _n\left( \cos (\frac{(2u)^{\frac{1}{\beta }}}{(\phi _{\beta }^n\Delta _nk_n)^{\frac{1}{\beta }}}\cdot \sigma _{t_i^n}\Delta _i^n\overline{L})\right. \right. \right. \nonumber \\&\qquad \left. \left. \left. -e^{-u|\sigma _{t_i^n}|^{\beta }}\right) \right) ^2\right] \nonumber \\&\quad = \frac{\Delta _n}{k_n}\sum _{i=0}^{i_n(p)}\bigg ( \sum _{j=a_i(p)}^{b_i(p)} G_{\beta }({u}^{\frac{1}{\beta }}|\sigma _{j\Delta _n}|)_{0}^n\nonumber \\&\qquad + 2\sum _{j_1=a_i(p)}^{b_i(p)}\sum _{j_2>j_1} G_{\beta }\left( {u}^{\frac{1}{\beta }}|\sigma _{j_1\Delta _n}|\right) _{j_2-j_1}^n + O_p(\Delta _n^{\frac{1}{2}}k_n^{\frac{5}{2}}p) \bigg ) \nonumber \\&\quad =\frac{\Delta _n}{k_n}\sum _{i=0}^{i_n(p)} \sum _{j=a_i(p)}^{b_i(p)}\bigg ( 2\sum _{s=0}^{k_n-1} G_{\beta }({u}^{\frac{1}{\beta }}|\sigma _{j\Delta _n}|)_{s}^n- G_{\beta }({u}^{\frac{1}{\beta }}|\sigma _{j\Delta _n}|)_{0}^n\bigg ) \nonumber \\&\qquad + O_p\left( \frac{p}{p+1}(\Delta _n k_n)^{\frac{1}{2}}\right) \nonumber \\&\quad = 2 \frac{\Delta _n}{k_n} \sum _{j=0}^{[ T/\Delta _n ]-1} \sum _{s=0}^{k_n-1} G_{\beta }({u}^{\frac{1}{\beta }}|\sigma _{j\Delta _n}|)_{s}^n+O_p((p+1)\Delta _n) \nonumber \\&\qquad +O_p(\frac{p}{(p+1)k_n})+ O_p\left( \frac{p}{p+1}(\Delta _nk_n)^{\frac{1}{2}}\right) + O_p\left( \frac{1}{p+1}\right) ,\nonumber \\ \end{aligned}$$
(23)

where \(\phi _\beta ^n=k_n^{-1}\sum _{j=0}^{k_n-1}(g(\frac{j}{k_n}))^\beta \) and

$$\begin{aligned}&G_{\beta }(x)_{s}^n \\&\quad = \frac{1}{2}\left[ e^{-\frac{x^\beta }{\phi _\beta ^nk_n}\left( \sum _{l=1}^{s}(g_l^n)^\beta +\sum _{l=k_n-s}^{k_n-1}(g_l^n)^\beta \right) }\right. \nonumber \\&\qquad \cdot \left( e^{-\frac{x^\beta }{\phi _\beta ^nk_n}\sum _{l=s+1}^{k_n-1}(g_l^n+g_{l-s}^n)^\beta }+e^{-\frac{x^\beta }{\phi _\beta ^nk_n}\sum _{l=s+1}^{k_n-1}(g_l^n-g_{l-s}^n)^\beta }\right) \nonumber \\&\qquad \left. -2e^{-2x^\beta }\right] . \end{aligned}$$
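For numerical work, \(G_{\beta }(x)_s^n\) and the limiting variance in (20) can be approximated directly. The sketch below is only an illustration (the function names and the triangular weight are our own choices, and \((g_l^n-g_{l-s}^n)^{\beta }\) is interpreted as \(|g_l^n-g_{l-s}^n|^{\beta }\)):

```python
import numpy as np

def G_beta_n(x, s, beta, k_n, g=lambda t: np.minimum(t, 1 - t)):
    """Evaluate G_beta(x)_s^n for the weight g and pre-averaging window k_n."""
    gl = g(np.arange(k_n) / k_n)                       # g_l^n, l = 0, ..., k_n - 1
    phi = np.sum(gl ** beta) / k_n                     # phi_beta^n
    c = x ** beta / (phi * k_n)
    head = np.sum(gl[1:s + 1] ** beta) + np.sum(gl[k_n - s:k_n] ** beta)
    l = np.arange(s + 1, k_n)
    plus = np.sum((gl[l] + gl[l - s]) ** beta)
    minus = np.sum(np.abs(gl[l] - gl[l - s]) ** beta)  # |g_l - g_{l-s}|^beta
    return 0.5 * (np.exp(-c * head) * (np.exp(-c * plus) + np.exp(-c * minus))
                  - 2.0 * np.exp(-2.0 * x ** beta))

def avar_riemann(sigma, delta_n, beta, u, k_n):
    """Riemann-sum proxy for int_0^T int_0^1 G_beta(u^{1/beta}|sigma_s|)_r dr ds.

    sigma : volatility path sampled on the Delta_n grid; the average over
            s = 0, ..., k_n - 1 stands in for the integral over r in (0, 1).
    """
    x = u ** (1.0 / beta) * np.abs(np.asarray(sigma, dtype=float))
    return delta_n * sum(np.mean([G_beta_n(xi, s, beta, k_n) for s in range(k_n)])
                         for xi in x)
```

Setting \(s=0\) in this sketch recovers \(G_\beta (x)_0^n\) defined after (8); combined with an estimate of the volatility path, such a proxy gives a plug-in approximation of the asymptotic variance appearing in Theorem 2.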

On the other hand, since \(|G'_{\beta }(x)_r| \le C\) for any \(r\in (0,1)\), applying a Taylor expansion,

$$\begin{aligned}&E\left( \sum _{i=0}^{[T/\Delta _n] -1} \sum _{s=1}^{k_n} \int _{t_i^n}^{t_{i+1}^n} \int _{(s-1)/k_n}^{s/k_n} \Big | G_{\beta }\left( {u}^{\frac{1}{\beta }}|\sigma _{t}|\right) _r \right. \nonumber \\&\left. \quad - G_{\beta }\left( {u}^{\frac{1}{\beta }}|\sigma _{t_i^n}|\right) _s^n\Big |\mathrm{d}r\mathrm{d}t \right) \le C\left( \sqrt{\Delta _n} + k_n^{-1}\right) \rightarrow 0. \end{aligned}$$
(24)

Collecting (23) and (24) yields property (20). Following the arguments used in deriving (23) and (24), and applying the steps in the proof of Theorem 1, we obtain (21). It remains to show (22). When \(H\) is a discontinuous martingale, using the properties of the predictable quadratic variation, the quadratic variation and the Itô isometry, we can prove that \(\sum _{i=1}^{i_n(p)}E_{a_i(p)-1}^n\left( (\Delta _nk_n)^{-\frac{1}{2}}\zeta _i(p,1) \Delta H(p)_i \right) \) converges in probability to 0; the details are similar to the proof in Todorov and Tauchen (2012c). Since pure jump and continuous martingales are orthogonal, we have \(E_{a_i(p)-1}^n(\zeta _i(p,1)\Delta H(p)_i)=0\) when \(H\) is a continuous martingale. The proof of Theorem 2 is then complete. \(\square \)

About this article

Cite this article

Wang, L., Liu, Z. & Xia, X. Realized Laplace transforms for pure jump semimartingales with presence of microstructure noise. Soft Comput 23, 5739–5752 (2019). https://doi.org/10.1007/s00500-018-3237-3
