
Simulation of Student–Lévy processes using series representations

  • Original Paper
  • Published in Computational Statistics

Abstract

Lévy processes have become very popular in many applications in finance, physics and beyond. The Student–Lévy process is an interesting special case whose increments are heavy-tailed and, for increments of unit length, Student t distributed. Although theoretically available, path simulation techniques for this process are lacking in the literature due to its complicated form. In this paper we address this issue using series representations, based on the inverse Lévy measure method and the rejection method, and prove upper bounds for the mean squared approximation error. In the numerical section we discuss a numerical inversion scheme to find the inverse Lévy measure efficiently, and we extend the existing numerical inverse Lévy measure method to incorporate explosive Lévy tail measures. Monte Carlo studies verify the error bounds and the effectiveness of the simulation routine. As a side result we obtain series representations of the so-called inverse gamma subordinator, which are used to generate paths in this model.


References

  • Asmussen S, Rosiński J (2001) Approximations of small jumps of Lévy processes with a view towards simulation. J Appl Probab 38(2):482–493

  • Barndorff-Nielsen OE, Shephard N (2001) Non-Gaussian Ornstein–Uhlenbeck-based models and some of their uses in financial economics. J R Stat Soc Ser B (Stat Methodol) 63(2):167–241

  • Barndorff-Nielsen O, Halgreen C (1977) Infinite divisibility of the hyperbolic and generalized inverse Gaussian distributions. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 38(4):309–311

  • Blattberg RC, Gonedes NJ (1974) A comparison of the stable and Student distributions as statistical models for stock prices. J Bus 47(2):244–280

  • Bondesson L (1982) On simulation from infinitely divisible distributions. Adv Appl Probab 14(4):855–869

  • Bouchaud J-P, Potters M (2003) Theory of financial risk and derivative pricing: from statistical physics to risk management. Cambridge University Press, New York

  • Cassidy DT (2011) Describing n-day returns with Student’s t-distributions. Phys A 390(15):2794–2802

  • Cassidy DT, Hamp MJ, Ouyed R (2010) Pricing European options with a log Student’s t-distribution: a Gosset formula. Phys A 389(24):5736–5748

  • Cohen S, Rosiński J (2007) Gaussian approximation of multivariate Lévy processes with applications to simulation of tempered stable processes. Bernoulli 13(1):195–210

  • Cufaro Petroni N (2007) Mixtures in nonstable Lévy processes. J Phys A Math Theor 40(10):2227

  • Cufaro Petroni N, De Martino S, De Siena S, Illuminati F (2005) Lévy-Student distributions for halos in accelerator beams. Phys Rev E 72:066502

  • Derflinger G, Hörmann W, Leydold J (2010) Random variate generation by numerical inversion when only the density is known. ACM Trans Model Comput Simul 20(4):18:1–18:25

  • Devroye L (1981) On the computer generation of random variables with a given characteristic function. Comput Math Appl 7(6):547–552

  • Ferguson TS, Klass MJ (1972) A representation of independent increment processes without Gaussian components. Ann Math Stat 43(5):1634–1643

  • Grigelionis B (2012) Student’s t-distribution and related stochastic processes. Springer Briefs in Statistics. Springer, Berlin

  • Grosswald E (1976) The Student t-distribution of any degree of freedom is infinitely divisible. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 36(2):103–109

  • Grothe O, Schmidt R (2010) Scaling of Lévy Student processes. Phys A 389(7):1455–1463

  • Guo B-N, Qi F, Zhao J-L, Luo Q-M (2015) Sharp inequalities for polygamma functions. Math Slov 65(1):103–120

  • Heyde CC, Leonenko NN (2005) Student processes. Adv Appl Probab 37(2):342–365

  • Hilber N, Reich N, Schwab C, Winter C (2009) Numerical methods for Lévy processes. Finance Stoch 13(4):471–500

  • Hörmann W, Leydold J, Derflinger G (2004) Automatic nonuniform random variate generation. Statistics and Computing. Springer, Berlin

  • Hubalek F (2005) On the simulation from the marginal distribution of a Student t and generalized hyperbolic Lévy process. Working paper. https://pdfs.semanticscholar.org/4368/3935c410951d8145211a3d79148151cb07d8.pdf. Accessed 14 May 2018

  • Imai J, Kawai R (2011) On finite truncation of infinite shot noise series representation of tempered stable laws. Phys A 390(23):4411–4425

  • Imai J, Kawai R (2013) Numerical inverse Lévy measure method for infinite shot noise series representation. J Comput Appl Math 253:264–283

  • Massing T (2018) Local asymptotic normality for Student–Lévy processes under high-frequency sampling. Working paper. https://www.oek.wiwi.uni-due.de/fileadmin/fileupload/VWL-OEK/dokumente/LANHrB-Student-Levy.pdf. Accessed 14 May 2018

  • Piessens R, Branders M (1974) A note on the optimal addition of abscissas to quadrature formulas of Gauss and Lobatto type. Math Comput 28(125):135–139

  • Rosiński J (1990) On series representations of infinitely divisible random vectors. Ann Probab 18(1):405–430

  • Rosiński J (2001) Series representations of Lévy processes from the perspective of point processes. In: Barndorff-Nielsen OE, Mikosch T, Resnick SI (eds) Lévy processes: theory and applications. Birkhäuser Boston, Boston, pp 401–415

  • Sato K-I (1999) Lévy processes and infinitely divisible distributions. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge

  • Schafheitlin P (1906) Die Lage der Nullstellen der Besselschen Funktionen zweiter Art. Sitzungsber Berlin Math Gesellschaft 5:82–93

  • Tankov P, Cont R (2015) Financial modelling with jump processes, 2nd edn. Chapman and Hall/CRC Financial Mathematics Series. Taylor & Francis, London

  • Todorov V, Tauchen G (2006) Simulation methods for Lévy-driven continuous-time autoregressive moving average (CARMA) stochastic volatility models. J Bus Econ Stat 24(4):455–469

  • Watson G (1995) A treatise on the theory of Bessel functions. Cambridge Mathematical Library. Cambridge University Press, New York


Acknowledgements

The author is grateful to Christoph Hanck and Yannick Hoga as well as the anonymous reviewers for valuable comments which helped to substantially improve this paper and Friedrich Hubalek for graciously sharing his results. I thank Martin Arnold for excellent research assistance. Last but not least, I thank Theresa Kemper for carefully reading my work. Full responsibility is taken for all remaining errors.

Author information

Correspondence to Till Massing.

A Appendix

In this appendix we prove the statements of Sect. 3.

Proof of Lemma 2

Recall that

$$\begin{aligned} Q(\mathrm {d}u)=u^{-1}\int _0^{\infty }e^{-su}\nu \left[ \pi ^2\nu s\left( J_{\frac{\nu }{2}}^2(\sqrt{2\nu s})+Y_{\frac{\nu }{2}}^2(\sqrt{2\nu s})\right) \right] ^{-1} \mathrm {d}s\mathrm {d}u. \end{aligned}$$

We use the inequality

$$\begin{aligned} J_{\nu }^2(x)+Y_{\nu }^2(x)>\frac{2}{\pi x}, \end{aligned}$$
(32)

first derived by Schafheitlin (1906); an elegant proof can be found in Watson (1995). Hence,

$$\begin{aligned} \nu g_{\frac{\nu }{2}}(2\nu s)=\nu \left[ \pi ^2\nu s\left( J_{\frac{\nu }{2}}^2(\sqrt{2\nu s})+Y_{\frac{\nu }{2}}^2(\sqrt{2\nu s})\right) \right] ^{-1}<\sqrt{\frac{\nu }{2\pi ^2s}}. \end{aligned}$$

By standard integration,

$$\begin{aligned} Q(\mathrm {d}u)<u^{-1}\int _0^{\infty }e^{-su}\sqrt{\frac{\nu }{2\pi ^2s}}\mathrm {d}s\mathrm {d}u=\sqrt{\frac{\nu }{2\pi u^3}}\mathrm {d}u \end{aligned}$$

such that (16) follows. To derive (17), the tail mass function is bounded by

$$\begin{aligned} Q([z,\infty ))=\int _z^{\infty }Q(\mathrm {d}u)<\int _{z}^{\infty }\sqrt{\frac{\nu }{2\pi u^3}}\mathrm {d}u=\sqrt{\frac{2\nu }{\pi z}}. \end{aligned}$$

Since \(Q([z,\infty ))\) is strictly decreasing and continuous, \(Q^{\leftarrow }(y)\) is the true inverse and

$$\begin{aligned} Q^{\leftarrow }(y)&=\mathrm {inf}\{z>0:Q([z,\infty ))<y\}\\&<\mathrm {inf}\{z>0:\sqrt{\frac{2\nu }{\pi z}}<y\}\\&=\frac{2\nu }{\pi y^2}, \end{aligned}$$

which completes the proof. \(\square \)
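As a quick numerical sanity check (an illustration, not part of the proof), the tail bound \(\sqrt{2\nu /(\pi z)}\) from (17) and the inverse bound \(q(y)=2\nu /(\pi y^2)\) are exact inverses of each other:

```python
import math

def tail_bound(z, nu):
    # Upper bound on Q([z, infinity)) from (17): sqrt(2*nu/(pi*z))
    return math.sqrt(2.0 * nu / (math.pi * z))

def inverse_bound(y, nu):
    # Upper bound on the tail inverse Q^{<-}(y): q(y) = 2*nu/(pi*y^2)
    return 2.0 * nu / (math.pi * y ** 2)

nu = 3.0
for y in [0.1, 1.0, 5.0]:
    z = inverse_bound(y, nu)
    # evaluating the tail bound at q(y) recovers y, confirming q is its inverse
    assert abs(tail_bound(z, nu) - y) < 1e-12
```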

Proof of Theorem 2

Note that

$$\begin{aligned} Y_t-Y_t^{(n)}&=\sum _{i=1}^{\infty }Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}-\sum _{i=1}^nQ^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}\\&=\sum _{i=n+1}^{\infty }Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}. \end{aligned}$$

Denote by \(q(y):=\frac{2\nu }{\pi y^2}\) the bound for \(Q^{\leftarrow }\) derived in Lemma 2. We start by proving (19). Taking expectations, we obtain

$$\begin{aligned} E\left[ \sum _{i=n+1}^{\infty }Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}\right]&=\sum _{i=n+1}^{\infty }E\left[ Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right] E\left[ {\mathbb {1}}_{\{U_i\le t\}}\right] \nonumber \\&< \frac{t}{T}\sum _{i=n+1}^{\infty }E\left[ q\left( \frac{\Gamma _i}{T}\right) \right] , \end{aligned}$$
(33)

where we have used the monotonicity of the expected value. Since the \(\Gamma _i\)s are \(\Gamma (i,1)\) distributed (with density function denoted by \(\gamma _i(x)\)), (33) is equal to

$$\begin{aligned} \frac{t}{T}\sum _{i=n+1}^{\infty }\int _0^{\infty }q(x/T)\gamma _i(x)\mathrm {d}x&=\frac{t}{T}\sum _{i=n+1}^{\infty }\frac{2\nu }{\pi }\frac{T^2}{(i-1)(i-2)}\nonumber \\&=\frac{2\nu }{\pi }\frac{T}{n-1}t. \end{aligned}$$
(34)
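The telescoping step behind (34), \(\sum _{i=n+1}^{\infty }\frac{1}{(i-1)(i-2)}=\frac{1}{n-1}\), can be checked numerically (a sanity check, not part of the proof):

```python
# Numerical check of the telescoping sum used in (34):
#   sum_{i=n+1}^{infty} 1/((i-1)(i-2)) = 1/(n-1),
# since 1/((i-1)(i-2)) = 1/(i-2) - 1/(i-1).
def tail_sum(n, terms=200_000):
    return sum(1.0 / ((i - 1) * (i - 2)) for i in range(n + 1, n + 1 + terms))

for n in [3, 10, 50]:
    # the truncated sum differs from 1/(n-1) only by the remaining tail ~ 1/terms
    assert abs(tail_sum(n) - 1.0 / (n - 1)) < 1e-4
```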

It remains to prove (20). Using the monotone convergence theorem,

$$\begin{aligned}&E\left[ \left( \sum _{i=n+1}^{\infty }Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}\right) ^2\right] \\&\quad =\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }E\left[ Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}Q^{\leftarrow }\left( \frac{\Gamma _j}{T}\right) {\mathbb {1}}_{\{U_j\le t\}}\right] , \end{aligned}$$

since \(Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}\ge 0\) for all i. Next, by the Cauchy–Schwarz inequality

$$\begin{aligned}&\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }E\left[ Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}Q^{\leftarrow }\left( \frac{\Gamma _j}{T}\right) {\mathbb {1}}_{\{U_j\le t\}}\right] \nonumber \\&\quad \le \sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }\sqrt{E\left[ Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) ^2{\mathbb {1}}_{\{U_i\le t\}}\right] E\left[ Q^{\leftarrow }\left( \frac{\Gamma _j}{T}\right) ^2{\mathbb {1}}_{\{U_j\le t\}}\right] }\nonumber \\&\quad <\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }\sqrt{\frac{t^2}{T^2}E\left[ q\left( \frac{\Gamma _i}{T}\right) ^2\right] E\left[ q\left( \frac{\Gamma _j}{T}\right) ^2\right] }\nonumber \\&\quad =\frac{t}{T}\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }\sqrt{\int _0^{\infty }q(x/T)^2\gamma _i(x)\mathrm {d}x \times \int _0^{\infty }q(x/T)^2\gamma _j(x)\mathrm {d}x}\nonumber \\&\quad =\frac{t}{T}\frac{4\nu ^2}{\pi ^2}\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }\sqrt{\frac{T^4}{(i-1)(i-2)(i-3)(i-4)}\frac{T^4}{(j-1)(j-2)(j-3)(j-4)}}. \end{aligned}$$
(35)

Since \(i,j\ge n+1\ge 5\), we can bound (35) using \((i-1)(i-2)(i-3)(i-4)\ge (i-4)^4\) by

$$\begin{aligned} \frac{t}{T}\frac{4\nu ^2}{\pi ^2}\left( \sum _{i=n+1}^{\infty }\frac{T^2}{(i-4)^2}\right) ^2=\frac{4\nu ^2T^3}{\pi ^2}t\psi '(n-3)^2, \end{aligned}$$
(36)

where \(\psi '(x)\) denotes the first derivative of the digamma function \(\psi (x):=\frac{\Gamma '(x)}{\Gamma (x)}\), i.e., the trigamma function (the polygamma function of order 1). Guo et al. (2015) provide sharp bounds for polygamma functions; the relevant inequality for \(\psi '(x)\) is

$$\begin{aligned} |\psi '(x)|<\frac{1}{x+\frac{1}{2}}+\frac{1}{x^2}. \end{aligned}$$
(37)
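Inequality (37) can be verified numerically from the series representation \(\psi '(x)=\sum _{k=0}^{\infty }(x+k)^{-2}\) (a sanity check, not part of the proof):

```python
def trigamma(x, terms=100_000):
    # psi'(x) = sum_{k=0}^infty 1/(x+k)^2, truncated with an
    # Euler-Maclaurin tail correction 1/(x+K) + 1/(2(x+K)^2)
    s = sum(1.0 / (x + k) ** 2 for k in range(terms))
    return s + 1.0 / (x + terms) + 1.0 / (2.0 * (x + terms) ** 2)

# The polygamma bound (37): psi'(x) < 1/(x + 1/2) + 1/x^2
for x in [0.5, 1.0, 2.0, 10.0, 30.5]:
    assert trigamma(x) < 1.0 / (x + 0.5) + 1.0 / x ** 2
```

The bound is remarkably tight: at \(x=10\) the two sides differ by less than \(10^{-4}\).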

Applying (37) to (36) we obtain the bound

$$\begin{aligned} E\left[ \left( \sum _{i=n+1}^{\infty }Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}\right) ^2\right] <\frac{4\nu ^2T^3}{\pi ^2}t\left( \frac{1}{n-2.5}+\frac{1}{(n-3)^2}\right) ^2. \end{aligned}$$
(38)

\(\square \)
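The truncated series \(Y_t^{(n)}\) analyzed above can be sketched in code. Since \(Q^{\leftarrow }\) is not available in closed form, the sketch below substitutes the upper bound \(q(y)=2\nu /(\pi y^2)\) from Lemma 2 for \(Q^{\leftarrow }\); this is a hypothetical stand-in for illustration only, not the actual simulation routine of the paper:

```python
import math
import random

def truncated_series_path(n, T, t_grid, nu, seed=0):
    """Sketch of a truncated shot-noise series on [0, T].

    The jump sizes use q(Gamma_i/T) = 2*nu/(pi*(Gamma_i/T)^2) as an
    illustrative stand-in for Q^{<-}(Gamma_i/T)."""
    rng = random.Random(seed)
    # Gamma_i: arrival times of a unit-rate Poisson process
    gammas, g = [], 0.0
    for _ in range(n):
        g += rng.expovariate(1.0)
        gammas.append(g)
    jumps = [2.0 * nu / (math.pi * (g / T) ** 2) for g in gammas]
    locs = [rng.uniform(0.0, T) for _ in range(n)]  # uniform jump locations U_i
    # Y_t^(n) = sum of jump sizes whose location U_i lies in [0, t]
    return [sum(j for j, u in zip(jumps, locs) if u <= t) for t in t_grid]

path = truncated_series_path(n=50, T=1.0, t_grid=[0.0, 0.25, 0.5, 0.75, 1.0], nu=3.0)
assert all(a <= b for a, b in zip(path, path[1:]))  # subordinator paths are nondecreasing
```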

Proof of Theorem 3

Again,

$$\begin{aligned} X_t-X_t^{(n)}=\sum _{i=n+1}^{\infty }\sqrt{Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) }V_i{\mathbb {1}}_{\{U_i\le t\}}. \end{aligned}$$

Note that \(E[V_i]=0\) and that \(V_i\) is independent of \(Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \) and \(U_i\). Hence, by Fubini’s theorem,

$$\begin{aligned} E\left[ \sum _{i=n+1}^{\infty }\sqrt{Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) }V_i{\mathbb {1}}_{\{U_i\le t\}}\right] =0. \end{aligned}$$

Furthermore, analogously to Theorem 2,

$$\begin{aligned}&E\left[ \left( \sum _{i=n+1}^{\infty }\sqrt{Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) }V_i{\mathbb {1}}_{\{U_i\le t\}}\right) ^2\right] \nonumber \\ =&\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }E\left[ \sqrt{Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) }V_i{\mathbb {1}}_{\{U_i\le t\}}\sqrt{Q^{\leftarrow }\left( \frac{\Gamma _j}{T}\right) }V_j{\mathbb {1}}_{\{U_j\le t\}}\right] \nonumber \\ =&\sum _{i=n+1}^{\infty }\sum _{j=n+1}^{\infty }E\left[ \sqrt{Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) }\sqrt{Q^{\leftarrow }\left( \frac{\Gamma _j}{T}\right) }{\mathbb {1}}_{\{U_i\le t\}}{\mathbb {1}}_{\{U_j\le t\}}\right] E\left[ V_iV_j\right] . \end{aligned}$$
(39)

Since \(E[V_iV_j]=\delta _{i,j}\), (39) equals

$$\begin{aligned} \sum _{i=n+1}^{\infty }E\left[ Q^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}_{\{U_i\le t\}}\right] <\frac{t}{T}\sum _{i=n+1}^{\infty }E\left[ q\left( \frac{\Gamma _i}{T}\right) \right] =\frac{2\nu }{\pi }\frac{T}{n-1}t \end{aligned}$$

as in the proof of Theorem 2. \(\square \)

Proof of Corollary 2

Since \(N_{\tau }=\#\{i\in {\mathbb {N}}:\Gamma _i\le \tau \}\) and the \(\Gamma _i\) are unit Poisson arrival times, \(N_{\tau }\sim Poi(\tau )\). We now use the law of iterated expectation.

$$\begin{aligned} E[(X_t-X_t^{\tau })^2|\Gamma _2\le \tau ]&=E\left[ E[(X_t-X_t^{\tau })^2|N_{\tau },\Gamma _2\le \tau ]|\Gamma _2\le \tau \right] \\&<E\left[ \frac{2\nu }{\pi }\frac{Tt}{N_{\tau }-1}\bigg |\Gamma _2\le \tau \right] , \end{aligned}$$

by Theorem 3. The conditional expected value \(E\left[ \frac{1}{N_{\tau }-1}|\Gamma _2\le \tau \right] \) exists, and \(N_{\tau }\) given \(\Gamma _2\le \tau \) follows a truncated Poisson distribution with probability mass function

$$\begin{aligned} P[N_{\tau }=k|N_{\tau }\ge 2]=\frac{e^{-\tau }\tau ^k}{k!(1-P[N_{\tau }\le 1])}=\frac{e^{-\tau }\tau ^k}{k!(1-\Gamma (2,\tau ))},\qquad k\ge 2, \end{aligned}$$

where \(\Gamma (\cdot ,\cdot )\) denotes the upper incomplete gamma function, so that \(\Gamma (2,\tau )=e^{-\tau }(1+\tau )=P[N_{\tau }\le 1]\). Hence, with \(\gamma \) the Euler–Mascheroni constant and \(Ei\) the exponential integral, the conditional expectation is

$$\begin{aligned} E\left[ \frac{1}{N_{\tau }-1}\big |\Gamma _2\le \tau \right]&=\sum _{k=2}^{\infty }\frac{1}{k-1}\frac{e^{-\tau }\tau ^k}{k!(1-\Gamma (2,\tau ))}\\&=\frac{e^{-\tau }}{1-\Gamma (2,\tau )}\left( \tau +1-e^\tau -\tau \gamma +\tau Ei(\tau )-\tau \log (\tau )\right) , \end{aligned}$$

which completes the proof. \(\square \)
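The series-to-closed-form step above, \(\sum _{k=2}^{\infty }\frac{\tau ^k}{(k-1)k!}=\tau +1-e^{\tau }-\tau \gamma +\tau Ei(\tau )-\tau \log \tau \), can be checked numerically. Substituting the series \(Ei(\tau )=\gamma +\log \tau +\sum _{k\ge 1}\frac{\tau ^k}{k\cdot k!}\), the constants \(\gamma \) and \(\log \tau \) cancel, which gives a form computable with the standard library alone (a sanity check, not part of the proof):

```python
import math

def lhs(tau, terms=60):
    # direct series: sum_{k=2}^infty tau^k / ((k-1) * k!)
    return sum(tau ** k / ((k - 1) * math.factorial(k)) for k in range(2, terms))

def rhs(tau, terms=60):
    # tau + 1 - e^tau - tau*gamma + tau*Ei(tau) - tau*log(tau); after
    # inserting Ei's series, gamma and log(tau) cancel exactly:
    ei_series = sum(tau ** k / (k * math.factorial(k)) for k in range(1, terms))
    return tau + 1.0 - math.exp(tau) + tau * ei_series

for tau in [0.5, 1.0, 3.0, 8.0]:
    assert abs(lhs(tau) - rhs(tau)) < 1e-9 * max(1.0, abs(rhs(tau)))
```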

Proof of Corollary 3

We only prove the claim for the deterministic truncation; the random truncation bound follows as in Corollary 2. Note that Lemma 2 implies that \(\frac{\mathrm {d}Q}{\mathrm {d}Q_0}\le 1\) and that the tail inverse \(Q_0^{\leftarrow }(y)=\frac{2\nu }{\pi y^2}\) exists in closed form. Let us start with the mean squared error

$$\begin{aligned}&E\left[ \left( \sum _{i=n+1}^{\infty }\sqrt{Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) }V_i{\mathbb {1}}\left( \left\{ \frac{\mathrm {d}Q}{\mathrm {d}Q_0}\left( Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right) \ge W_i\right\} \right) {\mathbb {1}}_{\{U_i\le t\}}\right) ^2\right] \\&\quad = \sum _{i=n+1}^{\infty }E\left[ Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}\left( \left\{ \frac{\mathrm {d}Q}{\mathrm {d}Q_0}\left( Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right) \ge W_i\right\} \right) {\mathbb {1}}_{\{U_i\le t\}}\right] , \end{aligned}$$

analogously to the proof of Theorem 3, since \(E[V_iV_j]=\delta _{i,j}\). By the law of iterated expectation, this is equal to

$$\begin{aligned}&\sum _{i=n+1}^{\infty }E\left[ E\left[ \left. Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) {\mathbb {1}}\left( \left\{ \frac{\mathrm {d}Q}{\mathrm {d}Q_0}\left( Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right) \ge W_i\right\} \right) \right| \Gamma _i\right] \right] E\left[ {\mathbb {1}}_{\{U_i\le t\}}\right] \\&\quad =\sum _{i=n+1}^{\infty }E\left[ Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) P\left[ \left. \frac{\mathrm {d}Q}{\mathrm {d}Q_0}\left( Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right) \ge W_i\right| \Gamma _i\right] \right] E\left[ {\mathbb {1}}_{\{U_i\le t\}}\right] \\&\quad =\sum _{i=n+1}^{\infty }E\left[ Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \frac{\mathrm {d}Q}{\mathrm {d}Q_0}\left( Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right) \right] E\left[ {\mathbb {1}}_{\{U_i\le t\}}\right] \\&\quad \le \sum _{i=n+1}^{\infty }E\left[ Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right] E\left[ {\mathbb {1}}_{\{U_i\le t\}}\right] , \end{aligned}$$

because \(\frac{\mathrm {d}Q}{\mathrm {d}Q_0}\le 1\). The rest of the proof follows as in the proof of Theorem 3 since \(Q_0^{\leftarrow }\equiv q\). \(\square \)
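The thinning (rejection) step used here, which keeps the \(i\)-th term only when \(\frac{\mathrm {d}Q}{\mathrm {d}Q_0}\left( Q_0^{\leftarrow }\left( \frac{\Gamma _i}{T}\right) \right) \ge W_i\) for uniform \(W_i\), can be sketched generically. The density ratio below is a purely hypothetical placeholder (the actual \(\mathrm {d}Q/\mathrm {d}Q_0\) for the Student case is not reproduced here); only the acceptance mechanism is illustrated:

```python
import math
import random

def thinned_series_terms(n, T, nu, ratio, seed=0):
    """Rejection (thinning) step of the series representation.

    `ratio` plays the role of dQ/dQ0 evaluated at Q0^{<-}(Gamma_i/T)
    and must take values in [0, 1]."""
    rng = random.Random(seed)
    kept, g = [], 0.0
    for _ in range(n):
        g += rng.expovariate(1.0)                    # Gamma_i (Poisson arrivals)
        x = 2.0 * nu / (math.pi * (g / T) ** 2)      # Q0^{<-}(Gamma_i/T) = q(Gamma_i/T)
        if ratio(x) >= rng.uniform(0.0, 1.0):        # accept iff dQ/dQ0 >= W_i
            kept.append(x)
    return kept

# hypothetical density ratio with values in (0, 1], for illustration only
demo_ratio = lambda x: math.exp(-1.0 / (1.0 + x))
terms = thinned_series_terms(n=100, T=1.0, nu=3.0, ratio=demo_ratio)
assert all(x > 0 for x in terms) and len(terms) <= 100
```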

Proof of Proposition 3

Asmussen and Rosiński (2001) showed that the distributional convergence is implied by

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{\sigma _{\varepsilon }}{\varepsilon }=+\infty . \end{aligned}$$
(40)

We show that \(\lim _{\varepsilon \rightarrow 0}\frac{\sigma _{\varepsilon }^2}{\varepsilon ^2}=+\infty \). Recall that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{\sigma _{\varepsilon }^2}{\varepsilon ^2}=\lim _{\varepsilon \rightarrow 0}\frac{\int _0^{\varepsilon }u^2\int _0^{\infty }u^{-1}e^{-su}\nu g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s\mathrm {d}u}{\varepsilon ^2}. \end{aligned}$$
(41)

Using l’Hôpital’s rule, (41) is equal to

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{\varepsilon ^2\int _0^{\infty }\varepsilon ^{-1}e^{-s\varepsilon }\nu g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s}{2\varepsilon }=\lim _{\varepsilon \rightarrow 0}\frac{1}{2}\int _0^{\infty }e^{-s\varepsilon }\nu g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s. \end{aligned}$$
(42)

The monotone convergence theorem can be applied to (42) and thus

$$\begin{aligned} \frac{1}{2}\int _0^{\infty }\lim _{\varepsilon \rightarrow 0}e^{-s\varepsilon }\nu g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s=\frac{1}{2}\int _0^{\infty }\nu g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s=\infty . \end{aligned}$$

\(\square \)
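As a toy illustration of the Asmussen–Rosiński criterion (40) in a case where \(\sigma _{\varepsilon }\) is available in closed form (this is not the Student–Lévy measure): for \(\nu (\mathrm {d}x)=|x|^{-1-\alpha }\mathrm {d}x\) with \(\alpha \in (0,2)\) one has \(\sigma _{\varepsilon }^2=2\varepsilon ^{2-\alpha }/(2-\alpha )\), so \(\sigma _{\varepsilon }/\varepsilon =\sqrt{2/(2-\alpha )}\,\varepsilon ^{-\alpha /2}\rightarrow \infty \):

```python
import math

def sigma_over_eps(eps, alpha):
    # Toy stable-type measure nu(dx) = |x|^{-1-alpha} dx (illustration only):
    # sigma_eps^2 = int_{-eps}^{eps} x^2 |x|^{-1-alpha} dx = 2*eps^{2-alpha}/(2-alpha)
    sigma2 = 2.0 * eps ** (2.0 - alpha) / (2.0 - alpha)
    return math.sqrt(sigma2) / eps

ratios = [sigma_over_eps(10.0 ** (-k), alpha=1.5) for k in range(1, 6)]
# the ratio blows up as eps -> 0, so criterion (40) holds for this toy measure
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```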

Proof of Proposition 4

Note that in the infinite variation case \(\mu _{\varepsilon }\) has to be zero (Sato 1999). Recall that the Lévy measure of the univariate Student–Lévy process (with no drift and standard scaling) is given by

$$\begin{aligned} \Pi (\mathrm {d}x)=\frac{\nu 2^{\frac{3}{4}}|x|^{-\frac{1}{2}}}{\pi ^{\frac{1}{2}}}\int _{0}^{\infty }s^{\frac{1}{4}}K_{\frac{1}{2}}\left( \sqrt{2s}|x|\right) g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s\mathrm {d}x. \end{aligned}$$
(43)

In the following we use the identity

$$\begin{aligned} K_{\frac{1}{2}}(z)=\sqrt{\frac{\pi }{2}}e^{-z}z^{-\frac{1}{2}} \end{aligned}$$
(44)

for \(z>0\). As \(\Pi \) is a symmetric measure,

$$\begin{aligned} \sigma _{\varepsilon }^2=\int _{-\varepsilon }^{\varepsilon }x^2\Pi (\mathrm {d}x)=2\int _{0}^{\varepsilon }x^2\Pi (\mathrm {d}x). \end{aligned}$$
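Identity (44) can be verified numerically against the standard integral representation \(K_{\nu }(z)=\int _0^{\infty }e^{-z\cosh t}\cosh (\nu t)\,\mathrm {d}t\) for \(z>0\) (cf. Watson 1995); a quick standard-library check:

```python
import math

def bessel_k(nu, z, t_max=25.0, steps=200_000):
    # K_nu(z) = int_0^infty exp(-z*cosh t) * cosh(nu*t) dt, approximated
    # by a composite trapezoidal rule; the integrand decays double-
    # exponentially, so [0, t_max] captures it to machine precision.
    h = t_max / steps
    total = 0.5 * (math.exp(-z)
                   + math.exp(-z * math.cosh(t_max)) * math.cosh(nu * t_max))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-z * math.cosh(t)) * math.cosh(nu * t)
    return h * total

# Identity (44): K_{1/2}(z) = sqrt(pi/2) * exp(-z) * z^{-1/2}
for z in [0.5, 1.0, 4.0]:
    closed_form = math.sqrt(math.pi / 2.0) * math.exp(-z) / math.sqrt(z)
    assert abs(bessel_k(0.5, z) - closed_form) < 1e-6
```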

Again using l’Hôpital’s rule, for some constant \(C>0\) that may change from line to line,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{\sigma _{\varepsilon }^2}{\varepsilon ^2}&=\lim _{\varepsilon \rightarrow 0}\frac{C\int _0^{\varepsilon }x^2|x|^{-\frac{1}{2}}\int _{0}^{\infty }s^{\frac{1}{4}}K_{\frac{1}{2}}\left( \sqrt{2s}|x|\right) g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s\mathrm {d}x}{\varepsilon ^2}\\&=\lim _{\varepsilon \rightarrow 0}C\frac{\varepsilon ^2\varepsilon ^{-\frac{1}{2}}\int _{0}^{\infty }s^{\frac{1}{4}}e^{-\sqrt{2s}\varepsilon }\left( \sqrt{2s}\right) ^{-\frac{1}{2}}\varepsilon ^{-\frac{1}{2}}g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s}{\varepsilon }\\&=\lim _{\varepsilon \rightarrow 0}C\int _0^{\infty }e^{-\sqrt{2s}\varepsilon }g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s\\&=C\int _0^{\infty }g_{\frac{\nu }{2}}(2\nu s)\mathrm {d}s\\&=\infty . \end{aligned}$$

The second-to-last step uses the monotone convergence theorem. \(\square \)


Cite this article

Massing, T. Simulation of Student–Lévy processes using series representations. Comput Stat 33, 1649–1685 (2018). https://doi.org/10.1007/s00180-018-0814-y
