
Estimation and asymptotic covariance matrix for stochastic volatility models

  • Original Paper

Statistical Methods & Applications

Abstract

In this paper we compute the asymptotic variance-covariance matrix of the method of moments estimators for the canonical Stochastic Volatility model. Our procedure is based on a linearization of the initial process via the log-squared transformation of Breidt and Carriquiry (Modelling and prediction, honoring Seymour Geisser. Springer, Berlin, 1996). Knowledge of the asymptotic variance-covariance matrix of the method of moments estimators makes the classical testing procedures directly applicable. The resulting asymptotic standard errors are then compared with those proposed in the literature, which rely on different parameter estimates. Applications to simulated data support our results. Finally, we present empirical applications to the daily returns of the Euro-US dollar and Yen-US dollar exchange rates.


Fig. 1


References

  • Andersen TG, Sørensen BE (1996) GMM estimation of a stochastic volatility model: a Monte Carlo study. J Bus Econ Stat 14(3):328–352


  • Bartolucci F, De Luca G (2001) Maximum likelihood estimation of a latent variable time-series model. Appl Stoch Models Bus Ind 17:5–17


  • Breidt FJ, Carriquiry AL (1996) Improved quasi-maximum likelihood estimation for stochastic volatility models. In: Zellner A, Lee JS (eds) Modelling and prediction, honoring Seymour Geisser. Springer, Berlin


  • Broto C, Ruiz E (2004) Estimation methods for stochastic volatility models: a survey. J Econ Surv 18:613–649


  • Chaussé P, Xu D (2012) GMM estimation of a stochastic volatility model with realized volatility: a Monte Carlo study. Working paper no. 1203, Department of Economics, University of Waterloo, Canada

  • Chib S, Nardari F, Shephard N (2002) Markov chain Monte Carlo methods for stochastic volatility models. J Econom 108:281–316


  • Dhaene G (2004) Indirect inference for stochastic volatility models via the log-squared observations. Tijdschr Econ Manag XLIX(3):421–440


  • Dhaene G, Vergote O (2003) Asymptotic results for GMM estimators of stochastic volatility models. Center for Economic Studies. Discussion paper series 03.06. Katholieke Universiteit Leuven, Belgium

  • Francq C, Zakoïan JM (2006) Linear-representation based estimation of stochastic volatility models. Scand J Stat 33(4):785–806


  • Fridman M, Harris L (1998) A maximum likelihood approach for non-Gaussian stochastic volatility models. J Bus Econ Stat 16:284–291


  • Fuller WA (1996) Introduction to statistical time series. Wiley, New York


  • Gallant AR, Tauchen G (1996) Which moments to match? Econom Theory 12:657–681


  • Gouriéroux C, Monfort A, Renault E (1993) Indirect inference. J Appl Econom 8:S85–S118


  • Harvey AC, Shephard N (1996) Estimation of an asymmetric model of asset prices. J Bus Econ Stat 14(4):429–434


  • Jacquier E, Polson NG, Rossi PE (1994) Bayesian analysis of stochastic volatility models. J Bus Econ Stat 12(4):371–389


  • Knight JL, Satchell SE, Yu J (2002) Estimation of the stochastic volatility model by the empirical characteristic function method. Aust N Z J Stat 44(3):319–335


  • Monfardini C (1998) Estimating stochastic volatility models through indirect inference. Econom J 1:113–128


  • Nelson DB (1994) Comment on Bayesian analysis of stochastic volatility models. J Bus Econ Stat 12(4):403–406


  • Rue H, Martino S, Chopin N (2009) Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc B 71(2):319–392


  • Ruiz E (1994) Quasi-maximum likelihood estimation of stochastic volatility models. J Econom 63:289–306


  • Sandmann G, Koopman SJ (1998) Estimation of stochastic volatility models via Monte Carlo maximum likelihood. J Econom 87:271–301


  • Taylor SJ (1994) Modelling stochastic volatility. Math Financ 4:183–204


  • Tsyplakov A (2010) Revealing the arcane: an introduction to the art of stochastic volatility models. Munich personal RePEc archive (MPRA). Paper no. 25511. Munich, Germany. http://mpra.ub.uni-muenchen.de/25511/


Author information


Correspondence to Maddalena Cavicchioli.

Appendices

Appendix 1: Computation of \(V_{\eta }\)

The elements of \(\widehat{\eta }\) are the sample mean \(\widehat{m}_X\), the sample variance \(\widehat{\gamma }_X (0)\) and the sample first-order autocovariance \(\widehat{\gamma }_X (1)\) of \(x_t = \log y^2_t\). Thus the asymptotic variance-covariance matrix is

$$\begin{aligned} V_{\eta } = \lim _{T \rightarrow + \infty } {\text {var}} [\sqrt{T} (\widehat{\eta } - \eta )] = \sum _{j = - \infty }^{+ \infty } {\text {cov}} (\epsilon _t, \epsilon _{t - j}) \end{aligned}$$

where

$$\begin{aligned} \epsilon _t = \left( x_t^{*} \quad x_t^{* 2} \quad x_t^{*} x^{*}_{t-1} \right) ^{'} \end{aligned}$$

and

$$\begin{aligned} x^{*}_t = x_t - \alpha - \mu (1 - \rho )^{-1} = \log y^2_t - \alpha - \mu (1 - \rho )^{-1}. \end{aligned}$$

Write \(x_t^{*}\) as \(h_t^{*} + e_t\), where \(h^{*}_t = h_t - \mu (1 - \rho )^{-1}\) and \(e_t = \log u^2_t - c_1\). Then the transition equation in (2.1) becomes \(h_t^{*} = \rho h^{*}_{t - 1} + v_t\). Now \(e_t\) and \(h^{*}_t \) have zero mean and are independent. Furthermore, we have

$$\begin{aligned} \begin{aligned} {\text {cov}} \left( h^{*}_t , h^{*}_{t-i} \right)&= \frac{\rho ^{|i|}}{1 - \rho ^2 } \sigma ^2_v \\ {\text {cov}} \left( h^{*}_t h^{*}_{t-i} , h^{*}_{t - j} \right)&= 0 \\ {\text {cov}} \left( h^{*}_t h^{*}_{t-i}, h^{*}_{t - j } h^{*}_{t- \ell }\right)&= \frac{\rho ^{|j| + |i - \ell |} + \rho ^{ |\ell | + | i - j |}}{( 1 - \rho ^2 )^2 } \sigma ^4_v \end{aligned} \end{aligned}$$

for any integers i, j and \(\ell \). Using these formulae, the elements of \(V_{\eta } \) are derived as follows:

$$\begin{aligned}&\begin{aligned} V_{\eta } (1, 1)&= \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left( h_t^{*} + e_t , h^{*}_{ t - j} + e_{ t - j}\right) \\&= \sum _{j = - \infty }^{+ \infty } \left[ {\text {cov}} \left( h_t^{*} , h^{*}_{ t - j} \right) + {\text {cov}} \left( e_t , e_{ t - j} \right) \right] \\&= \frac{1}{(1 - \rho )^2} \sigma ^2_v + c_2 \end{aligned} \\&\begin{aligned} V_{\eta } (2, 2)&= \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left[ \left( h^{*}_t + e_t \right) ^2 , \left( h^{*}_{t - j} + e_{t - j} \right) ^2 \right] \\&= \sum _{j = - \infty }^{+ \infty } \left[ {\text {cov}} \left( h^{* 2}_t , h^{* 2}_{t - j} \right) + 4 {\text {cov}} \left( h^{*}_t e_t , h^{*}_{t - j} e_{t - j}\right) + {\text {cov}} \left( e_t^2 , e_{t - j}^2 \right) \right] \\&= \frac{2 (1 + \rho ^2 )}{(1 - \rho ^2 )^3 } \sigma ^4_v + \frac{4 c_2}{1 - \rho ^2 } \sigma ^2_v + c_4 - c_2^2 \end{aligned}\\&\begin{aligned} V_{\eta } (3, 3)&= \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left[ \left( h^{*}_t + e_t \right) \left( h^{*}_{t - 1} + e_{t-1}\right) , \left( h^{*}_{t - j} + e_{t - j} \right) \left( h^{*}_{t-j-1} + e_{t - j -1}\right) \right] \\&= \sum _{j = - \infty }^{+ \infty } \left[ {\text {cov}} \left( h^{*}_t h^{*}_{t - 1} , h^{*}_{t - j} h^{*}_{t - j - 1} \right) + {\text {cov}} \left( h^{*}_t e_{t - 1} , h^{*}_{t - j} e_{t - j - 1} \right) \right. \\&\qquad \left. + {\text {cov}} \left( h^{*}_t e_{t - 1} , e_{t - j} h^{*}_{t - j - 1} \right) + {\text {cov}} \left( e_t h^{*}_{t - 1} , h^{*}_{t - j} e_{t - j - 1} \right) \right. \\&\qquad \left. + {\text {cov}} \left( e_t h^{*}_{t - 1} , e_{t - j} h^{*}_{t - j - 1} \right) + {\text {cov}} \left( e_t e_{t - 1} , e_{t - j} e_{t - j - 1} \right) \right] \\&= \frac{1 - \rho ^4 + 4 \rho ^2}{ (1 - \rho ^2 )^3} \sigma ^4_v + \frac{2 c_2 (1 + \rho ^2 )}{1 - \rho ^2 } \sigma ^2_v + c_2^2 \end{aligned} \end{aligned}$$
$$\begin{aligned} V_{\eta } (2,1)= & {} \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left[ \left( h^{*}_t + e_t \right) ^2 , h^{*}_{t - j} + e_{t - j} \right] = \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left( e_t^2, e_{t-j}\right) \\= & {} E\left( e^3_t\right) = c_3\\ V_{\eta } (3,1)= & {} \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left[ \left( h^{*}_t + e_t \right) \left( h^{*}_{t-1} + e_{t-1}\right) , h^{*}_{t - j} + e_{t - j} \right] = 0\\ V_{\eta } (3,2)= & {} \sum _{j = - \infty }^{+ \infty } {\text {cov}} \left[ \left( h^{*}_t + e_t \right) \left( h^{*}_{t-1} + e_{t-1} \right) , \left( h^{*}_{t - j} + e_{t - j}\right) ^2 \right] \\= & {} \sum _{j = - \infty }^{+ \infty } \left[ {\text {cov}} \left( h^{*}_t h^{*}_{t-1} , h^{* 2}_{t - j}\right) + 2 {\text {cov}} \left( h^{*}_t e_{t-1} , h^{*}_{t - j} e_{t - j} \right) \right. \\&\left. + 2 {\text {cov}} \left( h^{*}_{t -1} e_t , h^{*}_{t - j} e_{t - j} \right) \right] \\= & {} \sum _{j = - \infty }^{+ \infty } \frac{2 \, \rho ^{|j| + |j-1|}}{(1 - \rho ^2 )^2} \, \sigma ^4_v + 4 \, {\text {cov}} \left( h^{*}_t , h^{*}_{t - 1} \right) \, c_2 \\= & {} \left[ \frac{2 \, \rho }{(1 - \rho ^2 )^2} + 2 \sum _{j = 1}^{+ \infty } \, \frac{\rho ^{2 j - 1}}{(1 - \rho ^2 )^2} + 2 \, \sum _{j = - \infty }^{- 1} \, \frac{ \rho ^{- (2 j - 1)}}{(1 - \rho ^2 )^2} \right] \, \sigma ^4_v + \frac{4 \, c_2 \, \rho }{ (1 - \rho ^2)} \, \sigma ^2_v \\= & {} \left[ \frac{2 \, \rho }{(1 - \rho ^2 )^2} + 2 \sum _{i = 0}^{+ \infty } \, \frac{\rho ^{2 i + 1}}{(1 - \rho ^2 )^2} + 2 \, \sum _{i = 0}^{+ \infty } \, \frac{ \rho ^{ 2 i + 3}}{(1 - \rho ^2 )^2} \right] \, \sigma ^4_v + \frac{4 \, c_2 \, \rho }{ (1 - \rho ^2)} \, \sigma ^2_v \\= & {} \left[ \frac{2 \, \rho }{(1 - \rho ^2 )^2} + \frac{2 \, \rho }{(1 - \rho ^2 )^3} + \frac{2 \, \rho ^3 }{(1 - \rho ^2 )^3} \right] \, \sigma ^4_v \, + \frac{4 \, c_2 \, \rho }{ (1 - \rho ^2)} \, \sigma ^2_v \\= & {} \frac{4\, \rho }{(1 - \rho ^2 )^3} \, \sigma ^4_v + \frac{4 \, c_2 \, \rho }{(1 - \rho ^2)} \, \sigma ^2_v \end{aligned}$$
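The closed-form elements above can be assembled and sanity-checked numerically. A minimal sketch (function names are ours, not from the paper), assembling \(V_{\eta }\) from the formulas of this appendix and checking the \(V_{\eta }(3,2)\) entry against a truncation of the long-run covariance series from which it was derived:

```python
import numpy as np

def v_eta(rho, s2v, c2, c3, c4):
    """V_eta, the asymptotic covariance of (m_X, gamma_X(0), gamma_X(1)),
    built from the closed-form elements of Appendix 1. Here c2, c3, c4 are
    the 2nd, 3rd and 4th central moments of e_t = log u_t^2 - c1."""
    d = 1.0 - rho**2
    v11 = s2v / (1.0 - rho)**2 + c2
    v22 = 2.0*(1.0 + rho**2)*s2v**2/d**3 + 4.0*c2*s2v/d + c4 - c2**2
    v33 = (1.0 - rho**4 + 4.0*rho**2)*s2v**2/d**3 + 2.0*c2*(1.0 + rho**2)*s2v/d + c2**2
    v21, v31 = c3, 0.0
    v32 = 4.0*rho*s2v**2/d**3 + 4.0*c2*rho*s2v/d
    return np.array([[v11, v21, v31],
                     [v21, v22, v32],
                     [v31, v32, v33]])

def v_eta_32_series(rho, s2v, c2, N=400):
    """V_eta(3,2) via the (truncated) long-run covariance series used in the
    derivation: sum_j 2 rho^(|j|+|j-1|) sigma_v^4/(1-rho^2)^2, plus the
    cross term 4 c2 rho sigma_v^2/(1-rho^2)."""
    j = np.arange(-N, N + 1)
    s = np.sum(2.0 * rho**(np.abs(j) + np.abs(j - 1))) * s2v**2 / (1.0 - rho**2)**2
    return s + 4.0*c2*rho*s2v/(1.0 - rho**2)
```

For \(|\rho | < 1\) the truncated series matches the closed form \(4\rho \sigma ^4_v/(1-\rho ^2)^3 + 4 c_2 \rho \sigma ^2_v/(1-\rho ^2)\) to machine precision once \(N\) is moderately large.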

Appendix 2: Proof of Theorem A

Set \(\theta = \left( \mu \, \, \rho \, \, \sigma _{v}^2\right) ^{'}\) and \(\eta =(m_X \, \, \gamma _{X}(0) \, \, \gamma _{X}(1))^{'}\). Let \(\Theta = \mathbb {R} \times (- 1, 1) \times (0, + \infty )\) be the open space of the parameter vector \(\theta \). Let \(\eta = \eta (\theta )\) be the map from \(\Theta \) into \(\mathbb {R}^3\) defined by Eqs. (2.3), (2.5) and (2.6). This map is injective and formed by rational functions, hence it is differentiable. The Jacobian matrix \(J = \frac{\partial \, \eta }{\partial \, \theta ^{'}}\) has full rank as \(\det J = - (1 + \rho ) \, (1 - \rho ^2)^{-3}\, \sigma _{v}^{2} \ne 0\) for \(|\rho | < 1\). The inverse map \(\theta = \theta (\eta )\) from the image of \( \eta \) into \(\Theta \) is defined by the equations

$$\begin{aligned} \begin{aligned} \mu =&\frac{\gamma _X (0) - \gamma _X (1) - c_2}{\gamma _X (0) - c_2} (m_X - c_1) \qquad \qquad \rho = \frac{\gamma _X (1)}{\gamma _X (0) - c_2 } \\&\qquad \qquad \sigma ^2_v = \frac{[\gamma _X (0) - c_2 ]^2 - [\gamma _X (1)]^2 }{\gamma _X (0) - c_2}. \end{aligned} \end{aligned}$$
(6.1)

where \(\gamma _X (0) \ne c_2 = \sigma _{e}^2\) by (2.5). Let \(\widehat{\theta }\) be the MM estimator of \(\theta \) in Model (1.1). As \(T \rightarrow \infty \), \(\widehat{\mu }\), \(\widehat{\rho }\) and \(\widehat{\sigma }^2_v\) converge in probability to \(\mu \), \(\rho \) and \(\sigma ^2_v\), respectively. Let \({\widehat{\eta }}_T = ({{\widehat{m}}}_X \, \, {\widehat{\gamma }}_X (0)\, \, {\widehat{\gamma }}_X (1))^{'}\) be the sample estimator of \(\eta \). Then \({\widehat{\eta }}_T \rightarrow \eta \) as \(T \rightarrow \infty \). Substituting \( \widehat{m}_X\), \( \widehat{\gamma }_X (0)\) and \(\widehat{\gamma }_X (1)\) into the right-hand sides of (6.1) yields a consistent estimator of \(\theta \) that is asymptotically equivalent to \(\widehat{\theta }\). More precisely, the sequence \(\{ {\widehat{\eta }}_T \}\) satisfies \(\sqrt{T} ({\widehat{\eta }}_T - \eta ) \rightarrow \mathcal{N}(0, V_{\eta })\) in distribution. Applying the Delta Method gives \(\sqrt{T} ({\widehat{\theta }}_T - \theta ) = \sqrt{T} (\theta ({\widehat{\eta }}_T) - \theta (\eta )) \rightarrow \mathcal{N}(0, V_{\theta })\) in distribution, where the asymptotic variance-covariance matrix \(V_{\theta }\) is given by \(\frac{\partial \, \theta }{\partial \, \eta ^{'}} \, V_{\eta }\, \frac{\partial \, \theta ^{'}}{\partial \, \eta } = J^{-1} \, V_{\eta } \, (J^{'})^{-1}\). This proves the statement.
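The inverse map (6.1) translates directly into a moment estimator. A minimal sketch (function names are ours; the numeric values of \(c_1\) and \(c_2\) as the mean and variance of \(\log u^2_t\) for Gaussian \(u_t\) are assumptions of this sketch, not taken from this excerpt). The round trip uses the forward map obtained by inverting (6.1) algebraically:

```python
import numpy as np

# Assumed moments of log u_t^2 for Gaussian u_t (log chi-square(1)):
# c1 = E[log u_t^2] = psi(1/2) + log 2, c2 = var(log u_t^2) = pi^2/2.
C1 = -1.2704
C2 = np.pi**2 / 2

def theta_from_eta(m, g0, g1, c1=C1, c2=C2):
    """Inverse map (6.1): (m_X, gamma_X(0), gamma_X(1)) -> (mu, rho, sigma_v^2)."""
    rho = g1 / (g0 - c2)
    mu = (g0 - g1 - c2) / (g0 - c2) * (m - c1)
    s2v = ((g0 - c2)**2 - g1**2) / (g0 - c2)
    return mu, rho, s2v

def eta_from_theta(mu, rho, s2v, c1=C1, c2=C2):
    """Forward map implied by inverting (6.1): population moments of x_t = log y_t^2."""
    m = c1 + mu / (1.0 - rho)
    g0 = s2v / (1.0 - rho**2) + c2
    g1 = rho * s2v / (1.0 - rho**2)
    return m, g0, g1

def mm_estimate(y):
    """MM estimator: plug the sample moments of x_t = log y_t^2 into (6.1)."""
    x = np.log(y**2)
    m = x.mean()
    xc = x - m
    g0 = (xc**2).mean()             # sample variance gamma_X(0)
    g1 = (xc[1:] * xc[:-1]).mean()  # sample lag-1 autocovariance gamma_X(1)
    return theta_from_eta(m, g0, g1)
```

The round trip `theta_from_eta(*eta_from_theta(...))` recovers \((\mu , \rho , \sigma ^2_v)\) exactly, which is the injectivity of \(\eta (\theta )\) used in the proof.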

Appendix 3: Proof of Theorem B

Set \(\theta _{*} = (\xi \, \, \beta \, \, \sigma _{w}^2)^{'}\) and \(\theta = (\mu \, \, \rho \, \, \sigma _{v}^2)^{'}\). Let \(\Theta _{*} = \mathbb {R} \times \{(- 1, 0) \cup (0, 1) \} \times (0, + \infty )\) be the open space of the parameter vector \(\theta _{*}\). Let \(\theta = \theta (\theta _{*})\) be the map from \(\Theta _{*}\) into \(\mathbb {R}^3\) defined by the following equations which are derived from Sect. 3:

$$\begin{aligned} \begin{aligned} \mu&= \xi \, - \, \left( 1 + \sigma _{w}^2 \, \sigma _{e}^{- 2}\, \beta \right) \alpha \\ \rho&= - \sigma _{w}^2 \, \sigma _{e}^{- 2}\, \beta \\ \sigma _{v}^2&= (1 + \beta ^2) \, \sigma _{w}^2 \, - \, \left( 1 + \sigma _{w}^4 \, \sigma _{e}^{- 4}\, \beta ^2\right) \, \sigma _{e}^2 \end{aligned} \end{aligned}$$
(6.2)

This map is differentiable. The Jacobian matrix \(D = \frac{\partial \, \theta }{\partial \theta _{*}^{'}}\) has full rank as \(\det D = \sigma _{w}^2 \, \sigma _{e}^{- 2}\, (\beta ^2-1) \ne 0\) for \(|\beta | <1\). The map \(\theta = \theta (\theta _{*})\) is injective, and the identification problem is completely solved because the inverse transformation \(\theta _{*} = \theta _{*} (\theta )\) from the image of \( \theta \) into \(\Theta _{*}\) can be derived explicitly, as follows. Substituting (3.5) into (3.4) and then multiplying by \(\beta \), we get

$$\begin{aligned} \rho \sigma ^2_e \beta ^2 + \left[ \sigma ^2_v + (1 + \rho ^2 ) \sigma ^2_e \right] \beta + \rho \sigma ^2_e = 0 \end{aligned}$$
(6.3)

which gives

$$\begin{aligned} \beta = \frac{- \sigma ^2_v - (1 + \rho ^2 ) \sigma ^2_e + \sqrt{\Delta }}{ 2 \rho \sigma ^2_e } \end{aligned}$$
(6.4)

where

$$\begin{aligned} \Delta = \left[ \sigma ^2_v + (1 + \rho ^2 ) \sigma ^2_e \right] ^2 - 4 \rho ^2 \sigma ^4_e = \left[ \sigma ^2_v + (1 - \rho )^2 \, \sigma ^2_e \right] \, \left[ \sigma ^2_v + (1 + \rho )^2 \, \sigma ^2_e \right] > 0 . \end{aligned}$$

Note that the root of (6.3) obtained by taking \(- \sqrt{\Delta }\) in (6.4) is not admissible: the two roots of (6.3) have product equal to 1, so the rejected root has modulus greater than one, while we require \( | \beta | < 1\) (and \(\beta \ne 0\)). This condition ensures the invertibility of the process. In particular, if \(\rho >0\), then we have \(-1< \beta <0\). From (6.4) and (3.5) we obtain

$$\begin{aligned} \sigma ^2_w = \frac{\sigma ^2_v + (1 + \rho ^2 ) \sigma ^2_e + \sqrt{\Delta }}{2}. \end{aligned}$$
(6.5)

Now the equations of the map \(\theta _{*} = \theta _{*} (\theta )\) can be easily derived from (3.2), (6.4) and (6.5). From Theorem A we have \({\widehat{\theta }}_T \rightarrow \theta \) as T goes to \( \infty \) and \(\sqrt{T} ({\widehat{\theta }}_T - \theta ) \rightarrow \mathcal{N}(0, V_{\theta })\) in distribution. Applying the Delta Method gives \(\sqrt{T} ({\widehat{\theta }}_{* T} - \theta _{*}) = \sqrt{T} (\theta _{*}({\widehat{\theta }}_T) - \theta _{*}(\theta )) \rightarrow \mathcal{N}(0, V_{\theta _{*}})\) in distribution, where

$$\begin{aligned} V_{\theta _{*}} = \frac{\partial \, \theta _{*}}{\partial \theta ^{'}} \, V_{\theta } \, \frac{\partial \, \theta _{*}^{'}}{\partial \, \theta } = D^{-1} \, V_{\theta } \, (D^{'})^{- 1} = D^{- 1}\, J^{-1} \, V_{\eta } \, (J^{'})^{-1} \, (D^{'})^{- 1}. \end{aligned}$$

This proves the statement.
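The inversion (6.3)-(6.5) is easy to check numerically. A minimal sketch (function name is ours), assuming the parameterization of Sect. 3, which computes \(\beta \) and \(\sigma ^2_w\) from \((\rho , \sigma ^2_v, \sigma ^2_e)\) using the factored form of \(\Delta \):

```python
import numpy as np

def arma_from_sv(rho, s2v, s2e):
    """Admissible ARMA parameters via (6.4)-(6.5): the root beta of the
    quadratic (6.3) with |beta| < 1, and sigma_w^2."""
    A = s2v + (1.0 + rho**2) * s2e
    # Delta in its factored form, manifestly positive for |rho| < 1
    delta = (s2v + (1.0 - rho)**2 * s2e) * (s2v + (1.0 + rho)**2 * s2e)
    beta = (-A + np.sqrt(delta)) / (2.0 * rho * s2e)   # Eq. (6.4)
    s2w = (A + np.sqrt(delta)) / 2.0                   # Eq. (6.5)
    return beta, s2w
```

One can verify that the returned \(\beta \) satisfies the quadratic (6.3), lies in \((-1, 0)\) when \(\rho > 0\), and reproduces \(\rho = - \sigma _{w}^2 \, \sigma _{e}^{- 2}\, \beta \) as required by (6.2).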


Cite this article

Cavicchioli, M. Estimation and asymptotic covariance matrix for stochastic volatility models. Stat Methods Appl 26, 437–452 (2017). https://doi.org/10.1007/s10260-016-0373-8
