
Periodic autoregressive models with closed skew-normal innovations


Abstract

This paper is concerned with the estimation of a periodic autoregressive model with closed skew-normal innovations. The closed skew-normal (CSN) distribution shares several useful properties with the Gaussian distribution. Maximum likelihood (ML), maximum a posteriori (MAP) and Bayesian approaches are proposed and compared for estimating the model parameters. For the Bayesian approach a Gibbs sampling algorithm is implemented, while the ML and MAP estimates are computed via expectation–maximization (EM) algorithms. Simulation studies are conducted to compare the frequentist average losses of the competing estimators and to examine their asymptotic properties. The proposed model and methods are also applied to a real time series, and the accuracy of the CSN and Gaussian models is compared by a cross-validation criterion.


References

  • Arellano-Valle RB, Azzalini A (2006) On the unification of families of skew-normal distributions. J Stat Theory Appl 33(3):561–574

  • Azzalini A (1985) A class of distributions which includes the normal ones. Scand J Stat 12(2):171–178

  • Azzalini A (2005) The skew-normal distribution and related multivariate families. Scand J Stat 32(2):159–200 (with discussion)

  • Azzalini A, Capitanio A (1999) Statistical applications of the multivariate skew-normal distribution. J R Stat Soc B 61:579–602

  • Azzalini A, Dalla Valle A (1996) The multivariate skew-normal distribution. Biometrika 83:715–726

  • Basawa IV, Lund RB (2001) Large sample properties of parameter estimates for periodic ARMA models. J Time Ser Anal 22:651–663

  • Bayes CL, Branco MD (2007) Bayesian inference for the skewness parameter of the scalar skew-normal distribution. Braz J Probab Stat 21:141–163

  • Bondon P (2009) Estimation of autoregressive models with epsilon-skew-normal innovations. J Multivar Anal 100(8):1761–1776

  • Broszkiewicz-Suwaj E, Makagon A, Weron R, Wylomanska A (2004) On detecting and modeling periodic correlation in financial data. Physica A 336(1–2):196–205

  • Chaari F, Leskow J, Napolitano A, Sanchez-Ramirez A (eds) (2014) Cyclostationarity: theory and methods. Lecture notes in mechanical engineering. Springer, Cham

  • Chaari F, Leskow J, Napolitano A, Zimroz R, Wylomanska A, Dudek A (eds) (2015) Cyclostationarity: theory and methods II. Applied condition monitoring. Springer, Cham

  • Chaari F, Leskow J, Napolitano A, Zimroz R, Wylomanska A (eds) (2017) Cyclostationarity: theory and methods III. Applied condition monitoring. Springer, Cham

  • Franses PH (1996) Periodicity and stochastic trends in economic time series. Oxford University Press, Oxford

  • Franses PH, Paap R (1994) Model selection in periodic autoregressions. Oxf Bull Econ Stat 56(4):421–439

  • Franses PH, Paap R (2004) Periodic time series models. Oxford University Press, Oxford

  • Gardner WA (1994) Cyclostationarity in communications and signal processing. IEEE Press, New York

  • Gauvain J, Lee C (1994) Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Trans Speech Audio Process 2(2):291–298

  • Gebizlioglu OL, Senoglu B, Kantar YM (2011) Comparison of certain value-at-risk estimation methods for the two-parameter Weibull loss distribution. J Comput Appl Math 235(11):3304–3314

  • Genton ME (2004) Skew elliptical distributions and their applications: a journey beyond normality. CRC, London

  • Gladyshev EG (1961) Periodically correlated random sequences. Sov Math 2:385–388

  • González-Farías G, Domínguez-Molina J, Gupta A (2004) Additive properties of skew normal random vectors. J Stat Plan Inference 126:521–534

  • Hipel KW, McLeod AI (1994) Time series modelling of water resources and environmental systems. Elsevier, Amsterdam

  • Hurd HL, Miamee A (2007) Periodically correlated random sequences: spectral theory and practice. Wiley, Hoboken

  • Li WK, McLeod AI (1988) ARMA modelling with non-Gaussian innovations. J Time Ser Anal 9(2):155–168

  • Liu C, Rubin DB (1994) The ECME algorithm: a simple extension of EM and ECM with faster monotone convergence. Biometrika 81:633–648

  • Lund RB, Basawa IV (2000) Recursive prediction and likelihood evaluation for periodic ARMA models. J Time Ser Anal 21:75–93

  • Lund R, Shao Q, Basawa I (2006) Parsimonious periodic time series models. Aust N Z J Stat 48(1):33–47

  • Lütkepohl H (2005) New introduction to multiple time series analysis. Springer, Berlin

  • Maleki M, Arellano-Valle RB (2017) Maximum a posteriori estimation of autoregressive processes based on finite mixtures of scale-mixtures of skew-normal distributions. J Stat Comput Simul 87(2):1061–1083

  • Maleki M, Arellano-Valle RB, Dey DK, Mahmoudi MR, Jalili SMJ (2018) A Bayesian approach to robust skewed autoregressive processes. Calcutta Stat Assoc Bull 69(2):165–182

  • Manouchehri T, Nematollahi AR (2019) On the estimation problem of periodic autoregressive time series: symmetric and asymmetric innovations. J Stat Comput Simul 89(1):71–97

  • McLeod AI (1993) Parsimony, model adequacy, and periodic autocorrelation in time series forecasting. Int Stat Rev 61:387–393

  • McLeod AI (1994) Diagnostic checking of periodic autoregression models with application. J Time Ser Anal 15(2):221–233

  • Meng XL, Rubin DB (1993) Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika 80:267–278

  • Nematollahi AR, Soltani AR (2000) Discrete time periodically correlated Markov processes. Math Stat 20:127–140

  • Nematollahi AR, Soltani AR, Mahmoudi MR (2017) Periodically correlated modeling by means of the periodograms asymptotic distributions. Stat Pap 1(1):1–12

  • Ni S, Sun D (2003) Noninformative priors and frequentist risks of Bayesian estimators of vector-autoregressive models. J Econom 115:159–197

  • Noakes DJ, McLeod AI, Hipel KW (1985) Forecasting monthly river-flow time series. Int J Forecast 1:179–190

  • Novales A, de Frutto RF (1997) Forecasting with periodic models: a comparison with time invariant coefficient models. Int J Forecast 13:393–405

  • Osborn D, Smith J (1989) The performance of periodic autoregressive models in forecasting seasonal U.K. consumption. J Bus Econ Stat 7:117–127

  • Pagano M (1978) On periodic and multiple autoregressions. Ann Stat 6:1310–1317

  • Pourahmadi M (2007) Skew-normal ARMA models with nonlinear heteroscedastic predictors. Commun Stat Theory Methods 36:1803–1819

  • Serpedin E, Panduru F, Sarı I, Giannakis GB (2005) Bibliography on cyclostationarity. Signal Process 85(12):2233–2303

  • Shao Q (2006) Mixture periodic autoregressive time series models. Stat Probab Lett 76(6):609–618

  • Shao Q (2007) Robust estimation for periodic autoregressive time series. J Time Ser Anal 29:251–263

  • Sharafi M, Nematollahi AR (2016) AR(1) model with skew-normal innovations. Metrika 79(8):1011–1029

  • Sun D, Ni S (2004) Bayesian analysis of vector-autoregressive models with noninformative priors. J Stat Plan Inference 121(2):291–309

  • Sun D, Ni S (2005) Bayesian estimates for vector autoregressive models. J Bus Econ Stat 23(1):105–117

  • Tolpin D, Wood F (2015) Maximum a posteriori estimation by search in probabilistic programs. In: Proceedings of the eighth annual symposium on combinatorial search, Ein Gedi, the Dead Sea, Israel, June 11–13, 2015. AAAI Press

  • Troutman BM (1979) Some results in periodic autoregression. Biometrika 66:219–228

  • Ursu E, Duchesne P (2009) On modelling and diagnostic checking of vector periodic autoregressive time series models. J Time Ser Anal 30:70–96

  • Ursu E, Turkman KF (2012) Periodic autoregressive model identification using genetic algorithm. J Time Ser Anal 33:398–405

  • Vecchia AV (1985a) Periodic autoregressive-moving average (PARMA) modeling with applications to water resources. Water Resour Bull 21:721–730

  • Vecchia AV (1985b) Maximum likelihood estimation for periodic autoregressive moving average models. Technometrics 27:375–384

  • White M, Wen J, Bowling M, Schuurmans D (2015) Optimal estimation of multivariate ARMA models. In: Proceedings of the twenty-ninth AAAI conference on artificial intelligence


Author information


Corresponding author

Correspondence to A. R. Nematollahi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proof of Theorem 3.1

In order to prove Theorem 3.1, we need some preliminary definitions and properties.

Definition 1

(Truncated multivariate normal). If \( \varvec{W} \sim N_{q} \left( {\varvec{\mu},{\varvec{\Sigma}}} \right) \) and \( \varvec{U} = \left\{ {\begin{array}{*{20}l} \varvec{W} & {\text{if}\quad \varvec{W} \ge \varvec{c}} \\ {\mathbf{0}} & {\text{if}\quad \varvec{W} < \varvec{c}} \\ \end{array} } \right. \), where \( \varvec{W} \ge \varvec{c} \) means \( W_{j} \ge c_{j} , j = 1, \ldots ,q \), then the density function of \( \varvec{U} \) is:

$$ f\left( {\varvec{u};\varvec{\mu},\varvec{\varSigma},\varvec{c}} \right) = \varPhi_{q}^{ - 1} \left( {0;\varvec{c} -\varvec{\mu},\varvec{\varSigma}} \right)\varphi_{q} \left( {\varvec{u};\varvec{\mu},\varvec{\varSigma}} \right) , \varvec{u} \ge \varvec{c}. $$

\( \varvec{U} \) is said to have a truncated multivariate normal distribution, denoted by \( \varvec{U} \sim N_{q}^{\varvec{c}} \left( {\varvec{\mu},{\varvec{\Sigma}}} \right) \).
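
As a small numerical illustration of this definition (a sketch only, assuming NumPy and SciPy are available; the helper name truncated_mvn_pdf is ours), the density can be evaluated by dividing the \( N_{q} \left( {\varvec{\mu},{\varvec{\Sigma}}} \right) \) density by the normalizing probability \( \varPhi_{q} \left( {{\mathbf{0}};\varvec{c} -\varvec{\mu},\varvec{\varSigma}} \right) = P\left( {\varvec{W} \ge \varvec{c}} \right) \):

```python
import numpy as np
from scipy.stats import multivariate_normal

def truncated_mvn_pdf(u, mu, Sigma, c):
    """Density of Definition 1: phi_q(u; mu, Sigma) / P(W >= c) for u >= c, else 0."""
    u, mu, c = np.atleast_1d(u, mu, c)
    if np.any(u < c):
        return 0.0
    # P(W >= c) = P(-W <= -c), and -W ~ N_q(-mu, Sigma)
    normaliser = multivariate_normal(mean=-mu, cov=Sigma).cdf(-c)
    return multivariate_normal(mean=mu, cov=Sigma).pdf(u) / normaliser

# toy check in q = 2 dimensions
mu = np.array([0.5, -0.2])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
print(truncated_mvn_pdf([0.7, 0.1], mu, Sigma, np.zeros(2)))
```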

Property 1

If \( \varvec{U} \sim N_{q}^{\varvec{c}} \left( {\varvec{\mu},{\varvec{\Sigma}}} \right) \) then the moment generating function of \( \varvec{U} \) is given by

$$ M_{\varvec{U}} \left( \varvec{t} \right) = \varPhi_{q}^{ - 1} \left( {{\mathbf{0}};\varvec{c} -\varvec{\mu},\varvec{\varSigma}} \right)e^{{\varvec{t^{\prime}\mu } + \frac{1}{2}\varvec{t^{\prime}\varSigma t}}} \varPhi_{q} \left( {\varvec{\varSigma t};\varvec{c} -\varvec{\mu},\varvec{\varSigma}} \right), \varvec{t} \in {\mathcal{R}}^{q} . $$
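
The formula in Property 1 can be checked numerically; the sketch below (our own code, not from the paper) evaluates \( M_{\varvec{U}} \left( \varvec{t} \right) \) with SciPy's multivariate normal CDF and compares it with a crude Monte Carlo estimate obtained by keeping only the draws of \( \varvec{W} \sim N_{q} \left( {\varvec{\mu},{\varvec{\Sigma}}} \right) \) with \( \varvec{W} \ge \varvec{c} \):

```python
import numpy as np
from scipy.stats import multivariate_normal

def truncated_mvn_mgf(t, mu, Sigma, c):
    """Property 1: M_U(t) for U ~ N_q^c(mu, Sigma)."""
    t = np.asarray(t, float)
    base = multivariate_normal(mean=c - mu, cov=Sigma)   # Phi_q( . ; c - mu, Sigma)
    return (np.exp(t @ mu + 0.5 * t @ Sigma @ t)
            * base.cdf(Sigma @ t) / base.cdf(np.zeros_like(t)))

# crude Monte Carlo check: sample W ~ N_q(mu, Sigma), keep draws with W >= c
rng = np.random.default_rng(1)
mu, c = np.array([0.3, 0.1]), np.zeros(2)
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])
W = rng.multivariate_normal(mu, Sigma, size=200_000)
W = W[(W >= c).all(axis=1)]
t = np.array([0.2, -0.1])
print(truncated_mvn_mgf(t, mu, Sigma, c), np.exp(W @ t).mean())
```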

Property 2

If \( \varvec{Z} \sim CSN_{p,q} \left( {\varvec{\mu},\varvec{\varSigma},\varvec{\varGamma},\varvec{\nu},\varvec{\varDelta}} \right) \), then the moment generating function of \( \varvec{Z} \) is given in González-Farías et al. (2004) as

$$ M_{\varvec{Z}} \left( \varvec{s} \right) = \frac{{\varPhi_{q} \left( {\varvec{\varGamma \varSigma s};\varvec{\nu},\varvec{\varDelta}+ \varvec{\varGamma \varSigma \varGamma^{\prime}}} \right)}}{{\varPhi_{q} \left( {{\mathbf{0}};\varvec{\nu},\varvec{\varDelta}+ \varvec{\varGamma \varSigma \varGamma^{\prime}}} \right)}}\exp \left( {{\mathbf{s^{\prime}}}\varvec{\mu}+ \frac{1}{2}{\mathbf{s^{\prime}}}\varvec{\varSigma s}} \right),\quad {\mathbf{s}} \in {\mathbb{R}}^{p} . $$
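
A direct transcription of Property 2 (a sketch under the notation above; the function name csn_mgf and the toy parameter values are ours) is:

```python
import numpy as np
from scipy.stats import multivariate_normal

def csn_mgf(s, mu, Sigma, Gamma, nu, Delta):
    """Property 2: MGF of Z ~ CSN_{p,q}(mu, Sigma, Gamma, nu, Delta)."""
    s = np.asarray(s, float)
    latent = multivariate_normal(mean=nu, cov=Delta + Gamma @ Sigma @ Gamma.T)
    return (latent.cdf(Gamma @ Sigma @ s) / latent.cdf(np.zeros(len(nu)))
            * np.exp(s @ mu + 0.5 * s @ Sigma @ s))

# tiny p = 2, q = 1 example
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])
Gamma = np.array([[2.0, -1.0]])          # q x p skewness matrix
nu, Delta = np.zeros(1), np.eye(1)
print(csn_mgf(np.array([0.1, -0.3]), mu, Sigma, Gamma, nu, Delta))
```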

Proof of Theorem 3.1

The result of part (a) follows from the uniqueness property of moment generating functions. Note that

$$ \begin{aligned} M_{{\varvec{Y}_{t} }} \left( \varvec{s} \right) & = M_{{\varvec{V}_{t} }} \left( \varvec{s} \right)M_{{\varvec{W}_{t} }} \left( {\varvec{D^{\prime}s}} \right) = e^{{\varvec{s^{\prime}}\left( {\mathop \sum \limits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} } \right) + \frac{1}{2}\varvec{s}^{'} \varvec{Gs}}} \\ & \quad \times\varPhi_{T}^{ - 1} \left( {{\mathbf{0}};{\mathbf{0}},\varvec{\varLambda}} \right)e^{{\frac{1}{2}\varvec{s}^{\prime} \varvec{D\varLambda D^{\prime}s}}} \varPhi_{T} \left( {\varvec{\varLambda D^{\prime}s};{\mathbf{0}},\varvec{\varLambda}} \right) \\ & = \frac{{\varPhi_{T} \left( {\varvec{\varLambda D^{\prime}s};{\mathbf{0}},\varvec{\varLambda}} \right)}}{{\varPhi_{T} \left( {{\mathbf{0}};{\mathbf{0}},\varvec{\varLambda}} \right)}}e^{{\varvec{s^{\prime}}\left( {\mathop \sum \limits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} } \right) + \frac{1}{2}\varvec{s}^{'} \left( {\varvec{G} + \varvec{D\varLambda D^{\prime}}} \right)\varvec{s}}} \\ & = \frac{{\varPhi_{T} \left( {\varvec{\varGamma}^{*}\varvec{\varSigma}^{*} \varvec{s};{\mathbf{0}},\varvec{I}_{\varvec{T}} +\varvec{\varGamma}^{*}\varvec{\varSigma}^{*}\varvec{\varGamma}^{{*'}} } \right)}}{{\varPhi_{T} \left( {{\mathbf{0}};{\mathbf{0}},\varvec{I}_{\varvec{T}} +\varvec{\varGamma}^{*}\varvec{\varSigma}^{*}\varvec{\varGamma}^{{*'}} } \right)}}e^{{\varvec{s^{\prime}}\left( {\mathop \sum \limits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} } \right) + \frac{1}{2}\varvec{s}^{'}\varvec{\varSigma}^{*} \varvec{s}}} = M_{\varvec{Z}} \left( \varvec{s} \right) \\ \end{aligned} $$

where \( \varvec{Z} \sim CSN_{T,T} \left( {\mathop \sum \limits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} ,\varvec{\varSigma}^{*} ,\varvec{\varGamma}^{*} ,{\mathbf{0}},\varvec{I}_{\varvec{T}} } \right) \).

(b) The result follows from the linearity property of the multivariate normal distribution.

(c) It can be proved by the following arguments:

$$ \begin{aligned} f\left( {\varvec{W}_{t} |\varvec{Y}_{t} } \right) & = \frac{{f\left( {\varvec{Y}_{t} ,\varvec{W}_{t} } \right)}}{{f\left( {\varvec{Y}_{t} } \right)}} = \frac{{f\left( {\varvec{W}_{t} } \right)f\left( {\varvec{Y}_{t} |\varvec{W}_{t} } \right)}}{{f\left( {\varvec{Y}_{t} } \right)}} \\ & = \frac{{\varPhi_{T}^{ - 1} \left( {{\mathbf{0}};{\mathbf{0}},\varvec{\varLambda}} \right)\varphi_{T} \left( {\varvec{w}_{t} ;{\mathbf{0}},\varvec{\varLambda}} \right)\varphi_{T} \left( {\varvec{y}_{t} ;\mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} + \varvec{DW}_{t} ,\varvec{G}} \right)}}{{\varphi_{T} \left( {\varvec{y}_{t} ;\mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{Y}_{t - j} ,\varvec{\varSigma}^{*} } \right)\varPhi_{T} \left( {\varvec{\varGamma}^{*} \left( {\varvec{Y}_{t} - \mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} } \right);{\mathbf{0}},\varvec{I}_{T} } \right)\varPhi_{T}^{ - 1} \left( {{\mathbf{0}};{\mathbf{0}},\varvec{I}_{T} } \right)}} \\ & = \frac{{\varphi_{T} \left( {\varvec{w}_{t} ;{\mathbf{0}},\varvec{\varLambda}} \right)\varphi_{T} \left( {\varvec{Y}_{t} ;\mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} + \varvec{Dw}_{t} ,\varvec{G}} \right)}}{{\varphi_{T} \left( {\varvec{y}_{t} ;\mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} ,\varvec{\varSigma}^{*} } \right)\varPhi_{T} \left( {\varvec{\varGamma}^{*} \left( {\varvec{y}_{t} - \mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} } \right);{\mathbf{0}},\varvec{I}_{T} } \right)}} \\ & = C_{1} \left( {\varvec{y}_{t} ,\varvec{\theta}_{1} } \right)\varphi_{q} \left( {\varvec{w}_{t} ;\nu^{*} ,\varLambda^{*} } \right) = C_{2} \left( {\varvec{y}_{t} ,\varvec{\theta}_{1} } \right)\varPhi_{T}^{ - 1} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,\varvec{\varLambda}^{*} } \right)\varphi_{q} \left( {\varvec{w}_{t} ;\varvec{\nu}^{*} ,\varvec{\varLambda}^{*} } \right) \\ & = C\left( {\varvec{y}_{t} ,\varvec{\theta}_{1} } \right)N_{T}^{{\mathbf{0}}} \left( {\varvec{w}_{t} ;\varvec{\nu}^{*} ,\varvec{\varLambda}^{*} } \right),\varvec{w}_{t} \ge 0, \\ \end{aligned} $$

where \( {\varvec{\Lambda}}^{*} = \left( {\varvec{\varLambda}^{ - 1} + \varvec{D^{\prime}G}^{ - 1} \varvec{D}} \right)^{ - 1} \), \( \varvec{\nu}^{*} = {\varvec{\Lambda}}^{*} \varvec{D^{\prime}G}^{ - 1} \left( {\varvec{y}_{t} - \mathop \sum \limits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} } \right) \), and \( C \) is a function of the parameters \( \varvec{\theta}_{1} = \left( {\varvec{\varPhi}_{j} ,{\varvec{\Lambda}},\varvec{G},\varvec{D}} \right) \) and the observed data \( \varvec{y}_{t} \).
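
For illustration, \( {\varvec{\Lambda}}^{*} \) and \( \varvec{\nu}^{*} \) can be computed directly from \( \varvec{D} \), \( \varvec{G} \), \( {\varvec{\Lambda}} \) and the prediction \( \mathop \sum \nolimits_{j = 1}^{P}\varvec{\varPhi}_{j} \varvec{y}_{t - j} \); the following sketch (names ours, assuming NumPy) uses a linear solve for \( \varvec{D^{\prime}G}^{ - 1} \):

```python
import numpy as np

def conditional_params(y_t, pred, D, G, Lam):
    """Lambda* = (Lam^{-1} + D' G^{-1} D)^{-1} and nu* = Lambda* D' G^{-1} (y_t - pred),
    where `pred` stands for sum_j Phi_j y_{t-j}."""
    Dt_Ginv = np.linalg.solve(G.T, D).T          # D' G^{-1}, via a linear solve
    Lam_star = np.linalg.inv(np.linalg.inv(Lam) + Dt_Ginv @ D)
    nu_star = Lam_star @ Dt_Ginv @ (y_t - pred)
    return nu_star, Lam_star

# toy usage with random positive-definite G and Lam (T = 3)
rng = np.random.default_rng(0)
T = 3
A = rng.standard_normal((T, T)); G = A @ A.T + T * np.eye(T)
B = rng.standard_normal((T, T)); Lam = B @ B.T + T * np.eye(T)
D = rng.standard_normal((T, T))
nu_star, Lam_star = conditional_params(rng.standard_normal(T), np.zeros(T), D, G, Lam)
```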

The moment generating function of \( \varvec{W}_{t} |\varvec{Y}_{t} \) is given by

$$ M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( s \right) = C\left( {\varvec{Y}_{t} ,\varvec{\theta}_{1} } \right)\frac{{\varPhi_{T} \left( {{\varvec{\Lambda}}^{*} \varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\varPhi_{T} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}e^{{\varvec{s}^{\prime}\varvec{\nu}^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda}}^{*} \varvec{s}}} , \varvec{s} \in {\mathcal{R}}^{T} , $$

and so

$$ E(\varvec{W}_{t} |\varvec{Y}_{t} ) = \frac{{\partial M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( \varvec{s} \right)}}{{\partial \varvec{s}}}|_{{\varvec{s} = 0}} , $$

where

$$ \begin{aligned} \frac{{\partial M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( \varvec{s} \right)}}{{\partial \varvec{s}}} & = C\left( {\varvec{Y}_{t} ,\varvec{\theta }_{1} } \right)\left\{ {\Lambda ^{*} \frac{{\frac{{\partial \Phi _{T} \left( {\varvec{s}; - \varvec{\nu }^{*} ,{\varvec{\Lambda }}^{*} } \right)}}{{\partial \varvec{s}}}}}{{\Phi _{T} \left( {{\varvec{0}}; - \varvec{\nu }^{*} ,{\varvec{\Lambda }}^{*} } \right)}}e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda }}^{*} \varvec{s}}} } \right. \\ & \quad \left. { + \frac{{\Phi _{T} \left( {{\varvec{\Lambda }}^{*} \varvec{s}; - \varvec{\nu }^{*} ,{\varvec{\Lambda }}^{*} } \right)}}{{\Phi _{T} \left( {{\varvec{0}}; - \varvec{\nu }^{*} ,{\varvec{\Lambda }}^{*} } \right)}}\left( {\varvec{\nu }^{*} + {\varvec{\Lambda }}^{*} \varvec{s}} \right)e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda }}^{*} \varvec{s}}} } \right\}. \\ \end{aligned} $$

Therefore

$$ E\left( {\varvec{W}_{t} |\varvec{Y}_{t} } \right) = \frac{{\partial M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( \varvec{s} \right)}}{{\partial \varvec{s}}}|_{{\varvec{s} = 0}} = C\left( {\varvec{Y}_{t} ,\varvec{\theta}_{1} } \right)\left( {\varvec{\nu}^{*} + \varLambda^{*} \xi_{1} } \right), $$

where \( \xi_{1} = \frac{{\frac{{\partial \varPhi_{T} \left( {\varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\partial \varvec{s}}}}}{{\varPhi_{T} \left( {0; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}|_{{\varvec{s} = 0}} . \) Also,

$$ E\left( {\varvec{W}_{t} \varvec{W}_{t}^{'} |\varvec{Y}_{t} } \right) = \frac{{\partial^{2} M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( \varvec{s} \right)}}{{\partial \varvec{s}\partial \varvec{s^{\prime}}}}|_{{\varvec{s} = \varvec{s^{\prime}} = {\mathbf{0}}}} , $$

where

$$ \begin{aligned} &\frac{{\partial^{2} M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( \varvec{s} \right)}}{{\partial \varvec{s}\partial \varvec{s^{\prime}}}} = C\left( {\varvec{Y}_{t} ,\varvec{\theta}_{1} } \right)\left\{ {\varvec{\Lambda}}^{*} \frac{{\frac{{\partial^{2} \varPhi_{T} \left( {\varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\partial \varvec{s}\partial \varvec{s^{\prime}}}}}}{{\varPhi_{T} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda}}^{*} \varvec{s}}} \right. \\ &\quad \left. + {\varvec{\Lambda}}^{*} \frac{{\frac{{\partial \varPhi_{T} \left( {\varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\partial \varvec{s}}}}}{{\varPhi_{T} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}\left( {\varvec{\nu}^{*} + {\varvec{\Lambda}}^{*} \varvec{s}} \right)^{'} e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda}}^{*} \varvec{s}}} \right. \\ &\quad \left. + \left( {\varvec{\Lambda}}^{*} \frac{{\frac{{\partial \varPhi_{T} \left( {\varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\partial \varvec{s}}}}}{{\varPhi_{T} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}\left( {\varvec{\nu}^{*} + {\varvec{\Lambda}}^{*} \varvec{s}} \right)^{'} {e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda}}^{*} \varvec{s}}} } \right)^{{\prime }} \right. \\ &\quad \left. + \frac{{\varPhi_{T} \left( {{\varvec{\Lambda}}^{*} \varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\varPhi_{T} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}\left( {\left( {\varvec{\nu}^{*} + {\varvec{\Lambda}}^{*} \varvec{s}} \right)\left( {\varvec{\nu}^{*} + {\varvec{\Lambda}}^{*} \varvec{s}} \right)^{'} e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda}}^{*} \varvec{s}}} + {\varvec{\Lambda}}^{*} e^{{\varvec{s^{\prime}\nu }^{*} + \frac{1}{2}\varvec{s^{\prime}}{\varvec{\Lambda}}^{*} \varvec{s}}} } \right) \vphantom{\frac{{\frac{{\partial^{2} \varPhi_{T} \left( {\varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\partial \varvec{s}\partial \varvec{s^{\prime}}}}}}{{\varPhi_{T} \left( {{\mathbf{0}}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}}\right\} \hfill \\ \end{aligned} $$

Therefore

$$ \begin{aligned} E\left( {\varvec{W}_{t} \varvec{W}_{t}^{'} |\varvec{Y}_{t} } \right)& = \frac{{\partial^{2} M_{{\varvec{W}_{t} |\varvec{Y}_{t} }} \left( \varvec{s} \right)}}{{\partial \varvec{s}\partial \varvec{s^{\prime}}}}|_{{\varvec{s} = \varvec{s^{\prime}} = {\mathbf{0}}}} \\ & \quad = C\left( {\varvec{Y}_{t} ,\varvec{\theta}_{1} } \right)\left( {{\varvec{\Lambda}}^{*} \xi_{2} + {\varvec{\Lambda}}^{*} \xi_{1}\varvec{\nu}^{*'} + \left( {{\varvec{\Lambda}}^{*} \xi_{1}\varvec{\nu}^{*'} } \right)^{'} +\varvec{\nu}^{*}\varvec{\nu}^{*'} + {\varvec{\Lambda}}^{*} } \right),\end{aligned} $$

where \( \xi_{2} = \frac{{\frac{{\partial^{2} \varPhi_{T} \left( {\varvec{s}; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}{{\partial \varvec{s}\partial \varvec{s^{\prime}}}}}}{{\varPhi_{T} \left( {0; -\varvec{\nu}^{*} ,{\varvec{\Lambda}}^{*} } \right)}}|_{{\varvec{s} = \varvec{s^{\prime}} = 0}} . \)
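
In practice \( \xi_{1} \) (and analogously \( \xi_{2} \)) can be approximated by finite differences of the multivariate normal CDF; a rough sketch (ours, not the authors' implementation, and only as accurate as SciPy's numerically integrated CDF) is:

```python
import numpy as np
from scipy.stats import multivariate_normal

def xi_1(nu_star, Lam_star, eps=1e-4):
    """Central-difference approximation of
    xi_1 = [d Phi_T(s; -nu*, Lambda*)/ds] / Phi_T(0; -nu*, Lambda*) at s = 0."""
    T = len(nu_star)
    mvn = multivariate_normal(mean=-nu_star, cov=Lam_star)
    denom = mvn.cdf(np.zeros(T))
    grad = np.empty(T)
    for i in range(T):
        e = np.zeros(T); e[i] = eps
        grad[i] = (mvn.cdf(e) - mvn.cdf(-e)) / (2.0 * eps)
    return grad / denom

# E(W_t | Y_t) is then C(y_t, theta_1) * (nu_star + Lam_star @ xi_1(nu_star, Lam_star));
# xi_2 can be approximated in the same way with second-order differences.
```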

Appendix B: Algorithm CJJ

Step 1 Compute \( \varvec{\nu}_{{\left( {k + 1} \right)}} = \varvec{I}_{T} \) and \( \varvec{m}_{{\left( {k + 1} \right)}} = \varvec{\alpha}_{k} {\varvec{\Delta}}_{k}^{ - 1/2}\varvec{\varPhi}_{0k}^{\varvec{'}} \left( {\varvec{Y}_{ - P} - \varvec{Z}_{ - P} {\varvec{\Phi}}_{k} } \right) \). Simulate \( \varvec{W}_{k + 1} \) from a multivariate truncated normal distribution with mean \( \varvec{m}_{{\left( {k + 1} \right)}} \) and \( T \times T \) variance–covariance matrix \( \varvec{\nu}_{{\left( {k + 1} \right)}} \).
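
Because the variance–covariance matrix in Step 1 is the identity, the coordinates of \( \varvec{W}_{k + 1} \) are independent and can be drawn one at a time. The sketch below (names ours) assumes, as in the \( N_{T}^{{\mathbf{0}}} \) law of Appendix A, that the truncation is from below at zero:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_W(m, rng):
    """Draw W ~ N_T^0(m, I_T): independent coordinates, each truncated to [0, inf)."""
    m = np.asarray(m, float)
    a = -m                                   # standardized lower bound (0 - m) / 1
    return truncnorm.rvs(a, np.inf, loc=m, scale=1.0, random_state=rng)

rng = np.random.default_rng(2)
print(sample_W(np.array([0.4, -0.3, 1.2]), rng))
```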

Step 2 Select a \( T \)-dimensional random vector \( \varvec{V}_{1} \) with elements \( v_{1i} = z_{1i} /( {\mathop \sum \limits_{j} z_{1j}^{2} } )^{1/2} \), where \( z_{1i} \), \( 1 \le i \le T \), are \( iid \sim N\left( {0,1} \right) \). Generate \( \lambda_{1} \sim N\left( {0,1} \right) \) and set \( {\varvec{\Upsilon}}_{1} =\varvec{\varPhi}_{k} + \lambda_{1} \varvec{V}_{1} \). Compute

$$ \tau_{k + 1} = { \log }\left\{ {\pi \left( {{\varvec{\Upsilon}}_{1} |\varvec{W}_{k + 1} } \right)} \right\} - { \log }\left\{ {\pi \left( {\varvec{\varPhi}_{k} |\varvec{W}_{k + 1} } \right)} \right\} $$

Simulate \( u_{1} \sim {\text{Unif}}\left( {0,1} \right) \). If \( u_{1} \le { \hbox{min} }\left( {1,{ \exp }\left( {\tau_{k + 1} } \right)} \right) \), let \( \varvec{\varPhi}_{k + 1} = {\varvec{\Upsilon}}_{1} \). Otherwise, let \( \varvec{\varPhi}_{k + 1} =\varvec{\varPhi}_{k} \).
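
The mechanics of this step (and, with the obvious change of conditioning variables, of Step 4) can be sketched as follows; log_cond is a placeholder for the full conditional \( \log \pi \left( { \cdot |\varvec{W}_{k + 1} } \right) \), which is model-specific and not reproduced here:

```python
import numpy as np

def update_Phi(Phi_k, W_k1, log_cond, rng):
    """Step 2 mechanics: random-direction random-walk Metropolis update for Phi."""
    z = rng.standard_normal(np.shape(Phi_k))
    V = z / np.sqrt(np.sum(z ** 2))              # direction: the unit vector V_1
    lam = rng.standard_normal()                  # step size lambda_1 ~ N(0, 1)
    candidate = Phi_k + lam * V                  # Upsilon_1
    tau = log_cond(candidate, W_k1) - log_cond(Phi_k, W_k1)
    if np.log(rng.uniform()) <= min(0.0, tau):   # accept with probability min(1, exp(tau))
        return candidate
    return Phi_k
```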

Step 3 Decompose \( {\varvec{\Sigma}}_{k} = \varvec{ODO^{\prime}} \), where \( \varvec{D} = {\text{diag}}( {d_{1} , \ldots ,d_{T} }) \), \( d_{1} \ge d_{2} \ge \ldots \ge d_{T} \), and \( \varvec{OO^{\prime}} = \varvec{I} \). Let \( d_{i}^{*} = { \log }\left( {d_{i} } \right) \), \( \varvec{D}^{*} = {\text{diag}}\left( {d_{1}^{*} , \ldots ,d_{T}^{*} } \right) \) and \( {\varvec{\Sigma}}_{k}^{*} = \varvec{OD}^{*} \varvec{O^{\prime}} \).

Select a random symmetric \( T \times T \) matrix \( \varvec{V}_{2} \) with elements \( v_{2ij} = z_{2ij} /( {\mathop \sum \nolimits_{l \le m} z_{2lm}^{2} } )^{1/2} \), where the \( T\left( {T + 1} \right)/2 \) entries \( z_{2ij} \), \( 1 \le i \le j \le T \), are \( iid \sim N\left( {0,1} \right) \) (the remaining elements of \( \varvec{V}_{2} \) are defined by symmetry).

Generate \( \lambda_{2} \sim N\left( {0,1} \right) \) and set \( {\varvec{\Upsilon}}_{2} = {\varvec{\Sigma}}_{k}^{*} + \lambda_{2} \varvec{V}_{2} \). Decompose \( {\varvec{\Upsilon}}_{2} = \varvec{QC}^{*} \varvec{Q^{\prime}} \), where \( \varvec{C}^{*} = {\text{diag}}( {c_{1}^{*} , \ldots ,c_{T}^{*} }) \), \( c_{1}^{*} \ge c_{2}^{*} \ge \ldots \ge c_{T}^{*} \), and \( \varvec{QQ^{\prime} } = \varvec{I} \). Compute

$$ \tau_{k + 1} = { \log }\left\{ {\pi \left( {{\varvec{\Upsilon}}_{2} |\varvec{W}_{k + 1} ,\varvec{\varPhi}_{k + 1} } \right)} \right\} - { \log }\left\{ {\pi \left( {{\varvec{\Sigma}}_{k}^{*} |\varvec{W}_{k + 1} ,\varvec{\varPhi}_{k + 1} } \right)} \right\} $$

Simulate \( u_{2} \sim {\text{Unif}}\left( {0,1} \right) \). If \( u_{2} \le { \hbox{min} }\left( {1,{ \exp }\left( {\tau_{k + 1} } \right)} \right) \), let \( {\varvec{\Sigma}}_{k + 1}^{*} = {\varvec{\Upsilon}}_{2} \), \( \varvec{C} = {\text{diag}}\left( {e^{{c_{1}^{*} }} , \ldots ,e^{{c_{T}^{*} }} } \right) \) and \( {\varvec{\Sigma}}_{k + 1} = \varvec{QCQ^{\prime}} \). Otherwise, let \( {\varvec{\Sigma}}_{k + 1}^{*} = {\varvec{\Sigma}}_{k}^{*} \) and \( {\varvec{\Sigma}}_{k + 1} = {\varvec{\Sigma}}_{k} \).
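
The proposal part of Step 3 (the log-eigenvalue perturbation and the back-transformation; the accept/reject decision follows the same rule as in Step 2, using the full conditional of \( {\varvec{\Sigma}}^{*} \)) can be sketched as:

```python
import numpy as np

def propose_Sigma(Sigma_k, rng):
    """Step 3 proposal: perturb the log-eigenvalues of Sigma_k with a random
    symmetric direction V_2, then map back to a positive-definite matrix."""
    d, O = np.linalg.eigh(Sigma_k)
    Sigma_star = O @ np.diag(np.log(d)) @ O.T        # Sigma_k^* = O D^* O'
    T = Sigma_k.shape[0]
    Z = np.triu(rng.standard_normal((T, T)))         # z_{2ij}, i <= j, iid N(0, 1)
    V = Z + Z.T - np.diag(np.diag(Z))                # symmetrize
    V = V / np.sqrt(np.sum(Z ** 2))                  # normalize over the upper triangle
    lam = rng.standard_normal()
    Upsilon = Sigma_star + lam * V                   # candidate on the log scale
    c, Q = np.linalg.eigh(Upsilon)
    Sigma_candidate = Q @ np.diag(np.exp(c)) @ Q.T   # back-transformed candidate
    return Upsilon, Sigma_candidate
```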

Step 4 Select a \( T \)-dimensional random vector \( \varvec{V}_{3} \) with elements \( v_{3i} = z_{3i} /( {\mathop \sum \limits_{j} z_{3j}^{2} } )^{1/2} \), where \( z_{3i} \), \( 1 \le i \le T \), are \( iid \sim N\left( {0,1} \right) \). Generate \( \lambda_{3} \sim N\left( {0,1} \right) \) and set \( {\varvec{\Upsilon}}_{3} =\varvec{\alpha}_{k} + \lambda_{3} \varvec{V}_{3} \). Compute

$$ \tau_{k + 1} = { \log }\left\{ {\pi \left( {{\varvec{\Upsilon}}_{3} |\left( {\varvec{W}_{k + 1} ,{\varvec{\Sigma}}_{k + 1} ,\varvec{\varPhi}_{k + 1} } \right)} \right)} \right\} - { \log }\left\{ {\pi \left( {\varvec{\alpha}_{k} |\left( {\varvec{W}_{k + 1} ,{\varvec{\Sigma}}_{k + 1} ,\varvec{\varPhi}_{k + 1} } \right)} \right)} \right\}. $$

Simulate \( u_{3} \sim {\text{Unif}}\left( {0,1} \right) \). If \( u_{3} \le { \hbox{min} }\left( {1,{ \exp }\left( {\tau_{k + 1} } \right)} \right) \), let \( \varvec{\alpha}_{k + 1} = {\varvec{\Upsilon}}_{3} \). Otherwise, let \( \varvec{\alpha}_{k + 1} =\varvec{\alpha}_{k} \).


Cite this article

Manouchehri, T., Nematollahi, A.R. Periodic autoregressive models with closed skew-normal innovations. Comput Stat 34, 1183–1213 (2019). https://doi.org/10.1007/s00180-019-00893-z
