Published by De Gruyter, July 4, 2023

Bootstrap choice of non-nested autoregressive model with non-normal innovations

Sedigheh Zamani Mehreyan

Abstract

It is known that the block-based version of the bootstrap method can be used for distributional parameter estimation of dependent data; one of its advantages is that it improves mean square errors. The paper makes two contributions. First, we consider the moving blocking bootstrap method for estimating the parameters of the autoregressive model; within each block, the parameters are estimated by the modified maximum likelihood method. Second, we provide a method for model selection based on Vuong's test and the tracking interval, i.e. for selecting the optimal model for the innovation distribution. Our analysis provides analytic results on the asymptotic distribution of the bootstrap estimators as well as computational results via simulation. Some properties of the moving blocking bootstrap method are investigated through a Monte Carlo study. This study shows that Vuong's test based on the modified maximum likelihood method is sometimes unable to distinguish between the two models, whereas Vuong's test based on the moving blocking bootstrap selects one of the competing models as the optimal model. We also study real data, the S&P 500 series, and select the optimal model for it based on the theoretical results.
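To make the resampling scheme concrete, the following is a minimal sketch of a moving block bootstrap for an autoregressive series. It is not the paper's implementation: the model order is fixed at AR(1), the block length b is chosen arbitrarily, a least-squares fit stands in for the modified maximum likelihood estimator, and the Laplace innovations are purely illustrative of the non-normal setting.

```python
import numpy as np

def ar1_fit(x):
    """Least-squares estimate of the AR(1) coefficient; a stand-in for
    the paper's modified maximum likelihood estimator."""
    return np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

def moving_block_bootstrap(x, b, n_boot, rng):
    """Moving block bootstrap replicates of the AR(1) coefficient.

    The residual series is cut into all overlapping blocks of length b;
    each replicate concatenates k randomly chosen blocks, rebuilds a
    series from the fitted AR recursion, and re-estimates the coefficient.
    """
    phi_hat = ar1_fit(x)
    resid = x[1:] - phi_hat * x[:-1]              # AR(1) residuals
    m = len(resid)
    k = m // b                                    # blocks per replicate
    blocks = np.array([resid[s:s + b] for s in range(m - b + 1)])
    estimates = np.empty(n_boot)
    for r in range(n_boot):
        idx = rng.integers(0, len(blocks), size=k)
        eps_star = blocks[idx].ravel()            # resampled innovations
        x_star = np.empty(len(eps_star) + 1)
        x_star[0] = x[0]
        for t, e in enumerate(eps_star):          # rebuild bootstrap series
            x_star[t + 1] = phi_hat * x_star[t] + e
        estimates[r] = ar1_fit(x_star)
    return estimates

# toy AR(1) data with non-normal (Laplace) innovations
rng = np.random.default_rng(0)
n, phi = 500, 0.6
eps = rng.laplace(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

boot = moving_block_bootstrap(x, b=25, n_boot=200, rng=rng)
print(boot.mean(), boot.std())                    # bootstrap mean and s.e.
```

The per-block (or per-replicate) estimates produced this way are the ingredients of the block-based Vuong statistic studied below; the block length and estimator choice here are simplifications.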

MSC 2010: 62F40; 62F10; 62M10; 62E20

A Appendix

Proof of Theorem 1

The Taylor expansion of $L_{B_n}^{f(j)}(\gamma^*)$ around $\hat\gamma_j^B$ is

\[
\begin{aligned}
L_{B_n}^{f(j)}(\gamma^*) &= L_{B_n}^{f(j)}(\hat\gamma_j^B) + \frac{\partial L_{B_n}^{f(j)}(\gamma)}{\partial\gamma}\bigg|_{\gamma=\hat\gamma_j^B}(\gamma^*-\hat\gamma_j^B) \\
&\quad + \frac{1}{2!}(\gamma^*-\hat\gamma_j^B)^T\,\frac{\partial^2 L_{B_n}^{f(j)}(\gamma)}{\partial\gamma\,\partial\gamma^T}\bigg|_{\gamma=\hat\gamma_j^B}(\gamma^*-\hat\gamma_j^B) + o_p(1) \\
&= L_{B_n}^{f(j)}(\hat\gamma_j^B) + \frac{1}{2!}(\gamma^*-\hat\gamma_j^B)^T\,\frac{\partial^2 L_{B_n}^{f(j)}(\gamma)}{\partial\gamma\,\partial\gamma^T}\bigg|_{\gamma=\hat\gamma_j^B}(\gamma^*-\hat\gamma_j^B) + o_p(1),
\end{aligned}
\]

where the first-order term vanishes because $\hat\gamma_j^B$ maximizes $L_{B_n}^{f(j)}$, so the score is zero at $\hat\gamma_j^B$.

So

\[
L_{B_n}^{f(j)}(\hat\gamma_j^B) = L_{B_n}^{f(j)}(\gamma^*) - \frac{1}{2!}(\gamma^*-\hat\gamma_j^B)^T\,\frac{\partial^2 L_{B_n}^{f(j)}(\gamma)}{\partial\gamma\,\partial\gamma^T}\bigg|_{\gamma=\hat\gamma_j^B}(\gamma^*-\hat\gamma_j^B) + o_p(1). \tag{A.1}
\]

Now consider the summation of (A.1) over the $k$ blocks:

\[
\sum_{j=1}^{k}\sum_{i=1}^{b}\log f_{\hat\gamma_j^B}(\epsilon_{s_j+i})
= \sum_{j=1}^{k}\sum_{i=1}^{b}\log f_{\gamma^*}(\epsilon_{s_j+i})
- \frac{1}{2!}\sum_{j=1}^{k}(\gamma^*-\hat\gamma_j^B)^T
\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)
(\gamma^*-\hat\gamma_j^B) + o_p(k). \tag{A.2}
\]

Similarly,

\[
\sum_{j=1}^{k}\sum_{i=1}^{b}\log g_{\hat\beta_j^B}(\epsilon_{s_j+i})
= \sum_{j=1}^{k}\sum_{i=1}^{b}\log g_{\beta^*}(\epsilon_{s_j+i})
- \frac{1}{2!}\sum_{j=1}^{k}(\beta^*-\hat\beta_j^B)^T
\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\beta\,\partial\beta^T}\log g_{\beta}(\epsilon_{s_j+i})\bigg|_{\beta=\hat\beta_j^B}\Bigg)
(\beta^*-\hat\beta_j^B) + o_p(k). \tag{A.3}
\]

Using (A.2) and (A.3), we have

\[
\begin{aligned}
L_{B_n}^{f/g}(\hat\gamma_j^B,\hat\beta_j^B)
&= \sum_{j=1}^{k}\sum_{i=1}^{b}\log\frac{f_{\gamma^*}(\epsilon_{s_j+i})}{g_{\beta^*}(\epsilon_{s_j+i})} \\
&\quad - \frac{1}{2!}\sum_{j=1}^{k}(\gamma^*-\hat\gamma_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\gamma^*-\hat\gamma_j^B) \\
&\quad + \frac{1}{2!}\sum_{j=1}^{k}(\beta^*-\hat\beta_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\beta\,\partial\beta^T}\log g_{\beta}(\epsilon_{s_j+i})\bigg|_{\beta=\hat\beta_j^B}\Bigg)(\beta^*-\hat\beta_j^B) + o_p(k).
\end{aligned}
\]

Because the blocks are dependent, we use a central limit theorem for dependent random variables (see [11]) to obtain the asymptotic distribution of $L_{B_n}^{f/g}(\hat\gamma_j^B,\hat\beta_j^B)$:

\[
\begin{aligned}
L_{B_n}^{f/g}(\hat\gamma_j^B,\hat\beta_j^B)
&= \sum_{j=1}^{k}\sum_{i=1}^{b}\log\frac{f_{\gamma^*}(\epsilon_{(j-1)b+i})}{g_{\beta^*}(\epsilon_{(j-1)b+i})}
+ \sum_{j=1}^{k}R_{bj}^{f} - \sum_{j=1}^{k}R_{bj}^{g} \\
&\quad - \frac{1}{2!}\sum_{j=1}^{k}(\gamma^*-\hat\gamma_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\gamma^*-\hat\gamma_j^B) \\
&\quad + \frac{1}{2!}\sum_{j=1}^{k}(\beta^*-\hat\beta_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\beta\,\partial\beta^T}\log g_{\beta}(\epsilon_{s_j+i})\bigg|_{\beta=\hat\beta_j^B}\Bigg)(\beta^*-\hat\beta_j^B) + o_p(k),
\end{aligned}
\]

where

\[
R_{bj}^{f} = \sum_{i=1}^{b}\log\frac{f_{\gamma^*}(\epsilon_{s_j+i})}{f_{\gamma^*}(\epsilon_{(j-1)b+i})}
\quad\text{and}\quad E\{R_{bj}^{f}\}=0,
\]

with $R_{bj}^{g}$ defined analogously; the expectation is zero because, by stationarity, $\epsilon_{s_j+i}$ and $\epsilon_{(j-1)b+i}$ have the same marginal distribution,

or, equivalently,

\[
\begin{aligned}
n^{-\frac12} L_{B_n}^{f/g}(\hat\gamma_j^B,\hat\beta_j^B)
&= n^{-\frac12}\sum_{j=1}^{k}\sum_{i=1}^{b}\log\frac{f_{\gamma^*}(\epsilon_{(j-1)b+i})}{g_{\beta^*}(\epsilon_{(j-1)b+i})}
+ n^{-\frac12}\sum_{j=1}^{k}R_{bj}^{f} - n^{-\frac12}\sum_{j=1}^{k}R_{bj}^{g} \\
&\quad - \frac{n^{-\frac12}}{2!}\sum_{j=1}^{k}(\gamma^*-\hat\gamma_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\gamma^*-\hat\gamma_j^B) \\
&\quad + \frac{n^{-\frac12}}{2!}\sum_{j=1}^{k}(\beta^*-\hat\beta_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\beta\,\partial\beta^T}\log g_{\beta}(\epsilon_{s_j+i})\bigg|_{\beta=\hat\beta_j^B}\Bigg)(\beta^*-\hat\beta_j^B) + o_p(1).
\end{aligned}
\tag{A.4}
\]

The second term in (A.4) is $o_p(1)$ since $\frac{k}{b}\to 0$ as $b\to\infty$ and $\frac{1}{k}\sum_{j=1}^{k}R_{bj}^{f}\to 0$ as $k\to\infty$. For the fourth term in (A.4), we can write

\[
\begin{aligned}
&\frac{n^{-\frac12}}{2!}\sum_{j=1}^{k}(\gamma^*-\hat\gamma_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\gamma^*-\hat\gamma_j^B) \\
&\quad = \frac{n^{-\frac12}}{2!}(\gamma^*-\hat\gamma_n)^T\sum_{j=1}^{k}\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\gamma^*-\hat\gamma_n) \\
&\qquad + n^{-\frac12}(\gamma^*-\hat\gamma_n)^T\sum_{j=1}^{k}\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\hat\gamma_n-\hat\gamma_j^B) \\
&\qquad + \frac{n^{-\frac12}}{2!}\sum_{j=1}^{k}(\hat\gamma_n-\hat\gamma_j^B)^T\Bigg(\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\hat\gamma_n-\hat\gamma_j^B).
\end{aligned}
\tag{A.5}
\]

The first term of (A.5) can be rewritten as

\[
\frac{\sqrt{kb}}{2!}\,(\gamma^*-\hat\gamma_n)^T\,\frac{1}{k}\sum_{j=1}^{k}\Bigg(\frac{1}{b}\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B}\Bigg)(\gamma^*-\hat\gamma_n) = o_p(1)
\]

since

\[
\frac{1}{b}\sum_{i=1}^{b}\frac{\partial^2}{\partial\gamma\,\partial\gamma^T}\log f_{\gamma}(\epsilon_{s_j+i})\bigg|_{\gamma=\hat\gamma_j^B} \to A_f,
\qquad \sqrt{kb}\,(\gamma^*-\hat\gamma_n) = O_p(1)
\quad\text{and}\quad \gamma^*-\hat\gamma_n = o_p(1).
\]

Similarly, the second and third terms of (A.5) are $o_p(1)$. The third and fifth terms in (A.4) are handled in the same way as the second and fourth, respectively. By the central limit theorem and Slutsky's theorem, the first term of (A.4) is asymptotically normal, so

\[
\frac{n^{-\frac12} L_{B_n}^{f/g}(\hat\gamma_j^B,\hat\beta_j^B) - n^{\frac12}\,E_h\Big\{\log\frac{f_{\gamma^*}(\epsilon_t)}{g_{\beta^*}(\epsilon_t)}\Big\}}{\hat\sigma_v} \xrightarrow{D} N(0,1). \qquad\blacksquare
\]
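As a numerical illustration of this limit (not part of the paper), the sketch below checks only the leading term of (A.4): the innovations are taken i.i.d., the parameters are held at pseudo-true values, $f$ is the standard normal density, and $g$ is a Laplace density with variance matched to one. All of these choices are assumptions made for the demonstration; the blocking and per-block estimation are omitted. The normalized statistic should then be approximately standard normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 2000, 1000
b = 1 / np.sqrt(2)                 # Laplace scale giving unit variance

# E_h{log f - log g} under h = N(0,1), in closed form:
# E[log f] = -log(2*pi)/2 - 1/2; E[log g] = -log(2b) - E|X|/b with E|X| = sqrt(2/pi)
mu = (-0.5 * np.log(2 * np.pi) - 0.5) + np.log(2 * b) + np.sqrt(2 / np.pi) / b

z = np.empty(reps)
for r in range(reps):
    eps = rng.normal(size=n)       # truth h = N(0,1), so f is well specified
    lr = stats.norm.logpdf(eps) - stats.laplace.logpdf(eps, scale=b)
    # normalized statistic: (sum of lr - n*mu) / (sqrt(n) * sigma_hat)
    z[r] = (lr.sum() - n * mu) / (np.sqrt(n) * lr.std(ddof=1))

print(z.mean(), z.std())           # should be near 0 and 1
```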

Proof of Corollary 1

Straightforward from Theorem 2.1. ∎

References

[1] B. L. Alvarez, G. Ferreira and E. Porcu, Modified maximum likelihood estimation in autoregressive processes with generalized exponential innovations, Open J. Stat. 4 (2014), no. 8, 620–629. doi:10.4236/ojs.2014.48058.

[2] H. Akaike, Information theory and an extension of the maximum likelihood principle, Second International Symposium on Information Theory, Akadémiai Kiadó, Budapest (1973), 267–281.

[3] P. Bondon, Estimation of autoregressive models with epsilon-skew-normal innovations, J. Multivariate Anal. 100 (2009), no. 8, 1761–1776. doi:10.1016/j.jmva.2009.02.006.

[4] G. E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1976.

[5] G. Ciołek and P. Potorski, Bootstrapping periodically autoregressive models, ESAIM Probab. Stat. 21 (2017), 394–411. doi:10.1051/ps/2017017.

[6] D. Commenges, A. Sayyareh, L. Letenneur, J. Guedj and A. Bar-Hen, Estimating a difference of Kullback–Leibler risks using a normalized difference of AIC, Ann. Appl. Stat. 2 (2008), no. 3, 1123–1142. doi:10.1214/08-AOAS176.

[7] B. Efron, Bootstrap methods: Another look at the jackknife, Ann. Statist. 7 (1979), no. 1, 1–26. doi:10.1214/aos/1176344552.

[8] D. P. Gaver and P. A. W. Lewis, First-order autoregressive gamma sequences and point processes, Adv. Appl. Probab. 12 (1980), no. 3, 727–745. doi:10.2307/1426429.

[9] E. Hwang and D. W. Shin, New bootstrap method for autoregressive models, Comm. Statist. Appl. Methods 20 (2013), no. 1, 85–96. doi:10.5351/CSAM.2013.20.1.085.

[10] S. Kaur and M. Rakshit, Gaussian and non-Gaussian autoregressive time series models with rainfall data, Int. J. Eng. Adv. Technol. 9 (2019), 2249–8958. doi:10.35940/ijeat.A1994.109119.

[11] K. Knight, Mathematical Statistics, Chapman & Hall/CRC Texts Stat. Sci. Ser., Chapman & Hall/CRC, Boca Raton, 2000. doi:10.1201/9781584888567.

[12] S. Kullback and R. A. Leibler, On information and sufficiency, Ann. Math. Statistics 22 (1951), 79–86. doi:10.1214/aoms/1177729694.

[13] H. R. Künsch, The jackknife and the bootstrap for general stationary observations, Ann. Statist. 17 (1989), no. 3, 1217–1241. doi:10.1214/aos/1176347265.

[14] J. Li and J. Lee, Improved autoregressive forecasts in the presence of non-normal errors, J. Stat. Comput. Simul. 85 (2015), no. 14, 2936–2952. doi:10.1080/00949655.2014.945930.

[15] R. Y. Liu and K. Singh, Moving blocks jackknife and bootstrap capture weak dependence, Exploring the Limits of Bootstrap, Wiley Ser. Probab. Math. Statist., Wiley, New York (1992), 225–248.

[16] D. N. Politis, The impact of bootstrap methods on time series analysis, Statist. Sci. 18 (2003), no. 2, 219–230. doi:10.1214/ss/1063994977.

[17] D. N. Politis and J. P. Romano, A general theory for large sample confidence regions based on subsamples under minimal assumptions, Technical Report 399, Department of Statistics, Stanford University, 1992.

[18] D. N. Politis and J. P. Romano, Large sample confidence regions based on subsamples under minimal assumptions, Ann. Statist. 22 (1994), no. 4, 2031–2050. doi:10.1214/aos/1176325770.

[19] R. C. Rathnayake and D. J. Olive, Bootstrapping some GLM and survival regression variable selection estimators, Comm. Statist. Theory Methods 52 (2023), no. 8, 2625–2645. doi:10.1080/03610926.2021.1955389.

[20] Q. H. Vuong, Likelihood ratio tests for model selection and non-nested hypotheses, Econometrica 57 (1989), no. 2, 307–333. doi:10.2307/1912557.

[21] S. Zamani Mehreyan and A. Sayyareh, Separated hypotheses testing for autoregressive models with non-negative residuals, J. Stat. Comput. Simul. 87 (2017), no. 4, 689–711. doi:10.1080/00949655.2016.1222613.

Received: 2022-06-30
Revised: 2023-05-12
Accepted: 2023-05-19
Published Online: 2023-07-04
Published in Print: 2023-09-01

© 2023 Walter de Gruyter GmbH, Berlin/Boston
