
Robust estimation for the one-parameter exponential family integer-valued GARCH(1,1) models based on a modified Tukey’s biweight function

Original paper · Computational Statistics

Abstract

In this paper, we study a robust estimation method for observation-driven integer-valued time series models whose conditional distribution belongs to the one-parameter exponential family. The maximum likelihood estimator (MLE) is commonly used to estimate the parameters, but it is highly affected by outliers. As a robust alternative, we adopt a Mallows’ quasi-likelihood estimator based on a modified Tukey’s biweight function and establish its existence, uniqueness, consistency and asymptotic normality under some regularity conditions. Simulation results illustrate that the new estimator outperforms the MLE in the presence of outliers. An application to two real data sets is presented, together with a comparison with other existing robust estimators.
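The following is a minimal illustrative sketch, not the paper’s exact procedure: it shows the general form of a robust estimating function built from a classical (unmodified) Tukey biweight \(\psi _c\) applied to Pearson residuals, specialised to the Poisson INGARCH(1,1) case with Mallows weights \(w_t\equiv 1\); the Fisher-consistency correction \(E(\psi _c(r_t)|{\mathcal {F}}_{t-1})\) is approximated by truncating the Poisson distribution. All names and the tuning constant are illustrative only.

```python
import numpy as np
from scipy.stats import poisson

def tukey_biweight(u, c=4.0):
    """Classical Tukey biweight: psi_c(u) = u * (1 - (u/c)^2)^2 for |u| <= c, else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= c, u * (1.0 - (u / c) ** 2) ** 2, 0.0)

def robust_score(x, theta, c=4.0, kmax=200):
    """Robust estimating function S_n(theta) for a Poisson INGARCH(1,1) model (sketch)."""
    a0, a1, b1 = theta
    lam = a0 / (1.0 - b1)            # one common choice of initial conditional mean
    dlam = np.zeros(3)               # d lambda_t / d theta = (d/d a0, d/d a1, d/d b1)
    S = np.zeros(3)
    k = np.arange(kmax)              # support used to approximate E(psi(r_t) | F_{t-1})
    for t in range(1, len(x)):
        dlam = np.array([1.0, x[t - 1], lam]) + b1 * dlam   # derivative recursion
        lam = a0 + a1 * x[t - 1] + b1 * lam                 # conditional-mean recursion
        sd = np.sqrt(lam)                                   # sqrt(B'(eta_t)) = sqrt(lambda_t) for Poisson
        r = (x[t] - lam) / sd                               # Pearson residual
        corr = np.sum(tukey_biweight((k - lam) / sd, c) * poisson.pmf(k, lam))
        S += (tukey_biweight(r, c) - corr) * dlam / sd      # psi(r_t) minus its conditional mean
    return S
```

An estimate would then be obtained by solving \(S_n(\theta )=0\) numerically (for example with scipy.optimize.root), started from a non-robust estimate such as the Poisson quasi-MLE; the paper’s estimator uses the modified biweight function it proposes rather than the classical one shown here.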



References

  • Aeberhard WH, Cantoni E, Heritier S (2014) Robust inference in the negative binomial regression model with an application to falls data. Biometrics 70:920–931

  • Ahmad A, Francq C (2016) Poisson QMLE of count time series models. J Time Ser Anal 37:291–314

  • Bianco AM, Boente G, Rodrigues IM (2013) Resistant estimators in Poisson and Gamma models with missing responses and an application to outlier detection. J Multivar Anal 114:209–226

  • Cantoni E, Ronchetti EM (2001) Robust inference for generalized linear models. J Am Stat Assoc 96:1022–1030

  • Chen CWS, Lee S (2017) Bayesian causality test for integer-valued time series models with applications to climate and crime data. J R Stat Soc Ser C 66:797–814

  • Chen CWS, Lee S, Khamthong K (2021) Bayesian inference of nonlinear hysteretic integer-valued GARCH models for disease counts. Comput Stat 36:261–281

  • Chen H, Li Q, Zhu F (2020) Two classes of dynamic binomial integer-valued ARCH models. Braz J Probab Stat 34:685–711

  • Chen H, Li Q, Zhu F (2022) A new class of integer-valued GARCH models for time series of bounded counts with extra-binomial variation. AStA Adv Stat Anal 106:243–270

  • Chow YS (1967) On a strong law of large numbers for martingales. Ann Math Stat 38:610

  • Cui Y, Li Q, Zhu F (2021) Modeling \({\mathbb{Z} }\)-valued time series based on new versions of the Skellam INGARCH model. Braz J Probab Stat 35:293–314

  • Cui Y, Zheng Q (2017) Conditional maximum likelihood estimation for a class of observation-driven time series models for count data. Stat Probab Lett 123:193–201

  • Davies L (1992) The asymptotics of Rousseeuw’s minimum volume ellipsoid estimator. Ann Stat 20:1828–1843

  • Davis RA, Liu H (2016) Theory and inference for a class of observation-driven models with application to time series of counts. Stat Sin 26:1673–1707

  • Davis RA, Fokianos K, Holan SH, Joe H, Livsey J, Lund R, Pipiras V, Ravishanker N (2021) Count time series: a methodological review. J Am Stat Assoc 116:1533–1547

  • Ferland R, Latour A, Oraichi D (2006) Integer-valued GARCH process. J Time Ser Anal 27:923–942

  • Fokianos K, Fried R (2010) Interventions in INGARCH processes. J Time Ser Anal 31:210–225

  • Fokianos K, Fried R (2012) Interventions in log-linear Poisson autoregression. Stat Model 12:299–322

  • Fokianos K, Tjøstheim D (2011) Log-linear Poisson autoregression. J Multivar Anal 102:563–578

  • Jensen ST, Rahbek A (2004) Asymptotic inference for nonstationary GARCH. Economet Theor 20:1203–1226

  • Jensen ST, Rahbek A (2007) On the law of large number for (geometrically) ergodic Markov chains. Economet Theor 23:761–766

  • Kang J, Lee S (2014) Minimum density power divergence estimator for Poisson autoregressive models. Comput Stat Data Anal 80:44–56

  • Kim B, Lee S (2020) Robust estimation for general integer-valued time series models. Ann Inst Stat Math 72:1371–1396

  • Kitromilidou S, Fokianos K (2016) Mallows’ quasi-likelihood estimation for log-linear Poisson autoregressions. Stat Infer Stoch Process 19:337–361

  • Lee Y, Lee S (2019) CUSUM test for general nonlinear integer-valued GARCH models: comparison study. Ann Inst Stat Math 71:1033–1057

  • Li Q, Chen H, Zhu F (2021) Robust estimation for Poisson integer-valued GARCH models using a new hybrid loss. J Syst Sci Complex 34:1578–1596

  • Li Q, Lian H, Zhu F (2016) Robust closed-form estimators for the integer-valued GARCH(1,1) model. Comput Stat Data Anal 101:209–225

  • Liu M, Li Q, Zhu F (2019) Threshold negative binomial autoregressive model. Statistics 53:1–25

  • Liu M, Li Q, Zhu F (2020) Self-excited hysteretic negative binomial autoregression. AStA Adv Stat Anal 104:385–415

  • Liu M, Zhu F, Zhu K (2022) Modeling normalcy-dominant ordinal time series: an application to air quality level. J Time Ser Anal 43:460–478

  • Pingal AC, Chen CWS (2022) Bayesian modelling of integer-valued transfer function models. Stat Model. https://doi.org/10.1177/1471082X221075477

  • Rousseeuw PJ, Driessen KV (1999) A fast algorithm for the minimum covariance determinant estimator. Technometrics 41:212–223

  • Taniguchi M, Kakizawa Y (2000) Asymptotic Theory of Statistical Inference for Time Series. Springer, New York

  • Toma A, Broniatowski M (2011) Dual divergence estimators and tests: robustness results. J Multivar Anal 102:20–36

  • Weiß CH, Zhu F, Hoshiyar A (2022) Softplus INGARCH models. Stat Sin 32:1099–1120

  • Xiong L, Zhu F (2019) Robust quasi-likelihood estimation for the negative binomial integer-valued GARCH(1,1) model with an application to transaction counts. J Stat Plan Inference 203:178–198

  • Xiong L, Zhu F (2022) Minimum density power divergence estimator for negative binomial integer-valued GARCH models. Commun Math Stat 10:233–261

  • Xu Y, Zhu F (2022) A new GJR-GARCH model for \({\mathbb{Z} }\)-valued time series. J Time Ser Anal 43:490–500

  • Zhu F (2011) A negative binomial integer-valued GARCH model. J Time Ser Anal 32:54–67

Acknowledgements

We are very grateful to the Editor, the Associate Editor and the anonymous referees for providing several constructive and helpful comments, which led to a significant improvement of the paper. Zhu’s work is supported by the National Natural Science Foundation of China (Nos. 12271206, 11871027, 11731015) and the Natural Science Foundation of Jilin Province (No. 20210101143JC). Xiong’s work is supported by the Research Start-up Fund of Anhui University and the Natural Science Foundation of Anhui University under Grant No. KJ2021A0047.

Author information

Corresponding author

Correspondence to Fukang Zhu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

Partial derivatives. For simplicity, in the proofs below we denote \(\psi _c(u)\) by \(\psi \). We now give the first and second derivatives of each component of \(s_t(\theta )\). From Lemma 3, we have

$$\begin{aligned} s_{ti}=\left[ \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \dfrac{\partial \lambda _t}{\partial \theta _i}-E\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \dfrac{\partial \lambda _t}{\partial \theta _i}\bigg |{\mathcal {F}}_{t-1}\right) \right] , \end{aligned}$$

furthermore, from (2.1) and (2.2),

$$\begin{aligned} \frac{\partial \lambda _t}{\partial \alpha _0}=\frac{1-\beta _1^t}{1-\beta _1},~ \frac{\partial \lambda _t}{\partial \alpha _1}=\sum \limits _{d=0}^{t-1}\beta _1^dX_{t-1-d}, ~\hbox {and}~ \frac{\partial \lambda _t}{\partial \beta _1}=\sum \limits _{d=0}^{t-1}\beta _1^d\lambda _{t-1-d}. \end{aligned}$$
(B.1)
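These expressions follow by differentiating the conditional-mean recursion of the INGARCH(1,1) model, \(\lambda _t=\alpha _0+\alpha _1X_{t-1}+\beta _1\lambda _{t-1}\) (with the initial value \(\lambda _0\) treated as fixed, so that its derivatives vanish), and iterating; for example,

$$\begin{aligned} \frac{\partial \lambda _t}{\partial \beta _1}=\lambda _{t-1}+\beta _1\frac{\partial \lambda _{t-1}}{\partial \beta _1}=\sum \limits _{d=0}^{t-1}\beta _1^d\lambda _{t-1-d}, \qquad \frac{\partial \lambda _t}{\partial \alpha _0}=1+\beta _1\frac{\partial \lambda _{t-1}}{\partial \alpha _0}=\sum \limits _{d=0}^{t-1}\beta _1^d=\frac{1-\beta _1^t}{1-\beta _1}. \end{aligned}$$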

The first and second derivatives of \(s_{ti}(\theta )\), namely \(\dfrac{\partial s_{ti}(\theta )}{\partial \theta _j}\) and \(\dfrac{\partial ^2 s_{ti}(\theta )}{\partial \theta _j\partial \theta _k}\), are given by

$$\begin{aligned} \frac{\partial s_{ti}(\theta )}{\partial \theta _j}&\,=\,\frac{\partial }{\partial \theta _j}\left[ \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta _i}\right] \\&\quad -E\left[ \frac{\partial }{\partial \theta _j}\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta _i}\right) \bigg |{\mathcal {F}}_{t-1}\right] ,~j=1,2,3, \\ \frac{\partial ^2 s_{ti}(\theta )}{\partial \theta _j\partial \theta _k}&=\frac{\partial ^2}{\partial \theta _j\partial \theta _k}\left[ \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta _i}\right] \\&\quad -E\left[ \frac{\partial ^2}{\partial \theta _j\partial \theta _k}\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta _i}\right) \bigg |{\mathcal {F}}_{t-1}\right] ,~k=1,2,3, \end{aligned}$$

where

$$\begin{aligned} \frac{\partial }{\partial \theta _j}\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta _i}\right) \,=\,&(m_{1t}+m_{2t})\frac{\partial \lambda _t}{\partial \theta _j}\frac{\partial \lambda _t}{\partial \theta _i} +m_{3t}\frac{\partial ^2\lambda _t}{\partial \theta _i\partial \theta _j}\\ \frac{\partial ^2}{\partial \theta _j\partial \theta _k}\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}}\frac{\partial \lambda _t}{\partial \theta _i}\right) =&(m_{4t}+m_{5t}+m_{6t})\frac{\partial \lambda _t}{\partial \theta _k} \frac{\partial \lambda _t}{\partial \theta _j}\frac{\partial \lambda _t}{\partial \theta _i}\\&+(m_{1t}+m_{2t})\left( \frac{\partial ^2\lambda _t}{\partial \theta _i\partial \theta _k}\frac{\partial \lambda _t}{\partial \theta _j} +\frac{\partial ^2\lambda _t}{\partial \theta _j\partial \theta _k}\frac{\partial \lambda _t}{\partial \theta _i} +\frac{\partial ^2\lambda _t}{\partial \theta _i\partial \theta _j}\frac{\partial \lambda _t}{\partial \theta _k}\right) \\&+m_{3t}\frac{\partial ^3\lambda _t}{\partial \theta _i\partial \theta _j\partial \theta _k}, \end{aligned}$$

with

$$\begin{aligned}&m_{1t}\,=\,-\psi ^{'}(r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \left( \frac{\frac{1}{2}(X_t-\lambda _t)B^{''}(\eta _t)}{{B^{'}(\eta _t)}^\frac{5}{2}} +\frac{1}{\sqrt{B^{'}(\eta _t)}}\right) ,\nonumber \\&m_{2t}=-\psi (r_t)w_t\frac{\frac{1}{2}B^{''}(\eta _t)}{{B^{'}(\eta _t)}^2\sqrt{B^{'}(\eta _t)}},\nonumber \\&m_{3t}=\psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}},\nonumber \\&m_{4t}=\psi ^{''}(r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} {\left( \frac{\frac{1}{2}(X_t-\lambda _t)B^{''}(\eta _t)}{{B^{'}(\eta _t)}^\frac{5}{2}} +\frac{1}{\sqrt{B^{'}(\eta _t)}}\right) }^2,\nonumber \\&m_{5t}=-\psi ^{'}(r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \left( \frac{\frac{1}{2}(X_t-\lambda _t)B^{'''}(\eta _t)}{{B^{'}(\eta _t)}^\frac{7}{2}} -\frac{\frac{7}{4}(X_t-\lambda _t){B^{''}(\eta _t)}^2}{{B^{'}(\eta _t)}^\frac{9}{2}} -\frac{\frac{3}{2}B^{''}(\eta _t)}{{B^{'}(\eta _t)}^\frac{5}{2}}\right) ,\nonumber \\&m_{6t}=\psi (r_t)w_t \left( \frac{\frac{5}{4}{B^{''}(\eta _t)}^2\sqrt{B^{'}(\eta _t)}}{{B^{'}(\eta _t)}^5} -\frac{\frac{1}{2}B^{'''}(\eta _t)\sqrt{B^{'}(\eta _t)}}{{B^{'}(\eta _t)}^4}\right) ,\nonumber \\&\frac{\partial ^2\lambda _t}{\partial \alpha _0^2}=0,~~~ \frac{\partial ^2\lambda _t}{\partial \alpha _1^2}=0,~~~ \frac{\partial ^2\lambda _t}{\partial \alpha _0\partial \alpha _1}=0,~~~ \frac{\partial ^2\lambda _t}{\partial \alpha _0\partial \beta _1}=\frac{\partial \lambda _{t-1}}{\partial \alpha _0}+\beta _1\frac{\partial ^2\lambda _{t-1}}{\partial \alpha _0\partial \beta _1},\end{aligned}$$
(B.2)
$$\begin{aligned}&\frac{\partial ^2\lambda _t}{\partial \alpha _1\partial \beta _1}\,=\,\frac{\partial \lambda _{t-1}}{\partial \alpha _1}+\beta _1 \frac{\partial ^2\lambda _{t-1}}{\partial \alpha _1\partial \beta _1},~~~ \frac{\partial ^2\lambda _t}{\partial \beta _1^2}=\frac{\partial \lambda _{t-1}}{\partial \beta _1}+\beta _1\frac{\partial ^2\lambda _{t-1}}{\partial \beta _1^2},\end{aligned}$$
(B.3)
$$\begin{aligned}&\frac{\partial ^3\lambda _t}{\partial \alpha _0\partial \beta _1^2} \,=\,2\frac{\partial ^2\lambda _{t-1}}{\partial \alpha _0\partial \beta _1} +\beta _1\frac{\partial ^3\lambda _{t-1}}{\partial \alpha _0\partial \beta _1^2},~~~ \frac{\partial ^3\lambda _t}{\partial \alpha _1\partial \beta _1^2} =2\frac{\partial ^2\lambda _{t-1}}{\partial \alpha _1\partial \beta _1} +\beta _1\frac{\partial ^3\lambda _{t-1}}{\partial \alpha _1\partial \beta _1^2},\end{aligned}$$
(B.4)
$$\begin{aligned}&\frac{\partial ^3\lambda _t}{\partial \beta _1^3} =3\frac{\partial ^2\lambda _{t-1}}{\partial \beta _1^2} +\beta _1\frac{\partial ^3\lambda _{t-1}}{\partial \beta _1^3}. \end{aligned}$$
(B.5)
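In practice, \(\lambda _t\) and these derivatives are computed jointly by a single forward recursion. The following is a minimal sketch (not taken from the paper’s code), assuming the INGARCH(1,1) recursion \(\lambda _t=\alpha _0+\alpha _1X_{t-1}+\beta _1\lambda _{t-1}\) with a fixed initial value \(\lambda _0\) and zero initial derivatives; all names are illustrative only.

```python
import numpy as np

def lambda_beta1_derivs(x, a0, a1, b1, lam0):
    """Recursively compute lambda_t and its first three derivatives w.r.t. beta_1."""
    lam, d1, d2, d3 = lam0, 0.0, 0.0, 0.0   # lambda_0 and its (zero) derivatives
    out = []
    for xprev in x[:-1]:                    # x[t-1] drives lambda_t, t = 1, ..., n-1
        d3 = 3.0 * d2 + b1 * d3             # third-derivative recursion, cf. (B.5)
        d2 = 2.0 * d1 + b1 * d2             # second derivative, cf. (B.3)
        d1 = lam + b1 * d1                  # first derivative, cf. (B.1)
        lam = a0 + a1 * xprev + b1 * lam    # conditional-mean recursion
        out.append((lam, d1, d2, d3))
    return np.asarray(out)
```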

Next, without loss of generality, we consider only derivatives with respect to \(\beta _1\). Under assumptions (A5)–(A7) and using (B.1)–(B.5), we have

$$\begin{aligned}&|m_{1t}|\le \psi ^{'}(r_t)w_t \left( \frac{\frac{1}{2}(X_t+\lambda _t)K_2}{K_1} +\frac{1}{K_1}\right) ,\\&|m_{2t}|\le \frac{1}{2}\psi (r_t)w_t\frac{K_2}{\sqrt{K_1}},\\&|m_{3t}|\le \psi (r_t)w_t\frac{1}{\sqrt{K_1}},\\&|m_{4t}|\le \psi ^{''}(r_t)w_t\frac{1}{\sqrt{K_1}} {\left( \frac{\frac{1}{2}(X_t+\lambda _t)K_2}{\sqrt{K_1}} +\frac{1}{\sqrt{K_1}}\right) }^2,\nonumber \\&|m_{5t}|\le \psi ^{'}(r_t)w_t \left( \frac{\frac{1}{2}(X_t+\lambda _t)K_3}{K_1} +\frac{\frac{7}{4}(X_t+\lambda _t)K_2^2}{K_1} +\frac{\frac{3}{2}K_2}{K_1}\right) ,\\&|m_{6t}|\le \psi (r_t)w_t \left( \frac{\frac{5}{4}K_2^2}{\sqrt{K_1}} +\frac{\frac{1}{2}K_3}{\sqrt{K_1}}\right) ,\\&\lambda _t\le \mu _{0t}:=\alpha _{1u}\sum \limits _{d=1}^{t-1}\beta _{1u}^dX_{t-1-d}+c_0,~\hbox {where}~c_0=\frac{\alpha _{0u}}{1-\beta _{1u}}+\lambda _0,\\&\frac{\partial \lambda _t}{\partial \beta _1}\le \mu _{1t} :=\alpha _{1u}\sum \limits _{d=1}^{t-1}d\beta _{1u}^dX_{t-1-d}+c_1,~\hbox {where}~c_1=\frac{c_0}{1-\beta _{1u}},\\&\frac{\partial ^2\lambda _t}{\partial \beta _1^2}\le \mu _{2t} :=\alpha _{1u}\sum \limits _{d=1}^{t-2}d(d+1)\beta _{1u}^{d-1}X_{t-2-d}+c_2,~\hbox {where}~c_2=\frac{2c_0}{(1-\beta _{1u})^2},\\&\frac{\partial ^3\lambda _t}{\partial \beta _1^3}\le \mu _{3t} :=\alpha _{1u}\sum \limits _{d=1}^{t-2}d(d+1)(d+2)\beta _{1u}^{d-1}X_{t-3-d}+c_3,~\hbox {where}~c_3=\frac{6c_0}{(1-\beta _{1u})^3}, \end{aligned}$$
$$\begin{aligned} \Bigg |\frac{\partial s_{ti}(\theta )}{\partial \beta _1}\Bigg |&\le \left( |m_{1t}-E(m_{1t}|{\mathcal {F}}_{t-1})| +|m_{2t}-E(m_{2t}|{\mathcal {F}}_{t-1})|\right) \mu _{1t}^2 \nonumber \\&\quad +|m_{3t}-E(m_{3t}|{\mathcal {F}}_{t-1})|\mu _{2t}\equiv {\tilde{m}}_t,\end{aligned}$$
(B.6)
$$\begin{aligned} \Bigg |\frac{\partial ^2 s_{ti}(\theta )}{\partial \beta _1^2}\Bigg |&\le \left( |m_{4t}-E(m_{4t}|{\mathcal {F}}_{t-1})| +|m_{5t}-E(m_{5t}|{\mathcal {F}}_{t-1})|+|m_{6t}-E(m_{6t}|{\mathcal {F}}_{t-1})|\right) \mu _{1t}^3\nonumber \\&\quad +3\left( |m_{1t}-E(m_{1t}|{\mathcal {F}}_{t-1})| +|m_{2t}-E(m_{2t}|{\mathcal {F}}_{t-1})|\right) \mu _{1t}\mu _{2t}\nonumber \\&\quad +|m_{3t}-E(m_{3t}|{\mathcal {F}}_{t-1})|\mu _{3t}\equiv m_t, \end{aligned}$$
(B.7)

\(E{\tilde{m}}_t<\infty \) and \(Em_t<\infty \) follow by considering each term in \({\tilde{m}}_t\) and \(m_t\) separately. For instance, \(E(m_{1t}-E(m_{1t}|{\mathcal {F}}_{t-1}))^2\le Em_{1t}^2<\infty \) and \(E\mu _{1t}^4<\infty \), since \(\psi \) and its derivatives are bounded and \(EX_t^6\) is finite; thus \(E|m_{1t}-E(m_{1t}|{\mathcal {F}}_{t-1})|\mu _{1t}^2<\infty \). In addition, the highest power of \(X_t\) appearing in \(m_t\) is 6 and \(EX_t^6\) is finite, so \(Em_t<\infty \).
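Spelled out, the first of these claims is an application of the Cauchy–Schwarz inequality:

$$\begin{aligned} E|m_{1t}-E(m_{1t}|{\mathcal {F}}_{t-1})|\mu _{1t}^2\le \left\{ E\left( m_{1t}-E(m_{1t}|{\mathcal {F}}_{t-1})\right) ^2\right\} ^{1/2}\left( E\mu _{1t}^4\right) ^{1/2}<\infty . \end{aligned}$$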

Proof of Lemma 1

Part (1). From Eq. (2.3), \(s_t(\theta )\) is a martingale difference sequence, since \(E(s_t(\theta )|{\mathcal {F}}_{t-1})=0\), and \(S_n(\theta )\) is a martingale with \(E(S_n(\theta )|{\mathcal {F}}_{n-1}) =S_{n-1}(\theta )\). It therefore suffices to show that \(s_t(\theta )\) is square integrable, that is, \(E\Vert s_t(\theta )\Vert ^2<\infty \). Then, by the strong law of large numbers for martingales (Chow 1967), we have

$$\begin{aligned} \frac{1}{n}S_n(\theta )\xrightarrow {a.s.}0,~n\rightarrow \infty , \end{aligned}$$

To verify the square integrability, note that

$$\begin{aligned} \Vert s_t(\theta )\Vert ^2&=\left\| \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta }\right\| ^2+\left\| E\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta }\bigg |{\mathcal {F}}_{t-1}\right) \right\| ^2\nonumber \\&~~~-2\psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}}E\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \bigg |{\mathcal {F}}_{t-1}\right) \left\| \frac{\partial \lambda _t}{\partial \theta }\right\| ^2\nonumber \\&\leqslant 2\left( \left\| \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta }\right\| ^2+\left\| E\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta }\bigg |{\mathcal {F}}_{t-1}\right) \right\| ^2\right) , \end{aligned}$$

then

$$\begin{aligned} E\Vert s_t(\theta )\Vert ^2&\le C_1^2E\left\| \frac{\partial \lambda _t}{\partial \theta }\right\| ^2, \end{aligned}$$

where \(C_1\) is a constant depending on the bounds of \(\psi \) and of \(\frac{1}{\sqrt{B^{'}(\eta _t)}}\). Since \(EX_t^2<\infty \) and \(E\lambda _t^2<\infty \), it follows from (B.1) that \(E\left( \dfrac{\partial \lambda _t}{\partial \theta _i}\right) ^2,~i=1,2,3,\) are finite, and hence \(E\Vert s_t(\theta )\Vert ^2<\infty \).
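For concreteness, one admissible choice (under the assumptions above, and assuming the Mallows weights satisfy \(0<w_t\le 1\)) is \(C_1=2\sup _u|\psi (u)|/\sqrt{K_1}\), since

$$\begin{aligned} \left| \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}}\right| \le \frac{\sup _u|\psi (u)|}{\sqrt{K_1}}=\frac{C_1}{2}, \end{aligned}$$

the same bound holds for the conditional expectation term, and \(\dfrac{\partial \lambda _t}{\partial \theta }\) is \({\mathcal {F}}_{t-1}\)-measurable.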

Part (2).

$$\begin{aligned} \Vert s_t(\theta )\Vert ^4&=(\Vert s_t(\theta )\Vert ^2)^2 \le 8 \\&\quad \times \left( \left\| \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta }\right\| ^4+\left\| E\left( \psi (r_t)w_t\frac{1}{\sqrt{B^{'}(\eta _t)}} \frac{\partial \lambda _t}{\partial \theta }\bigg |{\mathcal {F}}_{t-1}\right) \right\| ^4\right) ,\\ E\Vert s_t(\theta )\Vert ^4&\le 16C_1^4E\left\| \frac{\partial \lambda _t}{\partial \theta }\right\| ^4. \end{aligned}$$

Since \(\left\| \dfrac{\partial \lambda _t}{\partial \theta }\right\| ^4\) is a function of \(X_t\) and \(\lambda _t\), and \(EX_t^6\) is finite, we have \(E\Vert s_t(\theta )\Vert ^4<\infty \). Note that

$$\begin{aligned} \frac{1}{n}\sum \limits _{t=1}^nE\left[ \Vert s_t(\theta )\Vert ^2I(\Vert s_t(\theta )\Vert >\sqrt{n}\varepsilon )|{\mathcal {F}}_{t-1}\right] \le \frac{1}{n^2\varepsilon ^2}\sum \limits _{t=1}^nE\left[ \Vert s_t(\theta )\Vert ^4|{\mathcal {F}}_{t-1}\right] \rightarrow 0, \end{aligned}$$

this result gives the conditional Lindeberg condition.
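Here the inequality uses the elementary bound

$$\begin{aligned} \Vert s_t(\theta )\Vert ^2I(\Vert s_t(\theta )\Vert >\sqrt{n}\varepsilon )\le \frac{\Vert s_t(\theta )\Vert ^4}{n\varepsilon ^2}, \end{aligned}$$

and the convergence to zero then follows from \(E\Vert s_t(\theta )\Vert ^4<\infty \). In addition,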

$$\begin{aligned} \frac{1}{n}\sum \limits _{t=1}^n\hbox {Var}(s_t(\theta )|{\mathcal {F}}_{t-1})\xrightarrow {P}E\left\{ E\left[ s_t(\theta )(s_t(\theta ))^\top \bigg |{\mathcal {F}}_{t-1}\right] \right\} =W. \end{aligned}$$

Applying the CLT for martingales (Taniguchi and Kakizawa 2000, Theorem 1.3.13), we obtain

$$\begin{aligned} \frac{1}{\sqrt{n}}S_n(\theta )=\frac{1}{\sqrt{n}}\sum \limits _{t=1}^ns_t(\theta )\xrightarrow {d}N(0,W). \end{aligned}$$

\(\square \)

Proof of Lemma 2

Because \(S_n(\theta )\) is an unbiased estimating function, we have

$$\begin{aligned} -E\left[ \frac{\partial }{\partial {\theta }^\top }s_t(\theta )\bigg |{\mathcal {F}}_{t-1}\right] =E\left[ s_t(\theta )\frac{\partial l_t(\theta )}{\partial {\theta }^\top }\bigg |{\mathcal {F}}_{t-1}\right] , \end{aligned}$$

To see this, note that

$$\begin{aligned} 0=\frac{\partial E(s_t(\theta )|{\mathcal {F}}_{t-1})}{\partial {\theta }^\top }&=\int \frac{\partial (s_t(\theta )\exp \{\eta _t X_t-A(\eta _t)\}h(X_t))}{\partial {\theta }^\top }\,dx\\&=\int \frac{\partial s_t(\theta )}{\partial {\theta }^\top }\exp \{\eta _t X_t-A(\eta _t)\}h(X_t)\,dx +\int s_t(\theta )\frac{\partial (\exp \{\eta _t X_t-A(\eta _t)\}h(X_t))}{\partial {\theta }^\top }\,dx\\&=E\left[ \frac{\partial }{\partial {\theta }^\top }s_t(\theta )\bigg |{\mathcal {F}}_{t-1}\right] +\int s_t(\theta )\frac{\partial l_t(\theta )}{\partial {\theta }^\top }\exp \{\eta _t X_t-A(\eta _t)\}h(X_t)\,dx\\&=E\left[ \frac{\partial }{\partial {\theta }^\top }s_t(\theta )\bigg |{\mathcal {F}}_{t-1}\right] +E\left[ s_t(\theta )\frac{\partial l_t(\theta )}{\partial {\theta }^\top }\bigg |{\mathcal {F}}_{t-1}\right] . \end{aligned}$$

Let

$$\begin{aligned} V_t=E\left[ s_t(\theta )\frac{\partial l_t(\theta )}{\partial {\theta }^\top }\bigg |{\mathcal {F}}_{t-1}\right] = M_{11}, \end{aligned}$$
(B.8)

where

$$\begin{aligned} M_{11}=\frac{\partial \lambda _t}{\partial \theta }E\left[ \psi (r_t)(X_t-\lambda _t)|{\mathcal {F}}_{t-1}\right] \left( \frac{1}{\sqrt{B^{'}(\eta _t)}}\right) ^3w_t\frac{\partial \lambda _t}{\partial {\theta }^\top }. \end{aligned}$$

Proving that \(V\) is positive definite is equivalent to showing that \(V_t\) is positive definite, and \(V_t\) is indeed a \(3\times 3\) positive definite matrix: for any non-zero three-dimensional real vector \(z\), \(z^\top M_{11}z>0\) is equivalent to \(z^\top \dfrac{\partial \lambda _t}{\partial {\theta }} \dfrac{\partial \lambda _t}{\partial {\theta }^\top }z>0\), and if \(z^\top \dfrac{\partial \lambda _t}{\partial {\theta }}=0\), then (A8) implies \(z=0\). Thus, \(V_t\) is positive definite. Let \(H_n(\theta )=-\dfrac{1}{n}\dfrac{\partial }{\partial {\theta }^\top }S_n(\theta )\); by (B.6), \(E\left\| \dfrac{\partial }{\partial {\theta }^\top }s_t(\theta )\right\| <\infty \). Using the LLN of Jensen and Rahbek (2007), we obtain

$$\begin{aligned} H_n(\theta )=-\frac{1}{n}\dfrac{\partial }{\partial {\theta }^\top }S_n(\theta )=-\frac{1}{n}\sum \limits _{t=1}^n\dfrac{\partial }{\partial {\theta }^\top }s_t(\theta )\xrightarrow {a.s.} -E\left[ \dfrac{\partial }{\partial {\theta }^\top }s_t(\theta )\right] =V. \end{aligned}$$

\(\square \)

Proof of Lemma 3

From (B.7), we have

$$\begin{aligned} \max _{i,j,k=1,2,3}\sup \limits _{\theta \in \Theta }\left| \frac{1}{n}\sum \limits _{t=1}^n\frac{\partial ^2 s_{ti}}{\partial \theta _j\partial \theta _k}\right| \le M_n: =\frac{1}{n}\sum \limits _{t=1}^nm_t, \end{aligned}$$

with \(Em_t<\infty \); then \(M_n\xrightarrow {a.s.} M\) with \(M=Em_t\) by the LLN in Jensen and Rahbek (2007). Combining Lemmas 1–3, Theorem 1 holds. \(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xiong, L., Zhu, F. Robust estimation for the one-parameter exponential family integer-valued GARCH(1,1) models based on a modified Tukey’s biweight function. Comput Stat 39, 495–522 (2024). https://doi.org/10.1007/s00180-022-01293-6
