Empirical likelihood for varying-coefficient semiparametric mixed-effects errors-in-variables models with longitudinal data

Published in Statistical Methods & Applications

Abstract

In this paper, empirical likelihood inference for varying-coefficient semiparametric mixed-effects errors-in-variables models with longitudinal data is investigated. We construct the empirical log-likelihood ratio function for the fixed-effects parameters and the mean parameters of the random effects. The empirical log-likelihood ratio at the true parameters is proven to be asymptotically \(\chi ^2_{q+r}\), where \(q\) and \(r\) are the dimensions of the fixed and random effects, respectively, and the corresponding confidence regions are then constructed. We also obtain the maximum empirical likelihood estimator of the parameters of interest and prove that it is asymptotically normal under some suitable conditions. A simulation study and a real data application are undertaken to assess the finite sample performance of the proposed method.


References

  • Bai Y, Fung WK, Zhu ZY (2010) Weighted empirical likelihood for generalized linear models with longitudinal data. J Stat Plan Inference 140:3446–3456
  • Chen QH, Zhong PS, Cui HJ (2009) Empirical likelihood for mixed-effects error-in-variables model. Acta Mathematicae Applicatae Sinica (English Series) 25:561–578
  • Cui HJ, Chen SX (2003) Empirical likelihood confidence region for parameter in the errors-in-variables models. J Multivar Anal 84:101–115
  • Cui HJ, Ng KW (2004) Estimation in mixed effects model with errors in variables. J Multivar Anal 91:53–73
  • Fan JQ, Huang T (2005) Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli 11:1031–1057
  • Fan JQ, Li R (2004) New estimation and model selection procedures for semiparametric modeling in longitudinal data analysis. J Am Stat Assoc 99:710–723
  • Huang JZ, Wu CO, Zhou L (2002) Varying-coefficient models and basis function approximations for the analysis of repeated measurements. Biometrika 89:111–128
  • Huang JZ, Wu CO, Zhou L (2004) Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statistica Sinica 14:763–788
  • Huang ZS, Zhang RQ (2009) Empirical likelihood for nonparametric parts in semiparametric varying-coefficient partially linear models. Stat Probab Lett 79:1798–1808
  • Huang ZS, Zhou ZG, Jiang R, Qian WM, Zhang RQ (2010) Empirical likelihood based inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates. Stat Probab Lett 80:497–504
  • Kaslow RA, Ostrow DG, Detels R, Phair JP, Polk BF, Rinaldo CJ (1987) The multicenter AIDS cohort study: rationale, organization and selected characteristics of the participants. Am J Epidemiol 126:310–318
  • Kolaczyk ED (1994) Empirical likelihood for generalized linear models. Statistica Sinica 4:199–218
  • Laird NM, Ware JH (1982) Random effects models for longitudinal data. Biometrics 38:963–974
  • Li L, Greene T (2008) Varying coefficients model with measurement error. Biometrics 64:519–526
  • Li YS, Lin XH, Muller P (2010) Bayesian inference in semiparametric mixed models for longitudinal data. Biometrics 66:70–78
  • Li GR, Xue LG (2008) Empirical likelihood confidence region for the parameter in a partially linear errors-in-variables model. Commun Stat Theory Methods 37:1552–1564
  • Liang H (2000) Asymptotic normality of parametric part in partially linear models with measurement error in the nonparametric part. J Stat Plan Inference 86:51–62
  • Liang H, Härdle W, Carroll RJ (1999) Estimation in a semiparametric partially linear errors-in-variables model. Ann Stat 27:1519–1535
  • Liang KY, Zeger SL (1986) Longitudinal data analysis using generalized linear models. Biometrika 73:13–22
  • Lin DY, Ying Z (2001) Semiparametric and nonparametric regression analysis of longitudinal data (with discussion). J Am Stat Assoc 96:103–126
  • McCullagh P, Nelder JA (1989) Generalized linear models, 2nd edn. Chapman & Hall, London
  • Nelder JA, Wedderburn RWM (1972) Generalized linear models. J R Stat Soc Ser A 135:370–384
  • Owen AB (1990) Empirical likelihood ratio confidence regions. Ann Stat 18:90–120
  • Owen AB (2001) Empirical likelihood. Chapman & Hall, New York
  • Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22:300–325
  • Qin GY, Bai Y, Zhu ZY (2009) Robust empirical likelihood inference for longitudinal data. Stat Probab Lett 79:2101–2108
  • Ruppert D, Wand MP, Carroll RJ (2003) Semiparametric regression. Cambridge University Press, Cambridge
  • Vonesh EF, Chinchilli VM (1996) Linear and nonlinear models for the analysis of repeated measurements. Marcel Dekker, New York
  • Wang Y (1998) Mixed-effects smoothing spline ANOVA. J R Stat Soc Ser B 60:159–174
  • Wang XL, Li GR, Lin L (2011) Empirical likelihood inference for semi-parametric varying-coefficient partially linear EV models. Metrika 73:171–185
  • Wang SJ, Qian LF, Carroll RJ (2010) Generalized empirical likelihood methods for analyzing longitudinal data. Biometrika 97:79–93
  • Wei CH (2011) Estimation in varying-coefficient errors-in-variables models with missing response variables. Commun Stat Simul Comput 40:383–393
  • Wu H, Zhang JT (2006) Nonparametric regression methods for longitudinal data. Wiley, New Jersey
  • Xue LG, Zhu LX (2007a) Empirical likelihood for a varying coefficient model with longitudinal data. J Am Stat Assoc 102:642–654
  • Xue LG, Zhu LX (2007b) Empirical likelihood semiparametric regression analysis for longitudinal data. Biometrika 94:921–937
  • Xue LG, Zhu LX (2008) Empirical likelihood-based inference in a partially linear model for longitudinal data. Sci China Ser A 51:115–130
  • You JH, Chen GM, Zhou Y (2006) Block empirical likelihood for longitudinal partially linear regression models. Can J Stat 34:79–96
  • You JH, Chen GM (2006) Estimation of a semiparametric varying-coefficient partially linear errors-in-variables model. J Multivar Anal 97:324–341
  • Zeger SL, Diggle PJ (1994) Semiparametric models for longitudinal data with application to CD4 cell numbers in HIV seroconverters. Biometrics 50:689–699
  • Zhao PX, Xue LG (2009) Empirical likelihood inferences for semiparametric varying-coefficient partially linear error-in-variables models with longitudinal data. J Nonparametr Stat 21:907–923
  • Zhao PX, Xue LG (2010) Variable selection for semiparametric varying coefficient partially linear errors-in-variables models. J Multivar Anal 101:1872–1883
  • Zhang D, Lin X, Raz J, Sowers M (1998) Semiparametric stochastic mixed models for longitudinal data. J Am Stat Assoc 93:710–719
  • Zhong XP, Fung WK, Wei BC (2002) Estimation in linear models with random effects and errors-in-variables. Ann Inst Stat Math 54:595–606
  • Zhou Y, Liang H (2009) Statistical inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates. Ann Stat 37:427–458
  • Zhu LX, Cui HJ (2003) Semiparametric regression model with errors in variables. Scand J Stat 30:429–444


Acknowledgments

The authors are grateful to the Editor and two anonymous referees for their constructive comments which have greatly improved this paper. This work is partially supported by Anhui Provincial Natural Science Foundation (No. 11040606M04), Key Natural Science Foundation of Higher Education Institutions of Anhui Province of China (No. KJ2012A270), NSFC (No. 11171065), NSFJS (No. BK2011058), Youth Foundation for Humanities and Social Sciences Project from Ministry of Education of China (No. 11YJC790311), Postdoctoral Research Program of Jiangsu Province of China (No. 1202013C) and Scientific Research Starting Foundation for Talents of Tongling University (No. 2012tlxyrc05).

Author information

Correspondence to Xing-cai Zhou.

Appendix

To establish our main results, we impose the following assumptions. These assumptions are quite mild and easily satisfied.

  1. (a) The bandwidth satisfies \(h=h_0n^{-1/5}\) for some constant \(h_0>0\).

  2. (b) The matrices \(\Upsilon (t)\), \(\Phi (t)\) and \(\Psi (t)\) are twice continuously differentiable on \((0,1)\), and \(\Upsilon (t)\) is positive definite on \((0,1)\).

  3. (c) \(\{\alpha _l(u),l=1,\ldots ,p\}\) has continuous second derivatives on \((0,1)\).

  4. (d) The kernel \(K(\cdot )\) is a symmetric density function with compact support.

  5. (e) The intensity function \(f(t)\) of the process \(N(t)\) is bounded away from 0 and infinity on \([0,1]\), and is continuously differentiable on \((0,1)\).

  6. (f) There is an \(s>2\) such that \(\sup _{0\le t\le 1}E\Vert X(t)\Vert ^{2s}<\infty \), \(\sup _{0\le t\le 1}E\Vert W(t)\Vert ^{2s}<\infty \), \(\sup _{0\le t\le 1}E\Vert Z(t)\Vert ^{4s}<\infty \), \(\sup _{0\le t\le 1}E\Vert \mu (t)\Vert ^{2s}<\infty \), \(\sup _{0\le t\le 1}E\Vert \nu (t)\Vert ^{2s}<\infty \), \(E\Vert \gamma \Vert ^{4s}<\infty \), \(\sup _{0\le t\le 1}E\Vert \epsilon (t)\Vert ^{2s}<\infty \), \(i=1,\ldots ,n\), and there is a \(\delta <2-s^{-1}\) such that \(n^{2\delta -1}h\rightarrow \infty \).

  7. (g) \(\Gamma \) is a positive definite matrix, where \(\Gamma \) is defined in Theorem 2.

Let \(c_n=\left( \frac{\log (1/h)}{nh}\right) ^{1/2}+h^2\), \(\kappa _l=\int u^lK(u)du\), \(l=0,1,2\), and let \(A\otimes B\) denote the Kronecker product of matrices \(A\) and \(B\). To prove our main results, we first give several lemmas.
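As a concrete check of these constants, the moments \(\kappa_l\) can be evaluated numerically once a kernel satisfying assumption (d) is fixed. The Epanechnikov kernel below is only an illustrative choice, not one prescribed by the paper:

```python
import numpy as np

# Numerical check of the kernel moments kappa_l = int u^l K(u) du, l = 0, 1, 2,
# for the Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on [-1, 1] -- an assumed
# example of a symmetric density with compact support, as in assumption (d).
m = 200_000
u = (np.arange(m) + 0.5) * (2.0 / m) - 1.0   # midpoints of a grid on [-1, 1]
du = 2.0 / m
K = 0.75 * (1.0 - u**2)
kappa = [float(np.sum(u**l * K) * du) for l in range(3)]
# kappa_0 = 1 (K is a density), kappa_1 = 0 (symmetry), kappa_2 = 1/5
```

Any other symmetric compactly supported density gives \(\kappa_0=1\) and \(\kappa_1=0\); only \(\kappa_2\) depends on the particular kernel.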

Lemma 1

Suppose that assumptions (a)–(f) hold. Then, uniformly for \(t\in \mathcal I \),

$$\begin{aligned} B_t^T M_t B_t&= nf(t)\textit{diag}(1,\kappa _2)\otimes \Upsilon (t)(1+O_p(c_n)),\end{aligned}$$
(5.1)
$$\begin{aligned} B_t^T M_t W&= nf(t)(1,0)^T\otimes \Phi (t)(1+O_p(c_n)),\end{aligned}$$
(5.2)
$$\begin{aligned} B_t^T M_t Z&= nf(t)(1,0)^T\otimes \Psi (t)(1+O_p(c_n)),\end{aligned}$$
(5.3)
$$\begin{aligned} B_t^T M_t X_\alpha&= nf(t)(1,0)^T\otimes \Upsilon (t)\alpha (t)(1+O_p(c_n)). \end{aligned}$$
(5.4)

We omit the proof, which is similar to that of Lemma 2 in Fan and Huang (2005).

Lemma 2

Suppose that assumptions (a)–(f) hold. Then, uniformly for \(t\in \mathcal I \),

$$\begin{aligned} S(t)W&= \Upsilon ^{-1}(t)\Phi (t)(1+O_p(c_n)),\end{aligned}$$
(5.5)
$$\begin{aligned} S(t)Z&= \Upsilon ^{-1}(t)\Psi (t)(1+O_p(c_n)),\end{aligned}$$
(5.6)
$$\begin{aligned} S(t)X_\alpha&= \alpha (t)(1+O_p(c_n)). \end{aligned}$$
(5.7)

Proof

Noting that \(S(t)=[I_p\ \ 0] \left( B_t^TM_tB_t\right) ^{-1}B_t^TM_t\), we obtain (5.5)–(5.7) directly from Lemma 1.
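The operator \(S(t)\) has the structure of a local linear smoother. The following sketch builds the corresponding weight row in the simplest special case \(p=1\) with \(X\equiv 1\) (an assumption made purely for illustration; the function and variable names are ours) and checks the defining property that a local linear fit reproduces linear functions exactly:

```python
import numpy as np

def local_linear_weights(t, ti, h):
    """Weight row s(t) = [1, 0] (B_t' M_t B_t)^{-1} B_t' M_t of the local
    linear smoother at t, in the illustrative case p = 1 with X = 1."""
    z = (ti - t) / h
    K = np.where(np.abs(z) <= 1.0, 0.75 * (1.0 - z**2), 0.0)  # kernel weights M_t
    B = np.column_stack([np.ones_like(ti), z])                # local design B_t
    BtM = B.T * K                                             # B_t' M_t
    return np.linalg.solve(BtM @ B, BtM)[0]                   # first row of S(t)

ti = np.linspace(0.0, 1.0, 101)
s = local_linear_weights(0.5, ti, 0.2)
# a local linear smoother reproduces linear functions exactly:
fitted = s @ (2.0 + 3.0 * ti)   # equals 2.0 + 3.0 * 0.5 = 3.5
```

The exact-reproduction property (weights summing to one, zero first kernel moment at the fit point) is what drives the \(1+O_p(c_n)\) factors in (5.5)–(5.7).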

Lemma 3

Suppose that assumptions (a)–(f) hold. Then, uniformly for \(t\in \mathcal I \),

$$\begin{aligned} S(t)\epsilon&= O_p(c_n),\end{aligned}$$
(5.8)
$$\begin{aligned} S(t)\omega&= O_p(c_n), \end{aligned}$$
(5.9)

where \(\omega \) is \(\mu \) or \(\nu \).

Proof

By arguments similar to the proofs of Lemmas 1 and 2, one can obtain (5.8) and (5.9).

Lemma 4

Let \(e_i\), \(i=1,\ldots ,n\), be mutually independent random variables with \(E(e_i)=0\) and \(E(e_i^2)<c<\infty \). Then

$$\begin{aligned} \max _{1\le k\le n}\left| \sum \limits _{i=1}^k e_i\right| =O_p(\sqrt{n}\log n). \end{aligned}$$

Further, let \((j_1,\ldots ,j_n)\) be a permutation of \((1,\ldots ,n)\). Then we have

$$\begin{aligned} \max _{1\le k\le n}\left| \sum \limits _{i=1}^k e_{j_i}\right| =O_p(\sqrt{n}\log n). \end{aligned}$$

Proof

The proof of Lemma 4 can be found in Zhao and Xue (2009).
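The rate in Lemma 4 can be illustrated numerically: for independent mean-zero errors with bounded variance, the maximal partial sum stays well below \(\sqrt{n}\log n\). A minimal sketch, assuming standard normal errors (which satisfy the moment condition):

```python
import numpy as np

# Illustrate Lemma 4: max_{1<=k<=n} |sum_{i<=k} e_i| = O_p(sqrt(n) log n)
# for independent e_i with E e_i = 0 and E e_i^2 bounded.
rng = np.random.default_rng(2013)
ratios = []
for n in (10**3, 10**4, 10**5):
    e = rng.standard_normal(n)
    max_partial = np.abs(np.cumsum(e)).max()   # max_k |sum_{i<=k} e_i|
    ratios.append(max_partial / (np.sqrt(n) * np.log(n)))
# the normalized maxima remain bounded as n grows
```

The \(\sqrt{n}\log n\) normalization is in fact generous (the law of the iterated logarithm gives a smaller envelope), which is exactly why it absorbs the permuted sums in the proof of Lemma 5.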

Let \(e_{ij}=Z_{ij}^T(\gamma _i-\gamma _\mu )+\epsilon _{ij}-(\mu ^T_{ij},\nu ^T_{ij})\beta \),

$$\begin{aligned} J_{i1}&= \sum \limits _{j=1}^{n_i}\Bigg \{\left( \begin{array}{c} W_{ij}+\widetilde{\mu }_{ij}-\Lambda ^T(t_{ij})X_{ij} \\ Z_{ij}+\widetilde{\nu }_{ij}-\Pi ^T(t_{ij})X_{ij} \\ \end{array} \right) \left[ Z_{ij}^T(\gamma _i-\gamma _\mu )+\epsilon _{ij}-(\widetilde{\mu }_{ij}^T,\widetilde{\nu }_{ij}^T)\beta \right] \\&\quad +\textit{diag}(\widetilde{\Sigma }_{ij\mu },\widetilde{\Sigma }_{ij\nu })\beta \Bigg \},\\ J_{i2}&= \sum \limits _{j=1}^{n_i}\left\{ \left( \begin{array}{c} \Lambda ^T(t_{ij})X_{ij}-(S(t_{ij})W)^TX_{ij} \\ \Pi ^T(t_{ij})X_{ij}-(S(t_{ij})Z)^TX_{ij} \\ \end{array} \right) [e_{ij}+X_{ij}^TS(t_{ij})(\mu ,\nu )\beta ] \right\} ,\\ J_{i3}&= \sum \limits _{j=1}^{n_i}\left\{ \left( \begin{array}{c} W_{ij}-\Lambda ^T(t_{ij})X_{ij} \\ Z_{ij}-\Pi ^T(t_{ij})X_{ij}\\ \end{array} \right) \left[ X_{ij}^T\alpha (t_{ij})-X_{ij}^TS(t_{ij})(Y-(W,Z)\beta )\right] \right\} ,\\ J_{i4}&= \sum \limits _{j=1}^{n_i}\left\{ \left( \begin{array}{c} \mu _{ij} \\ \nu _{ij}\\ \end{array} \right) \left[ X_{ij}^T\alpha (t_{ij})-X_{ij}^TS(t_{ij})(Y-(W,Z)\beta )\right] \right\} ,\\ J_{i5}&= \sum \limits _{j=1}^{n_i}\left\{ \left( \begin{array}{c} \Lambda ^T(t_{ij})X_{ij}-(S(t_{ij})W^*)^TX_{ij} \\ \Pi ^T(t_{ij})X_{ij}-(S(t_{ij})Z^*)^TX_{ij}\\ \end{array} \right) \left[ X_{ij}^T\alpha (t_{ij})-X_{ij}^TS(t_{ij})(Y-(W,Z)\beta )\right] \right\} , \end{aligned}$$

and \(J_\kappa =\frac{1}{\sqrt{n}}\sum \limits _{i=1}^n J_{i\kappa }, \kappa =1,\ldots ,5\).

Lemma 5

Suppose that assumptions (a)–(f) hold. Then we have

$$\begin{aligned} J_\kappa =o_p(1), \kappa =2,\ldots ,5. \end{aligned}$$
(5.10)

Proof

First, we prove \(J_2=o_p(1)\). Let \(A_{ij}=(\Lambda (t_{ij})-S(t_{ij})W,\Pi (t_{ij})-S(t_{ij})Z)^T\) be a \((q+r)\times p\) matrix, and let \(b_{ij}=X_{ij}e_{ij}\) be a \(p\times 1\) vector. Then

$$\begin{aligned} J_2=\frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\sum \limits _{j=1}^{n_i}A_{ij}b_{ij} +\frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\sum \limits _{j=1}^{n_i}A_{ij}X_{ij}X_{ij}^TS(t_{ij})(\mu ,\nu )\beta =J_{21}+J_{22}. \nonumber \\ \end{aligned}$$
(5.11)

Further, let \(a_{ij,rs}\) be the \((r,s)\) component of \(A_{ij}\), \(b_{ij,s}\) the \(s\)th component of \(b_{ij}\), and \(a_{ij_0,rs_0}b_{ij_0,s_0}=\max _{j,s}\{a_{ij,rs}b_{ij,s}\}\). Let \((a_{l_i,r},i=1,\ldots ,n)\) be a permutation of \((a_{ij_0,rs_0},i=1,\ldots ,n)\) such that \(a_{l_1,r}\ge \cdots \ge a_{l_n,r}\), and denote the corresponding permutation of \((b_{ij_0,s_0},i=1,\ldots ,n)\) by \((b_{l_i},i=1,\ldots ,n)\). Let \(J_{21r}\) denote the \(r\)th component of \(J_{21}\). By Abel’s inequality and Lemmas 2 and 4, we have

$$\begin{aligned} |J_{21r}|&= \frac{1}{\sqrt{n}}\left| \sum \limits _{i=1}^n \sum \limits _{j=1}^{n_i}\sum \limits _{s=1}^pa_{ij,rs}b_{ij,s}\right| \le \frac{pn_i}{\sqrt{n}}\left| \sum \limits _{i=1}^n a_{ij_0,rs_0}b_{ij_0,s_0}\right| \\&= \frac{pn_i}{\sqrt{n}}\left| \sum \limits _{i=1}^n a_{l_i,r}b_{l_i}\right| \le \frac{C}{\sqrt{n}}\sup _{1\le i\le n}|a_{l_i,r}|\max _{1\le k\le n}\left| \sum \limits _{i=1}^k b_{l_i}\right| \\&= \frac{C}{\sqrt{n}}O_p(c_n)O_p(\sqrt{n}\log n)=o_p(1). \end{aligned}$$

For \(J_{22}\), by (5.5), (5.6) and (5.9), we have \(\Vert J_{22}\Vert \le O_p(\sqrt{n}c_n^2)=o_p(1)\). Together with (5.11), this yields \(J_2=o_p(1)\).

For \(J_3\) and \(J_4\), we denote \(A_{ij}=\alpha (t_{ij})-S(t_{ij})(Y-(W,Z)\beta )\). From (5.7) and (5.8), we obtain uniformly in \(t_{ij}\in \mathcal I \)

$$\begin{aligned} \Vert A_{ij}\Vert \le \Vert \alpha (t_{ij})-S(t_{ij})X_\alpha \Vert +\Vert S(t_{ij})\epsilon \Vert =O_p(c_n). \end{aligned}$$
(5.12)

Combining this with \(E\left\{ \left( \begin{array}{c} W_{ij}-\Lambda ^T(t_{ij})X_{ij} \\ Z_{ij}-\Pi ^T(t_{ij})X_{ij}\\ \end{array} \right) X_{ij}^T\right\} =0\) and \(E\left\{ \left( \begin{array}{c} \mu _{ij} \\ \nu _{ij}\\ \end{array} \right) X_{ij}^T\right\} =0\), and arguing as in the proof for \(J_{21}\), we obtain \(J_3=o_p(1)\) and \(J_4=o_p(1)\). For \(J_5\), by (5.5), (5.6), (5.9) and (5.12), we have \(\Vert J_5\Vert \le O_p(\sqrt{n}c_n^2)=o_p(1)\). This completes the proof of Lemma 5.

Lemma 6

Suppose that assumptions (a)–(f) hold. Then we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\eta _{i}(\beta ){\stackrel{\mathfrak{D }}{\longrightarrow }}N(0, B), \end{aligned}$$

where \(B\) is defined in Theorem 2.

Proof

From (2.7), we have

$$\begin{aligned} \left( \begin{array}{c} W_{ij}^* \\ Z_{ij}^* \\ \end{array} \right)&= \left( \begin{array}{c} W_{ij}+\widetilde{\mu }_{ij}-\Lambda ^T(t_{ij})X_{ij} \\ Z_{ij}+\widetilde{\nu }_{ij}-\Pi ^T(t_{ij})X_{ij} \\ \end{array} \right) +\left( \begin{array}{c} \Lambda ^T(t_{ij})X_{ij}-(S(t_{ij})W)^TX_{ij}\\ \Pi ^T(t_{ij})X_{ij}-(S(t_{ij})Z)^TX_{ij} \\ \end{array} \right) \\&= \left( \begin{array}{c} W_{ij}-\Lambda ^T(t_{ij})X_{ij} \\ Z_{ij}-\Pi ^T(t_{ij})X_{ij} \\ \end{array} \right) +\left( \begin{array}{c} \mu _{ij} \\ \nu _{ij} \\ \end{array} \right) +\left( \begin{array}{c} \Lambda ^T(t_{ij})X_{ij}-(S(t_{ij})W^*)^TX_{ij} \\ \Pi ^T(t_{ij})X_{ij}-(S(t_{ij})Z^*)^TX_{ij} \\ \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} \widetilde{Y}_{ij}-\left( \widetilde{ W}^{*T}_{ij}, \widetilde{Z}^{*T}_{ij}\right) \beta&= \left[ X_{ij}^T\alpha (t_{ij})-X_{ij}^TS(t_{ij})(Y-(W,Z)\beta )\right] \\&+\left[ e_{ij}+X_{ij}^TS(t_{ij})(\mu ,\nu )\beta \right] . \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n \eta _{i}(\beta )=J_{1}+J_{2}+J_{3}+J_{4}+J_{5}. \end{aligned}$$
(5.13)

By directly calculating its expectation and variance, we have \(E(J_1)=0\) and \(\textit{Var}(J_1)=B+o(1)\). Then, by the central limit theorem, \(J_1{\stackrel{\mathfrak{D }}{\longrightarrow }}N(0, B)\). Further, by Lemma 5 and (5.13), we complete the proof of Lemma 6.

Lemma 7

Suppose that assumptions (a)–(f) hold. Then we have

$$\begin{aligned} \frac{1}{n}\sum \limits _{i=1}^n \eta _{i}(\beta )\eta _{i}^T(\beta ){\stackrel{\mathcal{P }}{\longrightarrow }}B. \end{aligned}$$

We omit the proof of Lemma 7; it follows from Lemmas 5 and 6 by the same arguments as in the proof of Lemma 5.6 in Zhao and Xue (2009).

Lemma 8

Suppose that assumptions (a)–(f) hold. Then we have

$$\begin{aligned} \max _{1\le i\le n} \Vert \eta _{i}(\beta )\Vert =o_p(n^{1/2}). \end{aligned}$$

Proof

Note that

$$\begin{aligned} \max _{1\le i\le n}\Vert \eta _{i}(\beta )\Vert \le \max _{1\le i\le n}\Vert J_{i1}\Vert +\max _{1\le i\le n}\Vert J_{i2}\Vert +\max _{1\le i\le n}\Vert J_{i3}\Vert +\max _{1\le i\le n}\Vert J_{i4}\Vert +\max _{1\le i\le n}\Vert J_{i5}\Vert . \end{aligned}$$

By Lemma 3 of Owen (1990), we have \(\max _{1\le i\le n}\Vert J_{i1}\Vert =o_p(n^{1/2})\). From (5.5), (5.6), (5.9) and Lemma 3 of Owen (1990), we have

$$\begin{aligned}&\max _{1\le i\le n}\Vert J_{i2}\Vert \le O_p(c_n)\\&\quad \times \left( \max _{1\le i\le n}\left\| \int \limits _0^1b_i(t)dN_i(t)\right\| +\max _{1\le i\le n}\left\| \int \limits _0^1X_i(t)X_i^T(t)S(t)(\mu ,\nu )\beta dN_i(t)\right\| \right) \\&=O_p(c_n)\left( o_p(n^{1/2})+o_p(n^{1/2})o_p(n^{1/2})O_p(c_n)\right) =o_p(n^{1/2}). \end{aligned}$$

Similar to the arguments for \(J_{i2}\), we can obtain \(\max _{1\le i\le n}\Vert J_{il}\Vert =o_p(n^{1/2})\) \((l=3,4,5)\). This completes the proof of Lemma 8.

Proof of Theorem 1

From Lemmas 6–8, using the same arguments as were used in the proof of expression (2.14) of Owen (1990), we have

$$\begin{aligned} \Vert \lambda \Vert =O_p(n^{-1/2}). \end{aligned}$$
(5.14)

From (2.13), we have

$$\begin{aligned} 0&= \frac{1}{n}\sum \limits _{i=1}^n\frac{\eta _i(\beta )}{1+\lambda ^T\eta _i(\beta )} =\frac{1}{n}\sum \limits _{i=1}^n\eta _i(\beta )-\frac{1}{n}\sum \limits _{i=1}^n\eta _i(\beta )\eta _i^T(\beta )\lambda \\&+\frac{1}{n}\sum \limits _{i=1}^n\frac{\eta _i(\beta )(\lambda ^T\eta _i(\beta ))^2}{1+\lambda ^T\eta _i(\beta )}. \end{aligned}$$

By using (5.14) and Lemma 8, we obtain

$$\begin{aligned}&\sum \limits _{i=1}^n(\lambda ^T\eta _i(\beta ))^2=\sum \limits _{i=1}^n\lambda ^T\eta _i(\beta )+o_p(1),\end{aligned}$$
(5.15)
$$\begin{aligned}&\lambda =\left[ \sum \limits _{i=1}^n\eta _i(\beta )\eta _i^T(\beta )\right] ^{-1}\sum \limits _{i=1}^n\eta _i(\beta ) +o_p(n^{-1/2}). \end{aligned}$$
(5.16)

Applying a Taylor expansion to (2.12), and using (5.14) and Lemma 8, we obtain

$$\begin{aligned} \ell _n(\beta )=2\sum \limits _{i=1}^n\left[ \lambda ^T\eta _i(\beta )- \frac{1}{2}(\lambda ^T\eta _i(\beta ))^2\right] +o_p(1). \end{aligned}$$
(5.17)

Then, by (5.15)–(5.17), we have

$$\begin{aligned} \ell _n(\beta )=\left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\eta _i(\beta )\right] ^T \left[ \frac{1}{n}\sum \limits _{i=1}^n\eta _i(\beta )\eta _i^T(\beta )\right] ^{-1} \left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\eta _i(\beta )\right] +o_p(1). \end{aligned}$$

Together with Lemmas 6–8, this completes the proof of Theorem 1.
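The computation behind (2.12)–(2.13) — profiling out the Lagrange multiplier \(\lambda\) and evaluating the log-likelihood ratio — can be sketched for generic estimating functions \(\eta_i\). This is a minimal standalone implementation on simulated mean-zero scores; the function names and the Newton scheme are ours, not the paper's:

```python
import numpy as np

def el_log_ratio(eta, n_iter=50):
    """Empirical log-likelihood ratio 2 * sum_i log(1 + lambda' eta_i), with
    lambda solving sum_i eta_i / (1 + lambda' eta_i) = 0 by Newton's method.
    Mirrors the structure of (2.12)-(2.13) for generic scores eta_i."""
    n, d = eta.shape
    lam = np.zeros(d)
    for _ in range(n_iter):
        w = 1.0 + eta @ lam                      # 1 + lambda' eta_i
        grad = (eta / w[:, None]).sum(axis=0)    # estimating equation in lambda
        hess = -(eta / w[:, None]**2).T @ eta    # its Jacobian
        lam -= np.linalg.solve(hess, grad)
    return 2.0 * np.sum(np.log1p(eta @ lam))

rng = np.random.default_rng(0)
eta = rng.standard_normal((200, 2))   # mean-zero scores, so q + r = 2 here
ell = el_log_ratio(eta)               # compare with chi^2_2 quantiles,
                                      # e.g. 5.991 at the 95% level
```

A confidence region for \(\beta\) as in Theorem 1 collects those parameter values whose statistic falls below the \(\chi^2_{q+r}\) quantile.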

Proof of Theorem 2

Following arguments similar to those used in the proof of Theorem 2 in Xue and Zhu (2008), we have

$$\begin{aligned} \hat{\beta }-\beta =\hat{\Gamma }^{-1}n^{-1}\sum \limits _{i=1}^n\eta _i(\beta )+o_p(n^{-1/2}). \end{aligned}$$

By Lemma 2 and the law of large numbers, \(\hat{\Gamma }{\stackrel{\mathcal{P }}{\longrightarrow }}\Gamma \). Together with Lemma 6 and Slutsky's theorem, this proves Theorem 2.


Cite this article

Zhou, Xc., Lin, JG. Empirical likelihood for varying-coefficient semiparametric mixed-effects errors-in-variables models with longitudinal data. Stat Methods Appl 23, 51–69 (2014). https://doi.org/10.1007/s10260-013-0238-3
