Empirical likelihood method for multivariate Cox regression

  • Original Paper
  • Computational Statistics

Abstract

A unified empirical likelihood approach is proposed for three Cox-type marginal models dealing with multiple event times, recurrent event times and clustered event times. The resulting log-empirical likelihood ratio test statistics are shown to possess chi-squared limiting distributions. When making inferences, there is no need to solve estimating equations or to estimate limiting covariance matrices. The proposed method preserves the optimal linear combination property of over-identified empirical likelihood, which can be used to improve estimation efficiency. In addition, an adjusted empirical likelihood approach is applied to reduce the error rates of the proposed empirical likelihood ratio tests; the adjusted tests can outperform the existing Wald tests for small to moderate sample sizes. The proposed approach is illustrated by extensive simulation studies and two real examples.

Fig. 1

References

  • Andersen PK, Gill RD (1982) Cox’s regression model for counting processes: a large sample study. Ann Stat 10:1100–1120

  • Cai JW, Prentice RL (1997) Regression estimation using multivariate failure time data and a common baseline hazard function model. Lifetime Data Anal 3:197–213

  • Chen J, Variyath AM, Abraham B (2008) Adjusted empirical likelihood and its properties. J Comput Graph Stat 17:426–443

  • Cox DR (1972) Regression models and life-tables (with discussion). J R Stat Soc Ser B Stat Methodol 34:187–220

  • DiCiccio TJ, Hall P, Romano J (1991) Empirical likelihood is Bartlett-correctable. Ann Stat 19:1053–1061

  • Fleming TR, Harrington DP (1991) Counting processes and survival analysis. Wiley, New York

  • Lee EW, Wei LJ, Amato DA (1992) Cox-type regression analysis for large numbers of small groups of correlated failure time observations. In: Klein JP, Goel PK (eds) Survival analysis: state of the art. Kluwer, Dordrecht, pp 237–247

  • Li G, Wang QH (2003) Empirical likelihood regression analysis for right censored data. Stat Sinica 13:51–68

  • Liang KY, Self SG, Chang YC (1993) Modelling marginal hazards in multivariate failure time data. J R Stat Soc Ser B Stat Methodol 55:441–453

  • Lin DY, Wei LJ, Yang I, Ying Z (2000) Semiparametric regression for the mean and rate functions of recurrent events. J R Stat Soc Ser B Stat Methodol 62:711–730

  • Lu W, Liang Y (2006) Empirical likelihood inference for linear transformation models. J Multivar Anal 97:1586–1599

  • Owen AB (1988) Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75:237–249

  • Owen AB (1990) Empirical likelihood ratio confidence regions. Ann Stat 18:90–120

  • Owen AB (2001) Empirical likelihood. Chapman and Hall, London

  • Qin G, Jing BY (2001a) Empirical likelihood for censored linear regression. Scand J Stat 28:661–673

  • Qin G, Jing BY (2001b) Empirical likelihood for Cox regression model under random censorship. Comm Stat Simul Comput 30:79–90

  • Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22:300–325

  • Qin J, Lawless J (1995) Estimating equations, empirical likelihood and constraints on parameters. Canad J Stat 23:145–159

  • Qin G, Tsao M (2003) Empirical likelihood inference for median regression models for censored survival data. J Multivar Anal 85:416–430

  • Tsiatis AA (2006) Semiparametric theory and missing data. Springer, New York

  • Wei LJ, Lin DY, Weissfeld L (1989) Regression analysis of multivariate incomplete failure time data by modeling marginal distributions. J Am Stat Assoc 84:1065–1073

  • Yu W (2010) Empirical likelihood method for general additive-multiplicative hazard models. Comm Stat Theory Methods 39:2977–2990

  • Yu W, Sun Y, Zheng M (2009) Empirical likelihood method for censored median regression models. Comm Stat Theory Methods 38:1170–1183

  • Zhao Y (2010) Semiparametric inference for transformation models via empirical likelihood. J Multivar Anal 101:1846–1858

  • Zhou M (2005) Empirical likelihood analysis of the rank estimator for the censored accelerated failure time model. Biometrika 92:492–498

Acknowledgments

The authors thank an anonymous referee whose insightful comments have resulted in a much improved paper. The research of Ming Zheng is supported by the National Natural Science Foundation of China (10971033). The research of Wen Yu is supported by the National Natural Science Foundation of China (11101091) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20110071120023).

Author information

Correspondence to Wen Yu.

Appendix

Here we prove the theorems in Sects. 2, 3 and 4. The following notation and regularity conditions are needed.

Multiple event times: For each \(i=1,2,\ldots ,n\), let \(\tilde{\mathbf T}_i=(\tilde{T}_{1i},\tilde{T}_{2i},\ldots ,\tilde{T}_{Ki}), \Delta _i =(\delta _{1i},\delta _{2i},\ldots ,\delta _{Ki})\) and \({\mathbf Z}_i=({\mathbf Z}_{1i}^T,{\mathbf Z}_{2i}^T,\ldots ,{\mathbf Z}_{Ki}^T)^T\).

  1. C1.

    \(\{\tilde{\mathbf T}_i, \Delta _i, {\mathbf Z}_i^T\}, i=1,2,\ldots ,n\), are independently and identically distributed (i.i.d.) random vectors.

  2. C2.

    There exists a constant \(B_1\) such that \(\sup _{1\le i\le n}\Vert {\mathbf Z}_i\Vert \le B_1\), where \(\Vert \cdot \Vert \) is the Euclidean norm.

  3. C3.

    For each \(k=1,2,\ldots ,K\), there exists a positive constant \(\tau _k<\infty \) such that \(P(C_{ki}\ge \tau _k)=P(C_{ki}=\tau _k)>0\) and \(P(T_{ki}>\tau _k)>0\).

  4. C4.

    For each \(k=1,2,\ldots ,K\), \(A_k\) is positive definite.

Recurrent event times:

  5. C5.

    \(\{N_i(\cdot ), Y_i(\cdot ), {\mathbf Z}_i\}, i=1,2,\ldots ,n\), are i.i.d.

  6. C6.

    There exists a constant \(B_2\) such that \(\sup _{1\le i\le n}\Vert {\mathbf Z}_i\Vert \le B_2\).

  7. C7.

    \(P(C_i\ge \tau )>0, i=1,2,\ldots ,n\).

  8. C8.

    There exists a constant \(B_3\) such that \(\sup _{1\le i\le n}|N_i(\tau )|\le B_3\).

  9. C9.

    \(A=\text{pr-}\lim _{n\rightarrow \infty }\hat{A}\) is positive definite.

Clustered event times: Suppose that each cluster potentially has \(K\) members, of which a random subset of size \(K_i\) is observed for the \(i\)th cluster. Let \(\tilde{\mathbf T}_i=(\tilde{T}_{1i},\ldots ,\tilde{T}_{K_ii})\), \(\Delta _i=(\delta _{1i},\ldots ,\delta _{K_ii})\) and \({\mathbf Z}_i=({\mathbf Z}_{1i}^T,\ldots ,{\mathbf Z}_{K_ii}^T)^T\), \(i=1,2,\ldots ,n\).

  10. C10.

    \(\{\tilde{\mathbf T}_i, \Delta _i, {\mathbf Z}_i^T\}, i=1,2,\ldots ,n\), are i.i.d. random vectors.

  11. C11.

    There exists a constant \(B_4\) such that \(\sup _{1\le i\le n}\Vert {\mathbf Z}_i\Vert \le B_4\).

  12. C12.

    For each \(k=1,2,\ldots ,K\), there exists a positive constant \(\tau _k<\infty \) such that \(P(C_{ki}\ge \tau _k)=P(C_{ki}=\tau _k)>0\) and \(P(T_{ki}>\tau _k)>0\).

  13. C13.

    \(A=E[\sum _{k=1}^K\int _0^\infty \{{\mathbf Z}_{k1}-\overline{\mathbf z}(t)\}^{\otimes 2}Y_{k1}(t)e^{\beta _0^T{\mathbf Z}_{k1}}\lambda (t)dt]\) is positive definite, where \(\overline{\mathbf z}(t)=E[\sum _{k=1}^KY_{k1}(t){\mathbf Z}_{k1}e^{\beta _0^T{\mathbf Z}_{k1}}]/E[\sum _{k=1}^KY_{k1}(t)e^{\beta _0^T{\mathbf Z}_{k1}}]\).

Note that similar conditions can be found in Wei et al. (1989), Lin et al. (2000), among others. Conditions C2, C6 and C11 require bounded covariates, which is quite common in practice. Conditions C3, C7, C8 and C12 are imposed to avoid tedious discussions of tail behavior. Conditions C4, C9 and C13 guarantee the existence of the limiting covariance matrices of the MELEs. These conditions can be relaxed, but they are sufficient for our conclusions.

We first prove Theorems 2.1 and 2.2. The following lemmas are needed for the proof.

Lemma 8.1.

Under conditions C1–C3, \(\max _{1\le i\le n}\Vert {\mathbf g}_i(\varvec{\beta }_0)\Vert =o_p(\sqrt{n})\).

Proof

For each \(k\) and \(i\), define

$$\begin{aligned} g_{ki0}=\int \limits _0^\infty \{{\mathbf Z}_{ki}-\overline{\mathbf z}_k(t)\}dM_{ki}(t). \end{aligned}$$

Let \({\mathbf g}_{i0}=(g_{1i0}^T, g_{2i0}^T,\ldots ,g_{Ki0}^T)^T, i=1,2,\ldots ,n\). It is easy to see that each \(g_{ki0}\) is a square integrable martingale with respect to a proper filtration. Thus, \(E[g_{ki0}^{\otimes 2}]<\infty \) for each \(k\) and \(E[{\mathbf g}_{i0}^{\otimes 2}]<\infty \). By arguments similar to those in the proof of lemma 11.2 in Owen (2001), we have that

$$\begin{aligned} \max _{1\le i\le n}\Vert {\mathbf g}_{i0}\Vert =o_p(\sqrt{n}). \end{aligned}$$
(8.1)

Moreover, by some simple algebra, we have that for each \(k\) and \(i\),

$$\begin{aligned} g_{ki}(\beta _{k0})=g_{ki0}+r_{ki1}+r_{ki2}, \end{aligned}$$
(8.2)

where

$$\begin{aligned} r_{ki1}&= \int \limits _0^\infty \left\{ {\mathbf Z}_{ki}-\overline{\mathbf z}_k(t)\right\} d\left[\hat{\Lambda }_k(\beta _{k0};t)-\Lambda _k(t)\right],\\ r_{ki2}&= \int \limits _0^\infty \left\{ \overline{\mathbf Z}_k(\beta _{k0};t)-\overline{\mathbf z}_k(t)\right\} d\hat{M}_{ki}(\beta _{k0};t), \end{aligned}$$

and \(\Lambda _k(t)=\int _0^t\lambda _k(u)du\). By the uniform convergence of \(\hat{\Lambda }_k(\beta _{k0};t)\) to \(\Lambda _k(t)\) and \(\overline{\mathbf Z}_k(\beta _{k0};t)\) to \(\overline{\mathbf z}_k(t)\) on \([0,\infty )\), we have \(\max _{1\le i\le n}\Vert r_{ki1}\Vert =o_p(1)\) and \(\max _{1\le i\le n}\Vert r_{ki2}\Vert =o_p(1)\) for each \(k\). Since \(K\) is a finite constant, the conclusion follows from (8.1) and (8.2). \(\square \)
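The \(o_p(\sqrt{n})\) bound for the maximum in Lemma 8.1 can be illustrated with a short numerical sketch. This is illustrative only: the standard normal draws and the `max_norm_ratio` helper are stand-ins for the \({\mathbf g}_{i0}\), chosen simply because they are i.i.d. and square integrable (cf. lemma 11.2 in Owen 2001).

```python
import numpy as np

# Illustrative check (not the paper's g_{i0}): for i.i.d. square-integrable
# vectors X_i, max_{1<=i<=n} ||X_i|| grows slower than sqrt(n), which is
# the o_p(sqrt(n)) bound used in Lemma 8.1.
rng = np.random.default_rng(0)

def max_norm_ratio(n, dim=3):
    """Return max_{1<=i<=n} ||X_i|| / sqrt(n) for standard normal X_i."""
    x = rng.standard_normal((n, dim))
    return np.linalg.norm(x, axis=1).max() / np.sqrt(n)

ratios = [max_norm_ratio(n) for n in (10**3, 10**4, 10**5, 10**6)]
# The ratio shrinks toward zero as n grows.
assert ratios[-1] < ratios[0]
```

The decay is slow (for Gaussian tails the maximum norm grows like \(\sqrt{2\log n}\)), but the ratio to \(\sqrt{n}\) still vanishes, which is all the lemma needs.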

Lemma 8.2.

Under conditions C1–C3,

$$\begin{aligned} \left\Vert \frac{1}{n}\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)^{\otimes 2}-\frac{1}{n}\sum _{i=1}^n{\mathbf g}_{i0}^{\otimes 2}\right\Vert \end{aligned}$$

converges to 0 in probability as \(n\rightarrow \infty \).

Proof

Wei et al. (1989) proved that \(\Vert \hat{\mathbf \Sigma }-n^{-1}\sum _{i=1}^n{\mathbf g}_{i0}^{\otimes 2}\Vert \) converges to \(0\) in probability as \(n\rightarrow \infty \). Their proof can be mimicked here without much difficulty, so we omit the details. \(\square \)

Lemma 8.3.

Under conditions C1–C3,

$$\begin{aligned} \sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)=\sum _{i=1}^n{\mathbf g}_{i0}+o_p(\sqrt{n}). \end{aligned}$$

Proof

By the definition of \(\hat{M}_{ki}(\beta _k;t)\), it is easy to see that

$$\begin{aligned} \sum _{i=1}^ng_{ki}(\beta _{k})=\sum _{i=1}^n\int \limits _0^\infty \left\{ {\mathbf Z}_{ki}-\overline{\mathbf Z}_k(\beta _{k};t)\right\} dN_{ki}(t), \quad k=1,2,\ldots ,K. \end{aligned}$$
(8.3)

Thus, \(\hat{\beta }_k\) is the zero point of \(n^{-1}\sum _{i=1}^ng_{ki}(\beta _{k})\). By the uniform convergence of \(\overline{\mathbf Z}_k(\beta _{k0};t)\) on \([0,\infty )\), Wei et al. (1989) showed that for each \(k\),

$$\begin{aligned} \sum _{i=1}^n\int \limits _0^\infty \left\{ {\mathbf Z}_{ki}-\overline{\mathbf Z}_k(\beta _{k0};t)\right\} dN_{ki}(t)&= \sum _{i=1}^n\int \limits _0^\infty \left\{ {\mathbf Z}_{ki}-\overline{\mathbf Z}_k(\beta _{k0};t)\right\} dM_{ki}(t)\nonumber \\&= \sum _{i=1}^ng_{ki0}+o_p(\sqrt{n}). \end{aligned}$$
(8.4)

The conclusion follows from (8.3) and (8.4). \(\square \)

Proof of Theorem 2.1

Since \(E[{\mathbf g}_{i0}]=0\), by the proof of theorem 3.2 in Owen (2001), we have that with probability tending to 1, 0 is inside the convex hull of \({\mathbf g}_i({\varvec{\beta }}_0), i=1,2,\ldots ,n\). Thus, from (2.5), we have

$$\begin{aligned} -2\log R(\varvec{\beta }_0)=2\sum _{i=1}^n\log \left[1+\sum _{k=1}^K\eta _k(\varvec{\beta }_0)^Tg_{ki}(\beta _{k0})\right] =2\sum _{i=1}^n\log \left[1+\varvec{\eta }(\varvec{\beta }_0)^T{\mathbf g}_{i}(\varvec{\beta }_{0})\right], \end{aligned}$$
(8.5)

where \(\eta _k(\varvec{\beta })\)s solve (2.6). By Lemma 8.1, Lemma 8.2 and arguments similar to those in the proof of theorem 3.2 in Owen (2001), we can show that

$$\begin{aligned} \varvec{\eta }(\varvec{\beta }_0)=\left(\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)^{\otimes 2}\right)^{-1}\left(\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)\right)+o_p(n^{-1/2}). \end{aligned}$$
(8.6)

By (8.5), (8.6) and Lemma 8.3, we can further show that

$$\begin{aligned} -2\log R(\varvec{\beta }_0)&=\left(\frac{1}{\sqrt{n}}\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)\right)^T\left(\frac{1}{n}\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)^{\otimes 2}\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)\right)+o_p(1)\\ &=\left(\frac{1}{\sqrt{n}}\sum _{i=1}^n{\mathbf g}_{i0}\right)^T\left(\frac{1}{n}\sum _{i=1}^n{\mathbf g}_{i0}^{\otimes 2}\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum _{i=1}^n{\mathbf g}_{i0}\right)+o_p(1). \end{aligned}$$

By the multivariate central limit theorem and Lemma 8.2, \((n^{-1}\sum _{i=1}^n{\mathbf g}_{i0}^{\otimes 2})^{-1/2}(n^{-1/2}\sum _{i=1}^n{\mathbf g}_{i0})\) converges to \(N(0,{\mathbf I}_{\tilde{d}\times \tilde{d}})\) in distribution as \(n\rightarrow \infty \). The conclusion of the theorem follows immediately. \(\square \)
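The Wilks-type limit behind Theorem 2.1 can be checked by Monte Carlo. The sketch below is illustrative: the correlated normal draws `g` are stand-ins for the \({\mathbf g}_{i0}\), and the statistic is the self-normalized quadratic form from the last display, whose empirical rejection rate against a \(\chi ^2_2\) critical value should sit near the nominal level (note that no covariance matrix is supplied; it is self-normalized by \(n^{-1}\sum _i{\mathbf g}_i^{\otimes 2}\)).

```python
import numpy as np

# Monte Carlo sketch of the chi-squared limit in Theorem 2.1: the
# self-normalized quadratic form of i.i.d. mean-zero vectors (stand-ins
# for the g_{i0}, not the paper's estimating functions) is asymptotically
# chi-squared with d degrees of freedom.
rng = np.random.default_rng(1)
n, d, reps = 500, 2, 2000
crit = 5.991  # chi^2_2 0.95 quantile (= -2 log 0.05)

rejections = 0
for _ in range(reps):
    # correlated mean-zero vectors with an unknown covariance
    g = rng.standard_normal((n, d)) @ np.array([[1.0, 0.5], [0.0, 1.0]])
    gbar = g.mean(axis=0)                       # n^{-1} sum g_i
    S = g.T @ g / n                             # n^{-1} sum g_i g_i^T
    stat = n * gbar @ np.linalg.solve(S, gbar)  # quadratic form
    rejections += stat > crit
rate = rejections / reps
# Empirical size should be close to the nominal 5% level.
assert abs(rate - 0.05) < 0.03
```

This mirrors the point made in the abstract: the test is calibrated without estimating a limiting covariance matrix, since the empirical second-moment matrix self-normalizes the statistic.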

Proof of Theorem 2.2

By Lagrange multipliers, maximizing \(R(\varvec{\beta })\) subject to \(D_2^T\varvec{\beta }=0\) is equivalent to solving the following equations:

$$\begin{aligned} \left\{ \begin{array}{l@{\quad \quad }l}\displaystyle -\frac{1}{n}\sum _{i=1}^n\frac{\left[\partial {\mathbf g}_i(\varvec{\beta })/\partial \varvec{\beta }^T\right]^T}{1+\varvec{\eta }(\varvec{\beta })^T{\mathbf g}_i(\varvec{\beta })}\varvec{\eta }(\varvec{\beta })+D_2\varvec{\lambda }=0, \\ [4mm] D_2^T\varvec{\beta }=0,\end{array}\right. \end{aligned}$$
(8.7)

where \(\varvec{\lambda }\in \mathbb R ^{s-s_1}\) is the Lagrange multiplier. Under conditions C1–C3, by arguments similar to those in the proof of Lemma 1 in Qin and Lawless (1995), we can show that the equations (8.7) have a solution, denoted by \((\hat{\varvec{\beta }}_D, \hat{\varvec{\lambda }}_D)\), with probability tending to 1. Moreover, \(\Vert \hat{\varvec{\beta }}_D-\varvec{\beta }_0\Vert \le n^{-1/3}\) and \(\Vert \hat{\varvec{\lambda }}_D\Vert \le n^{-1/3}\). By Taylor series expansion and arguments similar to those in the proof of Theorem 1 in Qin and Lawless (1995), we have that

$$\begin{aligned} \sqrt{n}\left(\begin{array}{c} \displaystyle \hat{\varvec{\beta }}_D-{\varvec{\beta }}_0 \\ \displaystyle \hat{\varvec{\lambda }}_D\end{array}\right)=\left(\begin{array}{cc} {\mathbf V}&D_2 \\ D_2^T&0\end{array}\right)^{-1}\left(\begin{array}{c} -{\mathbf A}^T{\mathbf \Sigma }^{-1}n^{-1/2}\sum _{i=1}^n{\mathbf g}_i({\varvec{\beta }}_0) \\ 0 \end{array}\right)+o_p(1), \end{aligned}$$

where \({\mathbf A}=\text{diag}\{A_1,A_2,\ldots ,A_K\}\), \({\mathbf \Sigma }=\lim _{n\rightarrow \infty }n^{-1}\sum _{i=1}^n{\mathbf g}_{i0}^{\otimes 2}\) and \({\mathbf V}={\mathbf A}^{-1}{\mathbf \Sigma }{\mathbf A}^{-1}\). By Lemma 8.3, \(n^{-1/2}\sum _{i=1}^n{\mathbf g}_i({\varvec{\beta }}_0)\) converges to \(N(0,{\mathbf \Sigma })\) in distribution. Thus, \(\sqrt{n}(\hat{\varvec{\beta }}_D-\varvec{\beta }_0)\) converges in distribution to a normal with mean \(0\) and covariance matrix \({\mathbf V}-{\mathbf V}D_2(D_2^T{\mathbf V}D_2)^{-1}D_2^T{\mathbf V}\). The first conclusion of the theorem follows directly.

For the second conclusion, let \(\xi =D^T\varvec{\beta }, \xi _1=D_1^T\varvec{\beta }\) and \(\xi _2=D_2^T\varvec{\beta }\). Define

$$\begin{aligned} \tilde{R}(\xi )=\prod _{i=1}^n\left\{ \frac{1}{1+\tilde{\varvec{\eta }}(\xi )^T\tilde{\mathbf g}_i(\xi )}\right\} , \end{aligned}$$

where \(\tilde{\mathbf g}_i(\xi )={\mathbf g}_i(\varvec{\beta })\) and \(\tilde{\varvec{\eta }}(\xi )\) solves

$$\begin{aligned} \sum _{i=1}^n\frac{\tilde{\mathbf g}_i(\xi )}{1+\tilde{\varvec{\eta }}^T\tilde{\mathbf g}_i(\xi )}=0. \end{aligned}$$

Then it is easy to see that \(\hat{\xi }=\text{argmax}_{\xi _2=0}\tilde{R}(\xi )\), and \(\hat{\xi }\) is the MELE for the over-identified empirical likelihood ratio \(\tilde{R}(\xi )\). For any \(\tilde{d}\times \tilde{d}^*\) matrix \(\tilde{D}_1\) that satisfies the condition in Theorem 2.2, we can conclude that \(\hat{\xi }^*\) converges in probability to \(\xi _0\), where \(\xi _0=D^T\varvec{\beta }_0\). On the other hand, \(\hat{\xi }^*\) converges in probability to \(\tilde{D}_1^T\varvec{\beta }_0\). Thus, \(\xi _0=\tilde{D}_1^T\varvec{\beta }_0\) and

$$\begin{aligned} \sqrt{n}\left(\hat{\xi }^*-\xi _{0}\right) =\sqrt{n}\tilde{D}_1^T\left(\hat{\varvec{\beta }}-{\varvec{\beta }}_{0}\right) =\frac{1}{\sqrt{n}}\sum _{i=1}^n\tilde{D}_1^T{\mathbf A}^{-1}\tilde{\mathbf g}_{i}(\xi _0)+o_p(1). \end{aligned}$$

By the result of Qin and Lawless (1994), the MELE attains the minimum limiting covariance matrix achievable by the optimal linear combination of the original estimating functions. As a result, we have \(V_{\xi ^*}\ge V_{\xi }\). \(\square \)
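Two algebraic facts used above are easy to verify numerically: the constrained limiting covariance \({\mathbf V}-{\mathbf V}D_2(D_2^T{\mathbf V}D_2)^{-1}D_2^T{\mathbf V}\) is positive semidefinite, and imposing \(s-s_1\) constraints reduces its rank accordingly. In the sketch below, `V` and `D2` are random stand-ins, not quantities from the paper's models.

```python
import numpy as np

# Numeric sketch of the constrained covariance in Theorem 2.2: for a
# positive definite V and full-column-rank D2 with s - s1 columns, the
# matrix W = V - V D2 (D2^T V D2)^{-1} D2^T V is positive semidefinite
# with rank s1.  V and D2 are random stand-ins for illustration.
rng = np.random.default_rng(2)
s, s1 = 5, 3
m = rng.standard_normal((s, s))
V = m @ m.T + s * np.eye(s)            # positive definite
D2 = rng.standard_normal((s, s - s1))  # constraint matrix
W = V - V @ D2 @ np.linalg.solve(D2.T @ V @ D2, D2.T @ V)
eig = np.linalg.eigvalsh(W)
assert eig.min() > -1e-8               # positive semidefinite
assert np.sum(eig > 1e-6) == s1        # rank drops by s - s1
```

Writing \(W=V^{1/2}(I-P)V^{1/2}\) with \(P\) the projection onto the column space of \(V^{1/2}D_2\) makes both facts transparent, which is what the check confirms.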

Proof of the asymptotic property of l in Section 2

Define the matrices

$$\begin{aligned} \left(\begin{array}{cc} P&Q^T \\ Q&R \end{array}\right)=\left(\begin{array}{cc} {\mathbf V}&D \\ D^T&0 \end{array}\right)^{-1} \ \text{ and} \ \left(\begin{array}{cc} P_1&Q_1^T \\ Q_1&R_1 \end{array}\right)=\left(\begin{array}{cc} {\mathbf V}&D_1 \\ D_1^T&0 \end{array}\right)^{-1}. \end{aligned}$$

By using arguments similar to those in the proof of Theorem 2 in Qin and Lawless (1995), we have that when \(D_2^T\varvec{\beta }_0=0\),

$$\begin{aligned} l&= \left(\frac{1}{\sqrt{n}}\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)\right)^T{\mathbf \Sigma }^{-1}{\mathbf A}\left(Q_1^TR_1^{-1}Q_1-Q^TR^{-1}Q\right)\\&\times {\mathbf A}^T{\mathbf \Sigma }^{-1}\left(\frac{1}{\sqrt{n}}\sum _{i=1}^n{\mathbf g}_i(\varvec{\beta }_0)\right)+o_p(1). \end{aligned}$$

By Lemma 8.3, \(n^{-1/2}\sum _{i=1}^n{\mathbf g}_i({\varvec{\beta }}_0)\) converges to \(N(0,{\mathbf \Sigma })\) in distribution. Moreover, the matrix \({\mathbf \Sigma }^{-1/2}{\mathbf A}\left(Q_1^TR_1^{-1}Q_1-Q^TR^{-1}Q\right){\mathbf A}^T{\mathbf \Sigma }^{-1/2}\) is symmetric and idempotent with trace equal to \(s-s_1\). Hence \(l\) converges to \(\chi ^2_{s-s_1}\) in distribution. \(\square \)
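The final step rests on a standard fact: if \(P\) is symmetric and idempotent with trace \(r\), then \(z^TPz\) with \(z\sim N(0,I)\) is \(\chi ^2_r\) (so in particular \(E[z^TPz]=r\)). The sketch below checks this for a generic projection matrix, not the paper's specific matrix.

```python
import numpy as np

# Sketch of the fact used in the last step: a symmetric idempotent P with
# trace r turns z^T P z, z ~ N(0, I), into a chi-squared with r degrees
# of freedom.  P here is a generic projection, built for illustration.
rng = np.random.default_rng(3)
d, r = 6, 2
H = rng.standard_normal((d, r))
P = H @ np.linalg.solve(H.T @ H, H.T)   # projection onto col(H)
assert np.allclose(P @ P, P)            # idempotent
assert np.isclose(np.trace(P), r)       # trace = rank = degrees of freedom
z = rng.standard_normal((20000, d))
q = np.einsum('ij,jk,ik->i', z, P, z)   # z^T P z for each draw
assert abs(q.mean() - r) < 0.1          # matches E[chi^2_r] = r
```

In the proof, \(P\) is \({\mathbf \Sigma }^{-1/2}{\mathbf A}(Q_1^TR_1^{-1}Q_1-Q^TR^{-1}Q){\mathbf A}^T{\mathbf \Sigma }^{-1/2}\) and its trace \(s-s_1\) gives the degrees of freedom of \(l\).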

Proof of Theorems 3.1 and 4.1

The proofs of these two theorems are in the same spirit as that of Theorem 2.1; we omit the details. \(\square \)

Cite this article

Zheng, M., Yu, W. Empirical likelihood method for multivariate Cox regression. Comput Stat 28, 1241–1267 (2013). https://doi.org/10.1007/s00180-012-0348-7
