Testing in linear composite quantile regression models


Abstract

Composite quantile regression (CQR) can be more efficient than, and sometimes arbitrarily more efficient than, least squares for non-normal random errors, and is almost as efficient for normal random errors. Based on CQR, we propose a method for testing hypotheses about the parameters of linear regression models. The critical values of the test statistic can be obtained by the random weighting method, without estimating the nuisance parameters. A distinguishing feature of the proposed method is that the approximation is valid even when the null hypothesis is not true, and power evaluation is possible under local alternatives. Extensive simulations show that the proposed method works well in practical settings. The method is also applied to a data set from a walking behavior survey.


References

  • Barbe P, Bertail P (1995) The weighted bootstrap. Lecture Notes in Statistics 98. Springer, New York

  • Chen K, Ying Z, Zhang H, Zhao L (2008) Analysis of least absolute deviation. Biometrika 95:107–122


  • Guo J, Tian M, Zhu K (2012) New efficient and robust estimation in varying-coefficient models with heteroscedasticity. Stat Sin 22:1075–1101


  • Jiang R, Zhou ZG, Qian WM, Chen Y (2013) Two step composite quantile regression for single-index models. Comput Stat Data Anal 64:180–191


  • Jiang R, Zhou ZG, Qian WM, Shao WQ (2012a) Single-index composite quantile regression. J Korean Stat Soc 3:323–332


  • Jiang R, Qian WM, Zhou ZG (2012b) Variable selection and coefficient estimation via composite quantile regression with randomly censored data. Stat Probab Lett 2:308–317


  • Jiang R, Yang XH, Qian WM (2012c) Random weighting m-estimation for linear errors-in-variables models. J Korean Stat Soc 41:505–514


  • Kai B, Li R, Zou H (2011) New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann Stat 39:305–332


  • Kai B, Li R, Zou H (2010) Local composite quantile regression smoothing: an efficient and safe alternative to local polynomial regression. J R Stat Soc Ser B 72:49–69


  • Knight K (1998) Limiting distributions for \(l_{1}\) regression estimators under general conditions. Ann Stat 26:755–770


  • Koenker R (2005) Quantile regression. Cambridge University Press, Cambridge


  • Pollard D (1991) Asymptotics for least absolute deviation regression estimators. Econom Theory 7:186–199


  • Praestgaard J, Wellner JA (1993) Exchangeably weighted bootstraps of the general empirical process. Ann Probab 21:2053–2086


  • Rao CR, Zhao LC (1992) Approximation to the distribution of M-estimates in linear models by randomly weighted bootstrap. Sankhyā A 54:323–331

  • Rubin DB (1981) The Bayesian bootstrap. Ann Stat 9:130–134


  • Tang L, Zhou Z, Wu C (2012a) Weighted composite quantile estimation and variable selection method for censored regression model. Stat Probab Lett 3:653–663


  • Tang L, Zhou Z, Wu C (2012b) Efficient estimation and variable selection for infinite variance autoregressive models. J Appl Math Comput 40:399–413


  • Wang Z, Wu Y, Zhao LC (2009) Approximation by randomly weighting method in censored regression model. Sci China Ser A 52:561–576


  • Wu XY, Yang YN, Zhao LC (2007) Approximation by random weighting method for m-test in linear models. Sci China Ser A 50:87–99


  • Zheng Z (1987) Random weighting method. Acta Math Appl Sin 10:247–253 (In Chinese)


  • Zou H, Yuan M (2008) Composite quantile regression and the oracle model selection theory. Ann Stat 36:1108–1126



Acknowledgments

The authors would like to thank Dr. Yong Chen for sharing the walking behavior survey data and thank the Editor, Associate Editor and Referees for their helpful suggestions that improved the paper.

Author information

Correspondence to Jing-Ru Li.

Appendix

To prove the main results of this paper, the following technical conditions are imposed.

A1.

\(d^2_n=\max _{1\le i \le n}\{X_i^TS_n^{-1}X_i\}\rightarrow 0\) as \(n\rightarrow \infty \).

A2. :

\(S=\lim _{n\rightarrow \infty }S_n/n\) is a \(p\times p\) positive definite matrix. For each p-vector \(u\),

$$\begin{aligned}&\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^{n}\int \limits _0^{u_0+X_i^Tu}\sqrt{n}[F(a+t/\sqrt{n})-F(a)]dt =\frac{1}{2}f(a)(u_0,u^T)\left[ \begin{array}{cc} 1&{} 0\\ 0 &{} S\\ \end{array}\right] \\&\quad (u_0,u^T)^T. \end{aligned}$$
A3.

The random weights \(\omega _{1},\ldots ,\omega _{n}\) are i.i.d. with \(P(\omega _{1}\ge 0)=1\) and \(E(\omega _{1})=Var(\omega _{1})=1\), and the sequence \(\{\omega _{i}\}\) is independent of \(\{(Y_{i},X_{i})\}\).
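In practice, A3 is easy to arrange: exponential or Poisson weights with unit rate satisfy it exactly. A minimal numerical check (ours, purely illustrative, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two standard choices satisfying A3: Exp(1) and Poisson(1) both have
# P(w >= 0) = 1 and E(w) = Var(w) = 1.
for name, w in [("Exp(1)", rng.exponential(1.0, n)),
                ("Poisson(1)", rng.poisson(1.0, n).astype(float))]:
    print(f"{name}: mean={w.mean():.3f}, var={w.var(ddof=1):.3f}, "
          f"nonnegative={bool((w >= 0).all())}")
```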

Remark 3

Conditions A1 and A3 are standard in the random weighting literature; see Rao and Zhao (1992). Condition A2 is commonly assumed in quantile regression; see Koenker (2005).
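Heuristically, A2 is a first-order Taylor expansion of \(F\). The following sketch is ours, assuming \(f\) is continuous at \(a\), \(S_n=\sum _{i=1}^{n}X_iX_i^T\) as in the main text, and centered covariates with \(n^{-1}\sum _{i=1}^{n}X_i\rightarrow 0\):

$$\begin{aligned} \int \limits _0^{u_0+X_i^Tu}\sqrt{n}\left[ F(a+t/\sqrt{n})-F(a)\right] dt\approx \int \limits _0^{u_0+X_i^Tu}f(a)\,t\,dt=\frac{f(a)}{2}(u_0+X_i^Tu)^2, \end{aligned}$$

and averaging over \(i\) gives \(\frac{f(a)}{2}\left( u_0^2+u^TSu\right) \), which is exactly the right-hand side of A2.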

Write

$$\begin{aligned} X_{ni}=S_n^{-1/2}X_i,~~\beta (n)=S_n^{1/2}(\beta -b),~~Y_{ni}=Y_i-X_i^Tb. \end{aligned}$$

The model (2.1) can be written as

$$\begin{aligned} Y_{ni}=X_{ni}^T\beta (n)+\varepsilon _i,~~i=1,\ldots ,n. \end{aligned}$$
(5.1)

Denote

$$\begin{aligned} L_{n}(b_1,\ldots ,b_K, \beta (n))=\sum _{k=1}^{K}\sum _{i=1}^{n}\left[ \rho _{\tau _{k}} (Y_{ni}-b_k-X_{ni}^{T}\beta (n))-\rho _{\tau _{k}} (Y_{ni}-b_{\tau _k}-X_{ni}^{T}\beta _0(n))\right] , \end{aligned}$$
$$\begin{aligned} L_{n}^*(b_1,\ldots ,b_K, \beta (n))=\sum _{k=1}^{K}\sum _{i=1}^{n}\omega _i\left[ \rho _{\tau _{k}} (Y_{ni}-b_k-X_{ni}^{T}\beta (n))-\rho _{\tau _{k}} (Y_{ni}-b_{\tau _k}-X_{ni}^{T}\beta _0(n))\right] , \end{aligned}$$

where \(\beta _0(n)=S_n^{1/2}(\beta _0-b)\) and \(\beta _0\) is the true value of \(\beta \).
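For readers who wish to reproduce the estimators numerically, the (weighted) CQR minimizer has a standard linear-programming representation of the check loss. The sketch below is ours, not the authors' code; it assumes numpy and scipy, and `cqr_fit` is a hypothetical name:

```python
import numpy as np
from scipy.optimize import linprog

def cqr_fit(X, y, taus, weights=None):
    """Composite quantile regression via the standard LP form of the check
    loss: minimize sum_k sum_i w_i * rho_{tau_k}(y_i - b_k - x_i'beta) over
    K intercepts b_k and a common slope beta (dense LP; small n only)."""
    n, p = X.shape
    K = len(taus)
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    nK = n * K
    # Variables: [b_1..b_K, beta, u (nK), v (nK)], with u, v >= 0 and
    # b_k + x_i'beta + u_ki - v_ki = y_i, so rho = tau*u + (1 - tau)*v.
    c = np.concatenate([np.zeros(K + p),
                        np.concatenate([tau * w for tau in taus]),
                        np.concatenate([(1 - tau) * w for tau in taus])])
    A_eq = np.zeros((nK, K + p + 2 * nK))
    for k in range(K):
        rows = slice(k * n, (k + 1) * n)
        A_eq[rows, k] = 1.0          # intercept b_k
        A_eq[rows, K:K + p] = X      # common slope beta
    A_eq[:, K + p:K + p + nK] = np.eye(nK)
    A_eq[:, K + p + nK:] = -np.eye(nK)
    b_eq = np.tile(y, K)
    bounds = [(None, None)] * (K + p) + [(0, None)] * (2 * nK)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:K], res.x[K:K + p]   # (b_hat, beta_hat)
```

Taking \(\omega _i\equiv 1\) gives the unweighted fit entering \(L_n\); passing random weights satisfying A3 gives the fit entering \(L_n^*\).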

Lemma 1

Under the conditions of Theorem 2, we have

$$\begin{aligned} \begin{aligned} \sqrt{n}(\hat{b}_k^*-b_{\tau _k})&=-\frac{1}{\sqrt{n}f(b_{\tau _k})}\sum _{i=1}^{n}\omega _i\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1),\\ \hat{\beta }^*(n)-\beta _0(n)&=-\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\sum _{i=1}^{n}\omega _iX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1), \end{aligned} \end{aligned}$$
$$\begin{aligned} L^*_{n}(\hat{b}^*_1,\ldots ,\hat{b}^*_K, \hat{\beta }^*(n))=&\sum _{i=1}^{n}\omega _iX_{ni}^T\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] (\hat{\beta }^*(n)-\beta _0(n))\\&+\frac{1}{2}\left( \sum _{k=1}^{K}f(b_{\tau _k})\right) (\hat{\beta }^*(n)-\beta _0(n))^T(\hat{\beta }^*(n)-\beta _0(n))+A^*+o_p(1), \end{aligned}$$

where \(A^*=-\frac{1}{2n}\sum _{k=1}^{K}f^{-1}(b_{\tau _k})\left[ \sum _{i=1}^{n}\omega _i\{I(\varepsilon _i<b_{\tau _k})-\tau _k\}\right] ^2\).

In particular, when \(\omega _i\equiv 1\), we have

$$\begin{aligned} \begin{aligned}&\sqrt{n}(\hat{b}_k-b_{\tau _k})=-\frac{1}{\sqrt{n}f(b_{\tau _k})}\sum _{i=1}^{n}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1),\\&\hat{\beta }(n)-\beta _0(n)=-\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\sum _{i=1}^{n}X_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1), \end{aligned} \end{aligned}$$
$$\begin{aligned} L_{n}(\hat{b}_1,\ldots ,\hat{b}_K, \hat{\beta }(n))=&\sum _{i=1}^{n}X_{ni}^T\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] (\hat{\beta }(n)-\beta _0(n))\\&+\frac{1}{2}\left( \sum _{k=1}^{K}f(b_{\tau _k})\right) (\hat{\beta }(n)-\beta _0(n))^T(\hat{\beta }(n)-\beta _0(n))+A+o_p(1), \end{aligned}$$

where \(A=-\frac{1}{2n}\sum _{k=1}^{K}f^{-1}(b_{\tau _k})\left[ \sum _{i=1}^{n}\{I(\varepsilon _i<b_{\tau _k})-\tau _k\}\right] ^2\).

Proof

Let \(\hat{\beta }^*(n)-\beta _0(n)=\mathbf{u}_{n}\) and \(\sqrt{n}(\hat{b}_{k}^*-b_{\tau _{k}})=v_{n,k}\). Then \((v_{n,1},\ldots ,v_{n,K},\mathbf{u}_{n})\) is the minimizer of the following criterion:

$$\begin{aligned} L_{n}^*=\sum _{k=1}^{K}\sum _{i=1}^{n}\omega _i\left[ \rho _{\tau _{k}} (\varepsilon _{i}-b_{\tau _{k}}-[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}])-\rho _{\tau _{k}} (\varepsilon _{i}-b_{\tau _{k}})\right] . \end{aligned}$$

We apply the identity (Knight 1998)

$$\begin{aligned} \rho _{\tau }(x-y)-\rho _{\tau }(x)=y\{I(x<0)-\tau \}+\int \limits _{0}^{y}\{I(x\le z)-I(x\le 0)\}dz, \end{aligned}$$

to rewrite \(L^*_{n}\) as follows:

$$\begin{aligned} \begin{aligned} L^*_{n}=&\sum _{k=1}^{K}\sum _{i=1}^{n}\omega _i[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}][I(\varepsilon _{i}<b_{\tau _{k}})-\tau _{k}]\\&+\sum _{k=1}^{K}\sum _{i=1}^{n}\omega _i\int \limits _{0}^{[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}]} [I(\varepsilon _{i}\le b_{\tau _{k}}+t)-I(\varepsilon _{i}\le b_{\tau _{k}})]dt\\ \equiv&L^*_{1n}+L^*_{2n}. \end{aligned} \end{aligned}$$
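The decomposition above rests entirely on Knight's identity; a quick numerical sanity check of the identity (ours, purely illustrative):

```python
import numpy as np

def rho(tau, r):
    """Check loss rho_tau(r) = r * (tau - I(r < 0))."""
    return r * (tau - (r < 0))

rng = np.random.default_rng(1)
for _ in range(200):
    tau, x, y = rng.uniform(0.05, 0.95), rng.normal(), rng.normal()
    lhs = rho(tau, x - y) - rho(tau, x)
    # Integral over [0, y] on a fine uniform grid (sign handled by y).
    zs = np.linspace(0.0, y, 100_001)
    integral = np.mean((x <= zs).astype(float) - (x <= 0.0)) * y
    rhs = y * ((x < 0) - tau) + integral
    assert abs(lhs - rhs) < 1e-3
print("Knight's identity checked on 200 random draws.")
```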

Denote \(L^*_{2n}=\sum _{k=1}^{K}L_{2n}^{*(k)}\), where \(L_{2n}^{*(k)}=\sum _{i=1}^{n}\omega _i\int _{0}^{[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}]} [I(\varepsilon _{i}\le b_{\tau _{k}}+t)-I(\varepsilon _{i}\le b_{\tau _{k}})]dt\). Using A3, and noting that \(\max _{1\le i\le n}\Vert X_{ni}\Vert =d_n\rightarrow 0\) by A1, we have

$$\begin{aligned} E[L_{2n}^{*(k)}]&= \sum _{i=1}^{n}\int \limits _{0}^{[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}]} [F(b_{\tau _{k}}+t)-F( b_{\tau _{k}})]dt\\&= \frac{1}{n}\sum _{i=1}^{n}\int \limits _{0}^{[v_{k}+\sqrt{n}X_{ni}^{T}\mathbf{u}]}\sqrt{n}[F(b_{\tau _{k}}+t/\sqrt{n})-F( b_{\tau _{k}})]dt\\&\rightarrow \frac{1}{2}f(b_{\tau _k})(v_{k},\mathbf{u})\left[ \begin{array}{cc} 1&{} 0\\ 0 &{} I_p \end{array} \right] (v_{k},\mathbf{u}^T)^{T}.\\ Var[L_{2n}^{*(k)}]&\le 4E[L_{2n}^{*(k)}]\max _{1\le i\le n}|v_{k}/\sqrt{n}+X_{ni}^{T}\mathbf{u}|\rightarrow 0. \end{aligned}$$

Hence, \(L_{2n}^{*(k)} \xrightarrow {p}\frac{1}{2}f(b_{\tau _k})(v_{k},\mathbf{u})\left[ \begin{array}{cc} 1&{} 0\\ 0 &{} I_p \end{array} \right] (v_{k},\mathbf{u}^T)^{T}.\) Thus it follows that

$$\begin{aligned} \begin{aligned} L^*_{n}&= \sum _{k=1}^{K}\sum _{i=1}^{n}\omega _i[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}][I(\varepsilon _{i}<b_{\tau _{k}})-\tau _{k}]\\&\quad +\frac{1}{2}\sum _{k=1}^{K}f(b_{\tau _k})(v_{k},\mathbf{u})\left[ \begin{array}{cc} 1&{} 0\\ 0 &{} I_p \end{array} \right] (v_{k},\mathbf{u}^T)^{T}+o_p(1). \end{aligned} \end{aligned}$$

Since \(L^*_{n}-\sum _{k=1}^{K}\sum _{i=1}^{n}\omega _i[v_{k}/ \sqrt{n}+X_{ni}^{T}\mathbf{u}][I(\varepsilon _{i}<b_{\tau _{k}})-\tau _{k}]\) converges in probability to the convex function \(\frac{1}{2}\sum _{k=1}^{K}f(b_{\tau _k})(v_{k},\mathbf{u})\left[ \begin{array}{cc} 1 & 0\\ 0 & I_p \end{array} \right] (v_{k},\mathbf{u}^T)^{T}\), it follows from the convexity lemma (Pollard 1991) that the quadratic approximation to \(L^*_{n}\) holds uniformly for \((v_{1},\ldots ,v_{K},\mathbf{u})\) in any compact set, which leads to

$$\begin{aligned} \begin{aligned}&\sqrt{n}(\hat{b}_k^*-b_{\tau _k})=-\frac{1}{\sqrt{n}f(b_{\tau _k})}\sum _{i=1}^{n}\omega _i\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1),\\&\hat{\beta }^*(n)-\beta _0(n)=-\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\sum _{i=1}^{n}\omega _iX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1). \end{aligned} \end{aligned}$$

Thus, we can obtain

$$\begin{aligned} L^*_{n}(\hat{b}^*_1,\ldots ,\hat{b}^*_K, \hat{\beta }^*(n))=&\sum _{i=1}^{n}\omega _iX_{ni}^T\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] (\hat{\beta }^*(n)-\beta _0(n))\\&+\frac{1}{2}\left( \sum _{k=1}^{K}f(b_{\tau _k})\right) (\hat{\beta }^*(n)-\beta _0(n))^T(\hat{\beta }^*(n)-\beta _0(n))+A^*+o_p(1). \end{aligned}$$

The Lemma is proved.

Now we proceed to prove the theorems.

Proof of Theorem 1

Suppose \(0<q<p\) and let \(K\) be a \(p\times (p-q)\) matrix of rank \(p-q\) such that \(H^TK=0\) and \(K^T\omega _{n}=0\). Without loss of generality, \(H_{0}\) and \(H_{2,n}\) can be written as

$$\begin{aligned} H_{0}:\beta -b=K\gamma ~\text {for some}~\gamma \in \mathbf{R}^{p-q},\qquad H_{2,n}:\beta -b=K\gamma +\omega _{n}~\text {for some}~\gamma \in \mathbf{R}^{p-q}. \end{aligned}$$

Write

$$\begin{aligned} H_{n}=S_n^{-1/2}H(H^{T}S_n^{-1}H)^{-1/2},~~~~K_{n}=S_n^{1/2}K(K^{T}S_nK)^{-1/2}, \end{aligned}$$

thus,

$$\begin{aligned} H_{n}^{T}H_{n}=I_{q},~~K_{n}^{T}K_{n}=I_{p-q},~~H_{n}^{T}K_{n}=0,~~H_{n}H_{n}^{T}+K_{n}K_{n}^{T}=I_{p}. \end{aligned}$$
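These four identities can be verified directly; the small numpy check below is ours (`msqrt` is a hypothetical helper for the symmetric square root):

```python
import numpy as np

def msqrt(A):
    """Symmetric square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T

rng = np.random.default_rng(2)
p, q = 5, 2
M = rng.normal(size=(p, p))
Sn = M @ M.T + p * np.eye(p)               # any positive definite S_n
H = rng.normal(size=(p, q))                # rank-q hypothesis matrix
K = np.linalg.svd(H.T)[2][q:].T            # basis of null(H^T): H^T K = 0

Sh, Shi = msqrt(Sn), np.linalg.inv(msqrt(Sn))
Hn = Shi @ H @ np.linalg.inv(msqrt(H.T @ np.linalg.inv(Sn) @ H))
Kn = Sh @ K @ np.linalg.inv(msqrt(K.T @ Sn @ K))

assert np.allclose(Hn.T @ Hn, np.eye(q))
assert np.allclose(Kn.T @ Kn, np.eye(p - q))
assert np.allclose(Hn.T @ Kn, np.zeros((q, p - q)))
assert np.allclose(Hn @ Hn.T + Kn @ Kn.T, np.eye(p))
print("All four identities hold.")
```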

When \(H_0\) is true, model (5.1) can be written as

$$\begin{aligned} Y_{ni}=X_{ni}^TK_n\gamma _0(n)+\varepsilon _i,~~i=1,\ldots ,n, \end{aligned}$$

where \(\gamma _0(n)=(K^{T}S_nK)^{1/2}\gamma \). Then, replacing \(X_{ni}\) in (5.1) by \(K_n^TX_{ni}\) and using an argument similar to that of Lemma 1, we have

$$\begin{aligned} \tilde{b}_k-b_{\tau _k}&= -\frac{1}{nf(b_{\tau _k})}\sum _{i=1}^{n}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1),\\ \hat{\gamma }(n)-\gamma _0(n)&= -\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\sum _{i=1}^{n}K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1), \end{aligned}$$
$$\begin{aligned} L_{n}(\tilde{b}_1,\ldots ,\tilde{b}_K, \tilde{\beta }(n))=&\sum _{i=1}^{n}X_{ni}^TK_n\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] (\hat{\gamma }(n)-\gamma _0(n))\\&+\frac{1}{2}\left( \sum _{k=1}^{K}f(b_{\tau _k})\right) (\hat{\gamma }(n)-\gamma _0(n))^TK_n^TK_n(\hat{\gamma }(n)-\gamma _0(n))+A+o_p(1)\\ =&-\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+A+o_p(1), \end{aligned}$$

where \(\hat{\gamma }(n)\) is the CQR estimate of \(\gamma _0(n)\) and \(\tilde{\beta }(n)=S_n^{1/2}(\tilde{\beta }-b)=K_n\hat{\gamma }(n)\).

Hence, under the null hypotheses, we have

$$\begin{aligned} M_{n}&= L_{n}(\tilde{b}_1,\ldots ,\tilde{b}_K, \tilde{\beta }(n))-L_{n}(\hat{b}_1,\ldots ,\hat{b}_K, \hat{\beta }(n))\\&= -\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2\\&\quad +\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}X_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+o_{p}(1)\\&= \frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}H_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+o_{p}(1). \end{aligned}$$

When \(H_{2,n}\) is true, model (5.1) can be written as

$$\begin{aligned} Y_{ni}=X_{ni}^T[K_n\gamma _2(n)+H_n\delta (n)]+\varepsilon _i,~~i=1,\ldots ,n, \end{aligned}$$

where \(\gamma _2(n)=(K^{T}S_nK)^{1/2}\gamma +K_n^TS_n^{1/2}\omega _n\) and \(\delta (n)=H_n^TS_n^{1/2}\omega _n\). Then by Lemma 1, we have

$$\begin{aligned} \tilde{b}_k-b_{\tau _k}&= -\frac{1}{nf(b_{\tau _k})}\sum _{i=1}^{n}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1),\\ \hat{\gamma }_2(n)-\gamma _2(n)&= -\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\sum _{i=1}^{n}K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] +o_p(1), \end{aligned}$$
$$\begin{aligned} L_{n}(\tilde{b}_1,\ldots ,\tilde{b}_K, \tilde{\beta }(n))=&\sum _{i=1}^{n}X_{ni}^T\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] (K_n\hat{\gamma }_2(n)-K_n\gamma _{2}(n)-H_n\delta (n))\\&+\frac{1}{2}\left( \sum _{k=1}^{K}f(b_{\tau _k})\right) (K_n\hat{\gamma }_2(n)-K_n\gamma _{2}(n)-H_n\delta (n))^T(K_n\hat{\gamma }_2(n)-K_n\gamma _{2}(n)-H_n\delta (n))+A+o_p(1)\\ =&-\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2\\&-\sum _{i=1}^{n}X_{ni}^TH_n\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \delta (n)+\frac{1}{2}\sum _{k=1}^{K}f(b_{\tau _k})\Vert \delta (n)\Vert ^2+A+o_p(1). \end{aligned}$$

Hence, under the local alternative hypotheses, we have

$$\begin{aligned} M_{n}=&L_{n}(\tilde{b}_1,\ldots ,\tilde{b}_K, \tilde{\beta }(n))-L_{n}(\hat{b}_1,\ldots ,\hat{b}_K, \hat{\beta }(n))\\ =&-\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2\\&+\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}X_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2\\&-\sum _{i=1}^{n}X_{ni}^TH_n\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \delta (n)+\frac{1}{2}\sum _{k=1}^{K}f(b_{\tau _k})\Vert \delta (n)\Vert ^2+o_{p}(1)\\ =&\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}H_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2\\&-\sum _{i=1}^{n}X_{ni}^TH_n\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \delta (n)+\frac{1}{2}\sum _{k=1}^{K}f(b_{\tau _k})\Vert \delta (n)\Vert ^2+o_{p}(1)\\ =&\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}H_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] -\sum _{k=1}^{K}f(b_{\tau _k})\delta (n)\right\| ^2+o_{p}(1)\\ =&\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}H_n^TX_{ni}\sum _{k=1}^{K}\left[ \tau _k-I(\varepsilon _i<b_{\tau _k})\right] +\sum _{k=1}^{K}f(b_{\tau _k})\delta (n)\right\| ^2+o_{p}(1). \end{aligned}$$

The theorem is thus proved.

Proof of Theorem 2

By Lemma 1, we can obtain

$$\begin{aligned}&L^*_{n}(\hat{b}^*_1,\ldots ,\hat{b}^*_K, \hat{\beta }^*(n))-L^*_{n}(\hat{b}_1,\ldots ,\hat{b}_K, \hat{\beta }(n))\\&\quad =\sum _{i=1}^{n}\omega _iX_{ni}^T\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] (\hat{\beta }^*(n)-\hat{\beta }(n))\\&\qquad +\frac{1}{2}\left( \sum _{k=1}^{K}f(b_{\tau _k})\right) \left[ (\hat{\beta }^*(n)-\beta _0(n))^T(\hat{\beta }^*(n)-\beta _0(n))-(\hat{\beta }(n)-\beta _0(n))^T(\hat{\beta }(n)-\beta _0(n))\right] +o_p(1)\\&\quad =-\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}(\omega _i-1)X_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+o_{p}(1). \end{aligned}$$

Similarly, under the null and local alternative hypotheses, we can obtain

$$\begin{aligned}&L^*_{n}(\tilde{b}^*_1,\ldots ,\tilde{b}^*_K, \tilde{\beta }^*(n))-L^*_{n}(\tilde{b}_1,\ldots ,\tilde{b}_K, \tilde{\beta }(n))\\&\quad =-\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}(\omega _i-1)K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+o_{p}(1), \end{aligned}$$

where \(\hat{\beta }^*(n)=S_n^{1/2}(\hat{\beta }^*-b)\) and \(\tilde{\beta }^*(n)=S_n^{1/2}(\tilde{\beta }^*-b)\). Therefore, we can obtain

$$\begin{aligned} M_n^*=&\left[ L^*_{n}(\hat{b}^*_1,\ldots ,\hat{b}^*_K, \hat{\beta }^*(n))-L^*_{n}(\hat{b}_1,\ldots ,\hat{b}_K, \hat{\beta }(n))\right] -\left[ L^*_{n}(\tilde{b}^*_1,\ldots ,\tilde{b}^*_K, \tilde{\beta }^*(n))-L^*_{n}(\tilde{b}_1,\ldots ,\tilde{b}_K, \tilde{\beta }(n))\right] \\ =&-\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}(\omega _i-1)X_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2\\&+\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}(\omega _i-1)K_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+o_{p}(1)\\ =&\frac{1}{2}\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-1}\left\| \sum _{i=1}^{n}(\omega _i-1)H_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \right\| ^2+o_{p}(1). \end{aligned}$$

By checking the Lindeberg condition, we find that the conditional distribution of \(\sum _{i=1}^{n}(\omega _i-1)H_n^TX_{ni}\sum _{k=1}^{K}\left[ I(\varepsilon _i<b_{\tau _k})-\tau _k\right] \) given \(Y_1,\ldots ,Y_n\) converges to \(N\left( \mathbf{0}, A\left[ \sum _{k=1}^{K}f(b_{\tau _k})\right] ^{-2}\cdot I_q\right) \). Therefore, the conditional distribution of \(M_n^*\) given \(Y_1,\ldots ,Y_n\) converges to that of \(\chi ^2_q/\left[ 2A^{-1}\sum _{k=1}^{K}f(b_{\tau _k})\right] \). The proof of Theorem 2 is completed.
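For completeness, here is how the whole procedure might be wired together for the special case \(H_0:\beta _j=0,\ j\in \mathcal {J}\) (a sub-case of the general linear hypothesis above). This is our schematic reconstruction, reusing the hypothetical `cqr_fit` sketched earlier, not the authors' implementation:

```python
import numpy as np

def cqr_loss(X, y, taus, b, beta, w=None):
    """(Weighted) composite check loss at intercepts b and slope beta.
    Differences of these raw losses equal differences of the centered
    L_n / L_n^* in the paper, since the reference term cancels."""
    w = np.ones(len(y)) if w is None else w
    total = 0.0
    for tau, bk in zip(taus, b):
        r = y - bk - X @ beta
        total += np.sum(w * r * (tau - (r < 0)))
    return total

def rw_cqr_test(X, y, taus, tested, B=500, seed=0):
    """CQR test that the coefficients indexed by `tested` are zero,
    calibrated by the random weighting method (Exp(1) weights, A3)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    keep = [j for j in range(p) if j not in tested]
    b1, beta1 = cqr_fit(X, y, taus)                    # unrestricted fit
    b0, g0 = cqr_fit(X[:, keep], y, taus)              # restricted fit
    beta0 = np.zeros(p); beta0[keep] = g0
    # M_n: loss gap between restricted and unrestricted fits.
    Mn = cqr_loss(X, y, taus, b0, beta0) - cqr_loss(X, y, taus, b1, beta1)
    stats = np.empty(B)
    for t in range(B):
        w = rng.exponential(1.0, n)
        b1s, bet1s = cqr_fit(X, y, taus, weights=w)
        b0s, g0s = cqr_fit(X[:, keep], y, taus, weights=w)
        bet0s = np.zeros(p); bet0s[keep] = g0s
        full = (cqr_loss(X, y, taus, b1s, bet1s, w)
                - cqr_loss(X, y, taus, b1, beta1, w))
        rest = (cqr_loss(X, y, taus, b0s, bet0s, w)
                - cqr_loss(X, y, taus, b0, beta0, w))
        stats[t] = full - rest                         # M_n^* of Theorem 2
    return Mn, float(np.mean(stats >= Mn))             # statistic, p-value
```

The null hypothesis is rejected when the returned p-value, i.e. the proportion of randomly weighted statistics \(M_n^*\) exceeding the observed \(M_n\), falls below the nominal level.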

Cite this article

Jiang, R., Qian, WM. & Li, JR. Testing in linear composite quantile regression models. Comput Stat 29, 1381–1402 (2014). https://doi.org/10.1007/s00180-014-0497-y
