
Weighted composite quantile regression for single index model with missing covariates at random


Abstract

This paper considers weighted composite quantile estimation of the single-index model with missing covariates at random. Under some regularity conditions, we establish the large-sample properties of the estimated index parameters and link function. The large-sample properties of the parametric part show that the estimator with estimated selection probability has a smaller limiting variance than the one with the true selection probability. However, the large-sample properties of the estimated link function indicate that whether the weights are estimated or not has no effect on the asymptotic variance. Simulation studies and a real data analysis are presented to illustrate the behavior of the proposed estimators.


References

  • Chaudhuri P, Doksum K, Samarov A (1997) On average derivative quantile regression. Ann Stat 25:715–744

  • Guo X, Xu W, Zhu L (2014) Multi-index regression models with missing covariates at random. J Multivar Anal 123:345–363

  • Härdle W, Stoker TM (1989) Investigating smooth multiple regression by the method of average derivatives. J Am Stat Assoc 84:986–995

  • Hjort NL, Pollard D (2011) Asymptotics for minimisers of convex processes. arXiv:1107.3806

  • Horvitz DG, Thompson DJ (1952) A generalization of sampling without replacement from a finite universe. J Am Stat Assoc 47:663–685

  • Jiang R, Zhou Z, Qian W, Shao W (2012) Single-index composite quantile regression. J Korean Stat Soc 41:323–332

  • Kai B, Li R, Zou H (2010) Local composite quantile regression smoothing: an efficient and safe alternative to local polynomial regression. J R Stat Soc Ser B 72:49–69

  • Kai B, Li R, Zou H (2011) New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann Stat 39:305–332

  • Knight K (1998) Limiting distributions for L1 regression estimators under general conditions. Ann Stat 26:755–770

  • Koenker R, Bassett G (1978) Regression quantiles. Econometrica 46:33–50

  • Kong E, Xia Y (2014) An adaptive composite quantile approach to dimension reduction. Ann Stat 42:1657–1688

  • Li KC (1991) Sliced inverse regression for dimension reduction. J Am Stat Assoc 86:316–327

  • Li KC (1992) On principal Hessian directions for data visualization and dimension reduction: another application of Stein’s lemma. J Am Stat Assoc 87:1025–1039

  • Li T, Yang H (2016) Inverse probability weighted estimators for single-index models with missing covariates. Commun Stat Theory Methods 45:1199–1214

  • Li J, Li Y, Zhang R (2017) B-spline variable selection for the single-index models. Stat Pap 58:691–706

  • Liang H (2008) Generalized partially linear models with missing covariates. J Multivar Anal 99:880–895

  • Liang H, Wang S, Robins JM, Carroll RJ (2004) Estimation in partially linear models with missing covariates. J Am Stat Assoc 99:357–367

  • Little RJ, Rubin DB (1987) Statistical analysis with missing data. Wiley, New York

  • Liu H, Yang H (2017) Estimation and variable selection in single-index composite quantile regression. Commun Stat Simul Comput 46:7022–7039

  • Liu H, Yang H, Xia X (2017) Robust estimation and variable selection in censored partially linear additive models. J Korean Stat Soc 46:88–103

  • Lv Y, Zhang R, Zhao W, Liu J (2014) Quantile regression and variable selection for the single-index model. J Appl Stat 41:1565–1577

  • Mack YP, Silverman BW (1982) Weak and strong uniform consistency of kernel regression estimates. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 61:405–415

  • Peng H, Huang T (2011) Penalized least squares for single index models. J Stat Plan Inference 141:1362–1379

  • Sherwood B, Wang L, Zhou XH (2013) Weighted quantile regression for analyzing health care cost data with missing covariates. Stat Med 32:4967–4979

  • Wang CY, Wang S, Gutierrez RG, Carroll RJ (1998) Local linear regression for generalized linear models with missing data. Ann Stat 26:1028–1050

  • Wang CY, Chen HY (2001) Augmented inverse probability weighted estimator for Cox missing covariate regression. Biometrics 57:414–419

  • Wong H, Guo S, Chen M, Ip WC (2009) On locally weighted estimation and hypothesis testing of varying-coefficient models with missing covariates. J Stat Plan Inference 139:2933–2951

  • Wu T, Yu K, Yu Y (2010) Single-index quantile regression. J Multivar Anal 101:1607–1621

  • Xia Y, Tong H, Li WK, Zhu LX (2002) An adaptive estimation of dimension reduction space. J R Stat Soc Ser B 64:363–410

  • Xia Y, Härdle W (2006) Semi-parametric estimation of partially linear single-index models. J Multivar Anal 97:1162–1184

  • Yang H, Liu HL (2016) Penalized weighted composite quantile estimators with missing covariates. Stat Pap 57:69–88

  • Zou H, Yuan M (2008) Composite quantile regression and the oracle model selection theory. Ann Stat 36:1108–1126


Acknowledgements

The authors sincerely thank the Editor, the Associate Editor and two Reviewers for their helpful comments and suggestions, which led to a significant improvement of this paper. Liu’s work is supported by the National Natural Science Foundation of China (Grant No. 11761020), the China Postdoctoral Science Foundation (Grant No. 2017M623067), the Open Foundation of Guizhou Provincial Key Laboratory of Public Big Data (Grant No. 2017BDKFJJ030), the Scientific Research Foundation for Young Talents of the Department of Education of Guizhou Province (Grant No. 2017104), and the Science and Technology Foundation of Guizhou Province (Grant No. QKH20177222). Yang’s work is supported by the National Natural Science Foundation of China (Grant No. 11671059). Peng’s work is supported by the National Natural Science Foundation of China (Grant No. 61662009) and the Science and Technology Foundation of Guizhou Province (Grant No. QKH20183001).

Author information

Correspondence to Huilan Liu.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Lemma 1

Let \((X_{1}, Y_{1})\), \((X_{2}, Y_{2}),\ldots , (X_{n}, Y_{n})\) be independent and identically distributed random vectors, where the \(Y_{i}\) are scalar random variables. Furthermore, assume that \(E|Y|^{s}<\infty \) and \(\sup _{x}\int |y|^{s}f(x,y)dy<\infty \), where \(f(\cdot ,\cdot )\) denotes the joint density of \((X,Y)\). Let K be a bounded positive kernel with bounded support, satisfying the Lipschitz condition. If \(n^{2\varepsilon -1}h\rightarrow \infty \) for some \(\varepsilon <1-s^{-1}\), then

$$\begin{aligned} \sup _{x}|\frac{1}{n}\sum _{i=1}^{n}[K_{h}(X_{i}-x)Y_{i}-E(K_{h}(X_{i}-x)Y_{i})]|=O_{p}(\sqrt{\log (1/h)/(nh)}). \end{aligned}$$

Proof

This follows immediately from the result obtained by Mack and Silverman (1982). \(\square \)
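For intuition, the uniform rate in Lemma 1 is easy to observe numerically. The following is a minimal Monte Carlo sketch, not the paper's code; the Epanechnikov kernel and the model \(Y=\sin (2\pi X)+\varepsilon \) are assumptions chosen purely for illustration.

```python
# Sketch of Lemma 1: sup-norm deviation of a kernel average from its mean,
# compared against the rate sqrt(log(1/h)/(nh)). Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def epanechnikov(u):
    # bounded, compactly supported, Lipschitz kernel
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

n, h = 5000, 0.1
X = rng.uniform(0.0, 1.0, n)                        # density f_X = 1 on [0, 1]
Y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.5, n)

grid = np.linspace(0.1, 0.9, 200)                   # interior evaluation points
Kh = epanechnikov((X[None, :] - grid[:, None]) / h) / h
emp = Kh @ Y / n                                    # (1/n) sum_i K_h(X_i - x) Y_i

# E[K_h(X - x) Y] = int K_h(t - x) sin(2 pi t) dt, approximated by a Riemann sum.
t = np.linspace(0.0, 1.0, 20001)
Kt = epanechnikov((t[None, :] - grid[:, None]) / h) / h
expect = (Kt * np.sin(2 * np.pi * t)).sum(axis=1) * (t[1] - t[0])

sup_dev = np.max(np.abs(emp - expect))
rate = np.sqrt(np.log(1.0 / h) / (n * h))
print(f"sup deviation: {sup_dev:.4f}; rate sqrt(log(1/h)/(nh)): {rate:.4f}")
```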

Lemma 2

(Quadratic Approximation Lemma) Suppose \(A_{n}(s)\) is convex and can be represented as \(\frac{1}{2}s^{T}Vs+U^{T}_{n}s+C_{n}+r_{n}(s)\), where V is symmetric and positive definite, \(U_{n}\) is stochastically bounded, \(C_{n}\) is arbitrary, and \(r_{n}(s)\) goes to zero in probability for each s. Then \(\alpha _{n}\), the minimizer of \(A_{n}(s)\), is only \(o_{p}(1)\) away from \(\beta _{n}=-V^{-1}U_{n}\), the minimizer of \(\frac{1}{2}s^{T}Vs+U^{T}_{n}s+C_{n}\). If also \(U_{n}\xrightarrow {d} U\), then \(\alpha _{n}\xrightarrow {d} -V^{-1}U\).

The proof of Lemma 2 is available from Hjort and Pollard (2011). The proof of the quadratic approximation lemma indicates that the positive definiteness of the matrix V can be relaxed to positive semi-definiteness, or even to the existence of a generalized inverse; see Wu et al. (2010).
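A toy numerical illustration of Lemma 2 is given below; it is a hedged sketch with made-up values of \(V\), \(U_{n}\) and the remainder \(r_{n}\), showing that the minimizer of the full convex process stays close to \(-V^{-1}U_{n}\).

```python
# Toy check of the quadratic approximation lemma: minimize the convex process
# A_n(s) = 0.5 s'Vs + U_n's + r_n(s) and compare with beta_n = -V^{-1} U_n.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
V = np.array([[2.0, 0.3], [0.3, 1.0]])     # symmetric positive definite
U_n = rng.normal(size=2)                   # stochastically bounded linear term
r_n = lambda s: 1e-3 * np.abs(s).sum()     # small convex remainder, r_n -> 0

A_n = lambda s: 0.5 * s @ V @ s + U_n @ s + r_n(s)
alpha_n = minimize(A_n, x0=np.zeros(2), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12}).x
beta_n = -np.linalg.solve(V, U_n)          # minimizer of the quadratic part

print("alpha_n:", alpha_n)                 # nearly identical to beta_n
print("beta_n :", beta_n)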

Lemma 3

Suppose that Conditions (C1)–(C7) hold. Let \({\hat{\beta }}^{0}\) be the initial estimator of \(\beta _{0}\). As \(h\rightarrow 0\) and \(\delta _{n}\rightarrow 0\), we have

$$\begin{aligned}&\sqrt{nh}\left( \begin{array}{c} {\hat{a}}_{WCQR1}(u,{\hat{\beta }}^{0})-g(u)-c_{1}+g^{'}(u)E(X|u)^{T}{\tilde{\beta }}-\frac{h^{2}}{2}g^{''}(u)\mu _{2}+O(h^{2}\delta _{\beta }+\delta ^{2}_{\beta }+h^{3})\\ ... \\ {\hat{a}}_{WCQRq}(u,{\hat{\beta }}^{0})-g(u)-c_{q}+g^{'}(u)E(X|u)^{T}{\tilde{\beta }}-\frac{h^{2}}{2}g^{''}(u)\mu _{2}+O(h^{2}\delta _{\beta }+\delta ^{2}_{\beta }+h^{3})\\ h({\hat{g}}_{WCQR}'(u,{\hat{\beta }}^{0})-g'(u)+O(\delta _{\beta }+h^{2}))\\ \end{array} \right) \\&\quad =-\frac{S^{-1}}{f_{U_{0}}(u)}W_{n1}+o_{p}(1), \end{aligned}$$

where \({\tilde{\beta }}={\hat{\beta }}^{0}-\beta _{0}\), \(\delta _{n}=\sqrt{\log (1/h)/(nh)}\) and \(\delta _{\beta }=|{\hat{\beta }}^{0}-\beta _{0}|\).

Proof

Recall that \(({\hat{a}}_{WCQR1}(u,{\hat{\beta }}^{0}),...,{\hat{a}}_{WCQRq}(u,{\hat{\beta }}^{0}),{\hat{g}}_{WCQR}'(u,{\hat{\beta }}^{0}))\) are obtained by minimizing the following objective function

$$\begin{aligned}&\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\rho _{\tau _{k}}[Y_{i}-a_{k}-b(X_{i}^{T}{\hat{\beta }}^{0}-u)]K_{i}(u), \end{aligned}$$
(A.1)

where \(K_{i}(u)=K_{h}(X_{i}^{T}{\hat{\beta }}^{0}-u).\) Let \({\hat{\theta }}=\sqrt{nh}\{({\hat{a}}_{WCQR1}(u,{\hat{\beta }}^{0})-g(u)-c_{1}),...,({\hat{a}}_{WCQRq}(u,{\hat{\beta }}^{0})-g(u)-c_{q}),h({\hat{g}}_{WCQR}'(u,{\hat{\beta }}^{0})-g'(u))\}\), \(r_{i}(u)=g(X_{i}^{T}\beta _{0})-g(u)-g^{'}(u)(X_{i}^{T}{\hat{\beta }}^{0}-u)\), \(r^{0}_{i}(u)=g(X_{i}^{T}{\hat{\beta }}^{0})-g(u)-g^{'}(u)(X_{i}^{T}{\hat{\beta }}^{0}-u)\), \(\eta _{ik}=I(\epsilon _{i}-c_{k}<0)-\tau _{k}\), \(\eta _{ik}(u)=I(\epsilon _{i}-c_{k}+r_{i}(u)<0)-\tau _{k}\), \(X_{ik}=(e_{k},(X_{i}^{T}{\hat{\beta }}^{0}-u)/h)^{T}\) and \(X^{*}_{k}=(e_{k},(X^{T}{\hat{\beta }}^{0}-u)/h)^{T}\), where \(e_{k}\) is the q-vector with 1 in the kth position and 0 elsewhere. Then, \({\hat{\theta }}\) is also the minimizer of

$$\begin{aligned} L_{n}(\theta )=\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\left\{ \rho _{\tau _{k}}[\epsilon _{i}-c_{k}{+}r_{i}(u){-}\frac{X_{ik}^{T}\theta }{\sqrt{nh}}]-\rho _{\tau _{k}}[\epsilon _{i}-c_{k}+r_{i}(u)]\right\} K_{i}(u). \end{aligned}$$
(A.2)
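As an aside, (A.1) is a convex program in \((a_{1},...,a_{q},b)\) and can be minimized directly. The following is a hedged numerical sketch on synthetic data; the Gaussian kernel, the model \(Y=\sin (X^{T}\beta _{0})+\epsilon \), the known selection probability \(\pi \equiv 0.8\), and the use of \(\beta _{0}\) itself as the pilot estimate are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of minimizing the local WCQR objective (A.1) at a point u.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, q, h, u = 500, 5, 0.3, 0.0
taus = np.arange(1, q + 1) / (q + 1)             # tau_k = k/(q+1)

beta0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
X = rng.normal(size=(n, 2))
Y = np.sin(X @ beta0) + 0.3 * rng.normal(size=n)
V = rng.binomial(1, 0.8, n)                      # observation indicators V_i
w = V / 0.8                                      # weights V_i / pi(Y_i)

U = X @ beta0                                    # single index, pilot = beta0
K = np.exp(-0.5 * ((U - u) / h) ** 2) / h        # Gaussian K_h(X_i'beta - u)

def check(r, tau):                               # check loss rho_tau
    return r * (tau - (r < 0))

def objective(theta):                            # theta = (a_1, ..., a_q, b)
    a, b = theta[:q], theta[q]
    resid = Y[None, :] - a[:, None] - b * (U - u)[None, :]
    return sum((w * K * check(resid[k], taus[k])).sum() for k in range(q))

fit = minimize(objective, x0=np.zeros(q + 1), method="Nelder-Mead",
               options={"maxiter": 50000, "maxfev": 50000})
a_hat, b_hat = fit.x[:q], fit.x[q]
# averaging the a_hat's assumes the c_k average to zero (symmetric error)
print("ghat(0)  ~", a_hat.mean(), " true g(0)  =", np.sin(0.0))
print("ghat'(0) ~", b_hat, " true g'(0) =", np.cos(0.0))
```

Here the average of the \({\hat{a}}_{k}\) is used as a proxy for \(g(u)\), which is adequate when the \(c_{k}\) average to zero, e.g. for a symmetric error distribution with equally spaced \(\tau _{k}\).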

By Knight (1998), for any \(x\ne 0\), we have

$$\begin{aligned} \rho _{\tau }(x-y)-\rho _{\tau }(x)=y[I(x<0)-\tau ]+\int _{0}^y[I(x\le t)-I(x\le 0)]dt. \end{aligned}$$
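This identity is elementary but easy to sanity-check numerically; in the quick sketch below (illustrative values only) the integral term is approximated by a midpoint Riemann sum.

```python
# Numerical check of Knight's identity for the check function
# rho_tau(u) = u * (tau - I(u < 0)).
import numpy as np

def rho(u, tau):
    return u * (tau - (u < 0))

def knight_rhs(x, y, tau, N=200_000):
    lead = y * (float(x < 0) - tau)
    t = (np.arange(N) + 0.5) * y / N                   # midpoint grid on [0, y]
    integral = ((x <= t).astype(float) - float(x <= 0)).mean() * y
    return lead + integral

tau, x, y = 0.3, 0.7, 1.5
print(rho(x - y, tau) - rho(x, tau))    # 0.35
print(knight_rhs(x, y, tau))            # ~0.35, matching the identity
```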

Then, we can get

$$\begin{aligned} L_{n}(\theta )=W_{n}^{T}\theta +\sum _{k=1}^q B_{n,k}(\theta ), \end{aligned}$$

where

$$\begin{aligned} W_{n}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik} \end{aligned}$$

and

$$\begin{aligned} B_{n,k}(\theta )=\sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\int _{0}^{\frac{X_{ik}^{T}\theta }{\sqrt{nh}}}[I(\epsilon _{i}-c_{k}+r_{i}(u)\le t)-I(\epsilon _{i}-c_{k}+r_{i}(u)\le 0)]dt K_{i}(u). \end{aligned}$$

Since \(B_{n,k}(\theta )\) is a sum of i.i.d. random variables of kernel form, Lemma 1 yields

$$\begin{aligned} B_{n,k}(\theta )=E[B_{n,k}(\theta )]+O_{p}(\delta _{n}). \end{aligned}$$

Let \({\hat{\chi }}^{0}\) be the \(\sigma \)-field generated by \(\{X_{1}^{T}{\hat{\beta }}^{0},...,X_{n}^{T}{\hat{\beta }}^{0}\}\). Since \({\hat{\beta }}^{0}\) is a \(\sqrt{n}\)-consistent estimator of \(\beta _{0}\), we have \(f_{U}(.)=f_{U_{0}}(.)(1+o_{p}(1))\). Hence, the expectation of \(\sum _{k=1}^qB_{n,k}(\theta )\) is

$$\begin{aligned} \sum _{k=1}^qE[B_{n,k}(\theta )]=\sum _{k=1}^qE\{E[B_{n,k}(\theta )|{\hat{\chi }}^{0}]\}=\frac{f_{U_{0}}(u)}{2}\theta ^{T}S\theta +O(\delta _{\beta })+O(h^{2}), \end{aligned}$$

where \(S=diag(C,\mu _{2}c)\), \(C=diag(f_{\epsilon }(c_{1}),...,f_{\epsilon }(c_{q}))\), \(c=\sum _{k=1}^qf_{\epsilon }(c_{k})\). Hence, we have

$$\begin{aligned} L_{n}(\theta )&=W_{n}^{T}\theta +\frac{f_{U_{0}}(u)}{2}\theta ^{T}S\theta +O(\delta _{\beta })+O(h^{2})+O_{p}(\delta _{n})\nonumber \\&=W_{n}^{T}\theta +\frac{f_{U_{0}}(u)}{2}\theta ^{T}S\theta +o_{p}(1). \end{aligned}$$
(A.3)

By Lemma 2, we have

$$\begin{aligned} {\hat{\theta }}=-\frac{S^{-1}}{f_{U_{0}}(u)}W_{n}+o_{p}(1). \end{aligned}$$
(A.4)

Let \(W_{n1}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\eta _{ik}K_{i}(u)X_{ik}\). Then \(Var(W_{n}-W_{n1})=o_{p}(1)\), which implies

$$\begin{aligned} W_{n}-W_{n1}-E[W_{n}]=o_{p}(1). \end{aligned}$$

Then, we get

$$\begin{aligned} W_{n}=W_{n1}+E[W_{n}]+o_{p}(1). \end{aligned}$$
(A.5)

Combining (A.4) and (A.5), we obtain

$$\begin{aligned} {\hat{\theta }}=-\frac{S^{-1}}{f_{U_{0}}(u)}W_{n1}-\frac{S^{-1}}{f_{U_{0}}(u)}E[W_{n}]+o_{p}(1). \end{aligned}$$
(A.6)

Considering \(E[W_{n}]\), we have

$$\begin{aligned} \frac{1}{\sqrt{nh}}E[W_{n}]&=-\frac{1}{nh}\sum _{k=1}^q \sum _{i=1}^nE\{f_{\epsilon }(c_{k})[g(X_{i}^{T}\beta _{0})-g(X_{i}^{T}{\hat{\beta }}^{0})\nonumber \\&\quad +r^{0}_{i}(u)]K_{i}(u)X_{ik}(1+O_{p}(n^{-1/2}+h^{2}))\}\nonumber \\&=\frac{1}{nh}\sum _{k=1}^q \sum _{i=1}^nE\{f_{\epsilon }(c_{k})[g(X_{i}^{T}{\hat{\beta }}^{0})-g(X_{i}^{T}\beta _{0})]K_{i}(u)X_{ik}\}\nonumber \\&\quad -\frac{1}{nh}\sum _{k=1}^q \sum _{i=1}^nE[f_{\epsilon }(c_{k})r^{0}_{i}(u)K_{i}(u)X_{ik}]+o_{p}(1)\nonumber \\&=I_{1}-I_{2}+o_{p}(1). \end{aligned}$$

It is easy to show that

$$\begin{aligned} I_{1}&=\left( \begin{array}{c} g^{'}(u)f_{U_{0}}(u)E(X|u)^{T}({\hat{\beta }}^{0}-\beta _{0})\sum _{k=1}^q f_{\epsilon }(c_{k})e_{k}+O(h^{2}\delta _{\beta }+\delta ^{2}_{\beta })\\ O(h\delta _{\beta })\\ \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} I_{2}&=\frac{1}{2}\sum _{k=1}^q f_{\epsilon }(c_{k})\{\int g^{''}(u)t^{2}h^{2}\left( \begin{array}{c} e_{k}K(t) \\ 0\\ \end{array} \right) f_{U_{0}}(x)dx\}+O(h^{3})\nonumber \\&=\frac{h^{2}}{2}g^{''}(u)f_{U_{0}}(u)\left( \begin{array}{c} \sum _{k=1}^q f_{\epsilon }(c_{k})e_{k}\mu _{2} \\ 0\\ \end{array} \right) +O(h^{3}). \end{aligned}$$

Therefore,

$$\begin{aligned}&\frac{1}{\sqrt{nh}}E[W_{n}]\nonumber \\&\quad =\left( \begin{array}{c} f_{U_{0}}(u)\sum _{k=1}^q f_{\epsilon }(c_{k})e_{k}[ g^{'}(u)E(X|u)^{T}({\hat{\beta }}^{0}-\beta _{0})-\frac{h^{2}}{2}g^{''}(u)\mu _{2}] +O(h^{2}\delta _{\beta }+\delta ^{2}_{\beta }+h^{3})\\ O(h\delta _{\beta }+h^{3})\\ \end{array} \right) . \end{aligned}$$
(A.7)

Combining (A.6) and (A.7), we complete the proof of Lemma 3. \(\square \)

Lemma 4

Suppose that Conditions (C1)–(C7) hold. As \(h\rightarrow 0\) and \(\delta _{n}\rightarrow 0\), we have

$$\begin{aligned}&\sqrt{nh}\left( \begin{array}{c} {\hat{a}}_{NWCQR1}(u,{\hat{\beta }}^{0})-g(u)-c_{1}+g^{'}(u)E(X|u)^{T}{\tilde{\beta }}-\frac{h^{2}}{2}g^{''}(u)\mu _{2}+O(\Delta _{(n,h)})\\ ... \\ {\hat{a}}_{NWCQRq}(u,{\hat{\beta }}^{0})-g(u)-c_{q}+g^{'}(u)E(X|u)^{T}{\tilde{\beta }}-\frac{h^{2}}{2}g^{''}(u)\mu _{2}+O(\Delta _{(n,h)})\\ h({\hat{g}}_{NWCQR}'(u,{\hat{\beta }}^{0})-g'(u)+O(\delta _{\beta }+h^{2}+h+1/(nh^{2})))\\ \end{array} \right) \\&\quad =-\frac{S^{-1}}{f_{U_{0}}(u)}[W_{n1}-W_{n2}]+o_{p}(1), \end{aligned}$$

where \(\Delta _{(n,h)}=(h^{2}+1/(nh)+h^{2}\delta _{\beta }+\delta _{\beta }^{2}+h^{3})\), \({\tilde{\beta }}\), \(\delta _{n}\) and \(\delta _{\beta }\) are defined in Lemma 3.

Proof

Note that \(({\hat{a}}_{NWCQR1}(u,{\hat{\beta }}^{0}),...,{\hat{a}}_{NWCQRq}(u,{\hat{\beta }}^{0}),{\hat{g}}_{NWCQR}'(u,{\hat{\beta }}^{0}))\) are obtained by minimizing the following objective function

$$\begin{aligned}&\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{{\hat{\pi }}(Y_{i})}\rho _{\tau _{k}}[Y_{i}-a_{k}-b(X_{i}^{T}{\hat{\beta }}^{0}-u)]K_{i}(u). \end{aligned}$$
(A.8)

Let \({\hat{\theta }}^{*}=\sqrt{nh}\{({\hat{a}}_{NWCQR1}(u,{\hat{\beta }}^{0})-g(u)-c_{1}),...,({\hat{a}}_{NWCQRq}(u,{\hat{\beta }}^{0})-g(u)-c_{q}),h({\hat{g}}_{NWCQR}'(u,{\hat{\beta }}^{0})-g'(u))\}\). Then, \({\hat{\theta }}^{*}\) is also the minimizer of

$$\begin{aligned} L_{n}(\theta ^{*})&=\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{{\hat{\pi }}(Y_{i})}\left\{ \rho _{\tau _{k}}[\epsilon _{i}{-}c_{k}+r_{i}(u){-}\frac{X_{ik}^{T}\theta ^{*}}{\sqrt{nh}}]{-}\rho _{\tau _{k}}[\epsilon _{i}-c_{k}+r_{i}(u)]\right\} K_{i}(u)\nonumber \\&=W_{n}^{*T}\theta ^{*}+\sum _{k=1}^q B^{*}_{n,k}(\theta ^{*}), \end{aligned}$$
(A.9)

where

$$\begin{aligned} W_{n}^{*}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{{\hat{\pi }}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik} \end{aligned}$$

and

$$\begin{aligned} B^{*}_{n,k}(\theta ^{*})&=\sum _{i=1}^n\frac{V_{i}}{{\hat{\pi }}(Y_{i})}\int _{0}^{\frac{X_{ik}^{T}\theta ^{*}}{\sqrt{nh}}}[I(\epsilon _{i}-c_{k}\\&\quad +r_{i}(u)\le t)-I(\epsilon _{i}-c_{k}+r_{i}(u)\le 0)]dt K_{i}(u). \end{aligned}$$

Notice that

$$\begin{aligned}&B^{*}_{n,k}(\theta ^{*})\\&\quad =\sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\int _{0}^{\frac{X_{ik}^{T}\theta ^{*}}{\sqrt{nh}}}[I(\epsilon _{i}-c_{k}+r_{i}(u)\le t)-I(\epsilon _{i}-c_{k}+r_{i}(u)\le 0)]dt K_{i}(u)\\&\qquad -\sum _{i=1}^n\frac{V_{i}[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi (Y_{i}){\hat{\pi }}(Y_{i})}\int _{0}^{\frac{X_{ik}^{T}\theta ^{*}}{\sqrt{nh}}}[I(\epsilon _{i}-c_{k}+r_{i}(u) \le t)\\&\qquad -I(\epsilon _{i}-c_{k}+r_{i}(u)\le 0)]dt K_{i}(u)\\&\quad =I_{3}-I_{4}, \end{aligned}$$

where \(I_{3}\) denotes the first summation and \(I_{4}\) the second. From the proof of Lemma 3, we have \(I_{3}=\frac{f_{U_{0}}(u)}{2}\theta ^{*T}S\theta ^{*}+o_{p}(1)=O_{p}(1)\). Since \(\sup _{y}|{\hat{\pi }}(y)-\pi (y)|=o_{p}(1)\), we have \(I_{4}=o_{p}(1)\). It then follows that

$$\begin{aligned} \sum _{k=1}^qB^{*}_{n,k}(\theta ^{*})=\frac{f_{U_{0}}(u)}{2}\theta ^{*T}S\theta ^{*}+o_{p}(1). \end{aligned}$$
(A.10)

Recalling the definition of \(W_{n}^{*}\), we have

$$\begin{aligned} W_{n}^{*}&=W_{n}-\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi ^{2}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}+o_{p}(1)\nonumber \\&=W_{n}-\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{[V_{i}-\pi (Y_{i})][{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi ^{2}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}\nonumber \\&\quad -\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi (Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}+o_{p}(1)\nonumber \\&=W_{n1}+E[W_{n}]-I_{5}-I_{6}+o_{p}(1), \end{aligned}$$
(A.11)

where

$$\begin{aligned} I_{5}&=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{[V_{i}-\pi (Y_{i})][{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi ^{2}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik},\\ I_{6}&=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi (Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}, \end{aligned}$$

\(W_{n}\) and \(W_{n1}\) are given in Lemma 3.

Recalling the definition of \({\hat{\pi }}(y)\), we have

$$\begin{aligned} {\hat{\pi }}(y)-\pi (y)&=\frac{\sum _{j=1}^n(V_{j}-\pi (Y_{j}))L_{h}(Y_{j}-y)}{\sum _{j=1}^nL_{h}(Y_{j}-y)}\\&\quad +\frac{\sum _{j=1}^n(\pi (Y_{j})-\pi (y))L_{h}(Y_{j}-y)}{\sum _{j=1}^nL_{h}(Y_{j}-y)}\\&=\frac{1}{nf_{Y}(y)}\sum _{j=1}^n(V_{j}-\pi (Y_{j}))L_{h}(Y_{j}-y)\\&\quad +\frac{\sum _{j=1}^n(\pi (Y_{j})-\pi (y))L_{h}(Y_{j}-y)}{nf_{Y}(y)}\\&=\frac{1}{nf_{Y}(y)}\sum _{j=1}^n(V_{j}-\pi (Y_{j}))L_{h}(Y_{j}-y)\\&\quad +O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) , \end{aligned}$$

where the \(O(h^{2})\) and \(O_{p}(1/\sqrt{nh^{-1}})\) terms do not depend on \(V_{j}\), \(j=1,...,n\).
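For concreteness, \({\hat{\pi }}(y)\) is the usual Nadaraya–Watson estimator built from the observation indicators. A minimal sketch follows; the logistic form of \(\pi \) and the Gaussian kernel for \(L\) are assumptions made purely for illustration.

```python
# Kernel (Nadaraya-Watson) estimate of the selection probability pi(y)
# from pairs (Y_j, V_j), as in the definition of pi-hat above.
import numpy as np

rng = np.random.default_rng(3)
n, h = 2000, 0.25
Y = rng.normal(size=n)
pi_true = 1.0 / (1.0 + np.exp(-(0.5 + Y)))    # MAR: missingness driven by Y
V = rng.binomial(1, pi_true)                  # V_j = 1 if covariates observed

def pi_hat(y):
    L = np.exp(-0.5 * ((Y - y) / h) ** 2)     # L_h(Y_j - y); constants cancel
    return (V * L).sum() / L.sum()

for y in (-1.0, 0.0, 1.0):
    print(f"y={y:+.1f}  pi_hat={pi_hat(y):.3f}  "
          f"pi_true={1.0 / (1.0 + np.exp(-(0.5 + y))):.3f}")
```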

Noting that \(\sum _{k=1}^qE[\eta _{jk}(u)K_{j}(u)X_{jk}|Y_{j}]=O_{p}(h)\), we have

$$\begin{aligned} I_{5}&=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\sum _{j=1}^n\frac{[V_{i}-\pi (Y_{i})][V_{j}-\pi (Y_{j})]}{nf_{Y}(Y_{i})\pi ^{2}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}L_{h}(Y_{j}-Y_{i})\\&\quad +O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) \\&=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i\ne j}^n\frac{[V_{i}-\pi (Y_{i})][V_{j}-\pi (Y_{j})]}{nf_{Y}(Y_{i})\pi ^{2}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}L_{h}(Y_{j}-Y_{i})\\&\quad +\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{[V_{i}-\pi (Y_{i})]^2}{nf_{Y}(Y_{i})\pi ^{2}(Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}L_{h}(0)+O(h^{2})\\&\quad +\,O_{p}\left( 1/\sqrt{nh^{-1}}\right) \\&=I^{1}_{5}+I^{2}_{5}+O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) . \end{aligned}$$

By calculating the mean and the variance, we can get

$$\begin{aligned} I^{1}_{5}=O_{p}\left( 1/\sqrt{nh}\right) ,\quad I^{2}_{5}=O_{p}\left( 1/\sqrt{nh}\right) . \end{aligned}$$

Then, we have

$$\begin{aligned} I_{5}=O_{p}\left( 1/\sqrt{nh}\right) +O(h^{2}). \end{aligned}$$
(A.12)

For \(I_{6}\), we can get

$$\begin{aligned} I_{6}&=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{1}{\pi (Y_{i})}\eta _{ik}(u)K_{i}(u)X_{ik}\frac{1}{nf_{Y}(Y_{i})}\sum _{j=1}^n(V_{j}-\pi (Y_{j}))L_{h}(Y_{j}-Y_{i})\\&\quad +O(h^{2})\sqrt{nh}+O_{p}(h) \end{aligned}$$

Denote \(D_{ij}=\sum _{k=1}^q \frac{1}{nf_{Y}(Y_{i})\pi (Y_{i})}L_{h}(Y_{j}-Y_{i})\eta _{ik}(u)K_{i}(u)X_{ik}\). For given \(Y_{j}\), \(D_{ij}\) is independent of \(D_{lj}\) for \(i\ne l\). Hence, we have

$$\begin{aligned} \sum _{i=1}^nD_{ij}=\frac{1}{\pi (Y_{j})}E\left[ \sum _{k=1}^q\eta _{jk}(u)K_{j}(u)X_{jk}|Y_{j}\right] +O_{p}(n^{-1/2}). \end{aligned}$$

Then, \(I_{6}\) can be rewritten as

$$\begin{aligned} I_{6}&=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{j=1}^n\frac{(V_{j}-\pi (Y_{j}))}{\pi (Y_{j})}E[\eta _{jk}(u)K_{j}(u)X_{jk}|Y_{j}]+O_{p}\left( 1/\sqrt{nh}\right) \nonumber \\&\quad +O(h^{2})\sqrt{nh}+O_{p}(h)\nonumber \\&=W_{n2}+o_{p}(\sqrt{h})+O_{p}\left( 1/\sqrt{nh}\right) +O(h^{2})\sqrt{nh}+O_{p}(h), \end{aligned}$$
(A.13)

where

$$\begin{aligned} W_{n2}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}-\pi (Y_{j})}{\pi (Y_{j})}E[\eta _{jk}K_{j}(u)X_{jk}|Y_{j}]. \end{aligned}$$

Combining (A.11)–(A.13), we have

$$\begin{aligned} W^{*}_{n}=W_{n1}-W_{n2}+E[W_{n}]+O_{p}\left( 1/\sqrt{nh}\right) +O(h^{2})\sqrt{nh}+O_{p}(h). \end{aligned}$$

By Lemma 2, we complete the proof of Lemma 4. \(\square \)

Proof of Theorem 3.1

When the selection probability is known, denote by \({\hat{\beta }}_{WCQR}^{1}\) and \({\hat{c}}_{WCQRk}^{1}\), \((k=1,...,q)\), the estimators of \(\beta _{0}\) and \(c_{k}\), \((k=1,...,q)\), obtained in the first iteration of the algorithm proposed in Sect. 2.1. Then, \({\hat{\beta }}_{WCQR}^{1}\) and \({\hat{c}}_{WCQRk}^{1}\), \((k=1,...,q)\), are the minimizers of the following loss function:

$$\begin{aligned}&\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}\rho _{\tau _{k}}\Big [Y_{i}-c_{k}-{\hat{g}}_{WCQR}(X^{T}_{i}{\hat{\beta }}^{0},{\hat{\beta }}^{0})\nonumber \\&\quad -{\hat{g}}'_{WCQR}(X_{i}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})(X_{i}^{T}\beta -X_{i}^{T}{\hat{\beta }}^{0})\Big ]. \end{aligned}$$
(A.14)

Let \({\hat{\zeta }}_{WCQR}=\sqrt{n}({\hat{\beta }}_{WCQR}^{1}-\beta _{0})\), \({\hat{v}}_{WCQRk}=\sqrt{n}({\hat{c}}_{WCQRk}^{1}-c_{k})\), \(r_{ik}=c_{k}+{\hat{g}}_{WCQR}(X^{T}_{i}{\hat{\beta }}^{0},{\hat{\beta }}^{0})-g(X^{T}_{i}\beta _{0})+{\hat{g}}'_{WCQR}(X_{i}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})(X_{i}^{T}\beta _{0}-X_{i}^{T}{\hat{\beta }}^{0})\) and \(N_{i}={\hat{g}}'_{WCQR}(X_{i}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})X_{i}\). Then, \({\hat{\zeta }}_{WCQR}\) and \({\hat{v}}_{WCQRk}\), \((k=1,...,q)\) minimize the following:

$$\begin{aligned}&Q_{n}(v_{1},...,v_{q},\zeta ,\pi (Y_{i}))\nonumber \\&\quad =\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}\rho _{\tau _{k}}(\epsilon _{i}-r_{ik}-\frac{1}{\sqrt{n}}(N^{T}_{i}\zeta +v_{k}))\nonumber \\&\quad \quad -\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}\rho _{\tau _{k}}(\epsilon _{i}-r_{ik})\nonumber \\&\quad =\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}[I(\epsilon _{i}< r_{ik})-\tau _{k}]N^{T}_{i}\zeta \nonumber \\&\quad \quad +\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}[I(\epsilon _{i}< r_{ik})-\tau _{k}]v_{k} \nonumber \\&\quad \quad +\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\int _{0}^{\frac{1}{\sqrt{n}}(N^{T}_{i}\zeta +v_{k})}[I(\epsilon _{i}\le r_{ik}+t)-I(\epsilon _{i}\le r_{ik})]dt\nonumber \\&\quad =M^{T}_{n}\zeta +\frac{1}{2}\zeta ^{T}\Gamma \zeta +\sum _{k=1}^qz_{nk}v_{k}+\frac{1}{2}\sum _{k=1}^qf_{\epsilon }(c_{k})v_{k}^{2}+o_{p}(1), \end{aligned}$$
(A.15)

where

$$\begin{aligned} M_{n}&=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}[I(\epsilon _{i}< r_{ik})-\tau _{k}]N_{i},\\ z_{nk}&=\frac{1}{\sqrt{n}}\sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}[I(\epsilon _{i}< r_{ik})-\tau _{k}] \end{aligned}$$

and

$$\begin{aligned} \Gamma =cE[g'(X^{T}\beta _{0})^{2}XX^{T}]. \end{aligned}$$
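To make \(\Gamma \) concrete, it can be approximated by Monte Carlo once a model is fixed. A hedged sketch follows, taking \(g=\sin \), \(\epsilon \sim N(0,0.3^{2})\) and a specific \(\beta _{0}\) purely as illustrative assumptions.

```python
# Monte Carlo approximation of Gamma = c * E[g'(X'beta_0)^2 X X'], with
# c = sum_k f_eps(c_k) and c_k the tau_k-quantile of the error distribution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
q = 5
taus = np.arange(1, q + 1) / (q + 1)
c_k = norm.ppf(taus, scale=0.3)        # error quantiles for eps ~ N(0, 0.3^2)
c = norm.pdf(c_k, scale=0.3).sum()     # c = sum_k f_eps(c_k)

beta0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
X = rng.normal(size=(200_000, 2))
gp = np.cos(X @ beta0)                 # g'(.) = cos(.) for g = sin
W = gp[:, None] * X                    # rows g'(X_i'beta_0) X_i
Gamma = c * (W.T @ W) / X.shape[0]     # empirical c * E[g'^2 X X']
print(Gamma)
```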

Letting \(M_{n1}=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}[I(\epsilon _{i}< c_{k})-\tau _{k}]N_{i}\), we have \(Var(M_{n}-M_{n1}|{\hat{\chi }}^{0},X)=o_{p}(1)\). Then,

$$\begin{aligned} M_{n}&=M_{n1}+E[M_{n}|{\hat{\chi }}^{0},X]+o_{p}(1)\nonumber \\&=M_{n1}+\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{i=1}^nf_{\epsilon }(c_{k})(r_{ik}-c_{k})N_{i}+o_{p}(1). \end{aligned}$$
(A.16)

With the help of Lemma 3 and Condition (C9), we have

$$\begin{aligned} r_{ik}&=g(X^{T}_{i}{\hat{\beta }}^{0})+c_{k}-g'(X^{T}_{i}{\hat{\beta }}^{0})E(X_{i}|X^{T}_{i}{\hat{\beta }}^{0})({\hat{\beta }}^{0}-\beta _{0})+\frac{h^{2}}{2}g''(X^{T}_{i}{\hat{\beta }}^{0})\mu _{2}\\&\quad -\frac{1}{q}\sum _{k=1}^q\frac{1}{nhf_{U_{0}}(X^{T}_{i}{\hat{\beta }}^{0})f_{\epsilon }(c_{k})}\sum _{j=1}^n \frac{V_{j}}{\pi (Y_{j})}\eta _{jk}K_{j}(X^{T}_{i}{\hat{\beta }}^{0})\\&\quad +O(h^{2}\delta _{\beta }+\delta ^{2}_{\beta }+h^{3})-g(X^{T}_{i}\beta _{0})\\&\quad +[g'(X_{i}^{T}{\hat{\beta }}^{0})-O(\delta _{n}/h)+O(\delta _{\beta }+h^{2})](X_{i}^{T}\beta _{0}-X_{i}^{T}{\hat{\beta }}^{0})\\&=c_{k}-g'(X^{T}_{i}{\hat{\beta }}^{0})E(X_{i}|X^{T}_{i}{\hat{\beta }}^{0})({\hat{\beta }}^{0}-\beta _{0})\\&\quad -\frac{1}{q}\sum _{k=1}^q\frac{1}{nhf_{U_{0}}(X^{T}_{i}{\hat{\beta }}^{0})f_{\epsilon }(c_{k})}\sum _{j=1}^n \frac{V_{j}}{\pi (Y_{j})}\eta _{jk}K_{j}(X^{T}_{i}{\hat{\beta }}^{0})+o_{p}(n^{-1/2}). \end{aligned}$$

Thus, we have

$$\begin{aligned}&\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{i=1}^nf_{\epsilon }(c_{k})(r_{ik}-c_{k})N_{i}\nonumber \\&=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{i=1}^nf_{\epsilon }(c_{k})[-g'(X^{T}_{i}{\hat{\beta }}^{0})E(X_{i}|X^{T}_{i}{\hat{\beta }}^{0})({\hat{\beta }}^{0}-\beta _{0})\nonumber \\&\quad -\frac{1}{nhf_{U_{0}}(X^{T}_{i}{\hat{\beta }}^{0})f_{\epsilon }(c_{k})}\sum _{j=1}^n \frac{V_{j}}{\pi (Y_{j})}\eta _{jk}K_{j}(X^{T}_{i}{\hat{\beta }}^{0})]N_{i}\nonumber \\&=-\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{i=1}^nf_{\epsilon }(c_{k})N_{i}g'(X^{T}_{i}{\hat{\beta }}^{0})E(X_{i}|X^{T}_{i}{\hat{\beta }}^{0})({\hat{\beta }}^{0}-\beta _{0})\nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{i=1}^n\sum _{j=1}^n \frac{N_{i}}{nhf_{U_{0}}(X^{T}_{i}{\hat{\beta }}^{0})} \frac{V_{j}}{\pi (Y_{j})}\eta _{jk}K_{j}(X^{T}_{i}{\hat{\beta }}^{0})+o_{p}(1)\nonumber \\&=-I_{7}-I_{8}+o_{p}(1). \end{aligned}$$
(A.17)

For \(I_{7}\), we have

$$\begin{aligned}&I_{7}=\sqrt{n}C_{0}({\hat{\beta }}^{0}-\beta _{0})+o_{p}(1), \end{aligned}$$
(A.18)

where \(C_{0}=cE[g'(X^{T}\beta _{0})^2E(X|X^{T}\beta _{0})E(X|X^{T}\beta _{0})^{T}]\) and c is defined in the proof of Lemma 3.

Considering \(I_{8}\), we have

$$\begin{aligned} I_{8}&=\frac{1}{\sqrt{n}nh}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}}{\pi (Y_{j})}\eta _{jk} \sum _{i=1}^n\frac{N_{i}}{f_{U_{0}}(X^{T}_{i}{\hat{\beta }}^{0})} K_{j}(X^{T}_{i}{\hat{\beta }}^{0})\nonumber \\&=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}}{\pi (Y_{j})}\eta _{jk}{\hat{g}}'(X_{j}^{T}{\hat{\beta }}^{0})E[X_{j}|X^{T}_{j}\beta _{0}]+o_{p}(1)\nonumber \\&=C_{n}+o_{p}(1), \end{aligned}$$
(A.19)

where \(C_{n}=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}}{\pi (Y_{j})}\eta _{jk}{\hat{g}}'(X_{j}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})E[X_{j}|X^{T}_{j}\beta _{0}]\).

Combining (A.15)–(A.19), we obtain

$$\begin{aligned} Q_{n}(v_{1},...,v_{q},\zeta ,\pi (Y_{i}))&=[M_{n1}-C_{n}-\sqrt{n}C_{0}({\hat{\beta }}^{0}-\beta _{0})]\zeta +\frac{1}{2}\zeta ^{T}\Gamma \zeta \\&\quad +\sum _{k=1}^qz_{nk}v_{k}+\frac{1}{2}\sum _{k=1}^qf_{\epsilon }(c_{k})v_{k}^{2}+o_{p}(1). \end{aligned}$$

By Lemma 2, we have

$$\begin{aligned} {\hat{\zeta }}_{WCQR}=-\Gamma ^{-1}[M_{n1}-C_{n}]+\sqrt{n}\Gamma ^{-1}C_{0}({\hat{\beta }}^{0}-\beta _{0})+o_{p}(1). \end{aligned}$$

Recalling the definition of \({\hat{\zeta }}_{WCQR}\), we have

$$\begin{aligned}&({\hat{\beta }}_{WCQR}^{1}-\beta _{0})=-\frac{1}{\sqrt{n}}\Gamma ^{-1}[M_{n1}-C_{n}]+\Gamma ^{-1}C_{0}({\hat{\beta }}^{0}-\beta _{0})+o_{p}\left( 1/\sqrt{n}\right) . \end{aligned}$$
(A.20)

Denote \({\tilde{\Gamma }}=\Gamma ^{-\frac{1}{2}} C_{0}\Gamma ^{-\frac{1}{2}}\), and let \({\hat{\beta }}_{WCQR}^{k}\) be the estimator of \(\beta _{0}\) obtained in the kth iteration. For each k, replacing \({\hat{\beta }}_{WCQR}^{1}\) and \({\hat{\beta }}^{0}\) by \({\hat{\beta }}_{WCQR}^{k+1}\) and \({\hat{\beta }}_{WCQR}^{k}\), respectively, Equation (A.20) still holds. Letting \(\vartheta ^{k}=\Gamma ^{\frac{1}{2}}({\hat{\beta }}_{WCQR}^{k}-\beta _{0})\), we have

$$\begin{aligned}&\vartheta ^{k+1}=-\frac{1}{\sqrt{n}}\Gamma ^{-\frac{1}{2}}[M_{n1}-C_{n}]+{\tilde{\Gamma }}\vartheta ^{k}+o_{p}\left( 1/\sqrt{n}\right) . \end{aligned}$$

Arguing as in Xia and Härdle (2006), we can establish the convergence of the algorithm. For sufficiently large k, we have

$$\begin{aligned}&\Gamma ^{\frac{1}{2}}({\hat{\beta }}_{WCQR}-\beta _{0})=-\frac{1}{\sqrt{n}}\Gamma ^{-\frac{1}{2}}[M_{n1}-C_{n}]+{\tilde{\Gamma }}\Gamma ^{\frac{1}{2}}({\hat{\beta }}_{WCQR}-\beta _{0})+o_{p}\left( 1/\sqrt{n}\right) . \end{aligned}$$

Then,

$$\begin{aligned}&(\Gamma -\Gamma ^{\frac{1}{2}}{\tilde{\Gamma }}\Gamma ^{\frac{1}{2}})({\hat{\beta }}_{WCQR}-\beta _{0})=-\frac{1}{\sqrt{n}}[M_{n1}-C_{n}]+o_{p}(1/\sqrt{n}). \end{aligned}$$
(A.21)

Recalling the definition of \({\tilde{\Gamma }}\), (A.21) can be rewritten as

$$\begin{aligned}&\sqrt{n}D({\hat{\beta }}_{WCQR}-\beta _{0})=-[M_{n1}-C_{n}]+o_{p}(1), \end{aligned}$$
(A.22)

where \(D=\Gamma -C_{0}\).

By the central limit theorem, we know

$$\begin{aligned}&-[M_{n1}-C_{n}]\xrightarrow {d}N(\mathbf 0 , D_{1}), \end{aligned}$$
(A.23)

where \(D_{1}\) is given in Theorem 3.1.

Combining Equations (A.22) and (A.23), we obtain the asymptotic normality of \({\hat{\beta }}_{WCQR}\). \(\square \)

Proof of Theorem 3.2

Denote by \({\hat{\beta }}_{NWCQR}^{1}\) and \({\hat{c}}_{NWCQRk}^{1}\), \((k=1,...,q)\), the estimators obtained in the first iteration when the estimated selection probability is used. \({\hat{\beta }}_{NWCQR}^{1}\) and \({\hat{c}}_{NWCQRk}^{1}\), \((k=1,...,q)\), are obtained by minimizing the following objective function:

$$\begin{aligned}&\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}\rho _{\tau _{k}}{[}Y_{i}-c_{k}-{\hat{g}}_{NWCQR}(X^{T}_{i}{\hat{\beta }}^{0},{\hat{\beta }}^{0})\nonumber \\&\qquad -{\hat{g}}'_{NWCQR}(X_{i}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})(X_{i}^{T}\beta -X_{i}^{T}{\hat{\beta }}^{0})]. \end{aligned}$$
(A.24)

Let \({\hat{\zeta }}_{NWCQR}=\sqrt{n}({\hat{\beta }}_{NWCQR}^{1}-\beta _{0})\), \({\hat{v}}_{NWCQRk}=\sqrt{n}({\hat{c}}_{NWCQRk}^{1}-c_{k})\), \(N^{*}_{i}={\hat{g}}'_{NWCQR}(X_{i}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})X_{i}\) and \(r^{*}_{ik}=c_{k}+{\hat{g}}_{NWCQR}(X^{T}_{i}{\hat{\beta }}^{0},{\hat{\beta }}^{0})-g(X^{T}_{i}\beta _{0})+{\hat{g}}'_{NWCQR}(X_{i}^{T}{\hat{\beta }}^{0},{\hat{\beta }}^{0})(X_{i}^{T}\beta _{0}-X_{i}^{T}{\hat{\beta }}^{0})\), \((k=1,...,q)\). Then, \({\hat{\zeta }}_{NWCQR}\) and \({\hat{v}}_{NWCQRk}\), \((k=1,...,q)\), minimize the following:

$$\begin{aligned}&Q_{n}(v_{1},...,v_{q},\zeta ,{\hat{\pi }}(Y_{i}))\nonumber \\&\quad =\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}\rho _{\tau _{k}}(\epsilon _{i}-r^{*}_{ik}-\frac{1}{\sqrt{n}}(N^{*T}_{i}\zeta +v_{k}))\nonumber \\&\quad \quad -\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}\rho _{\tau _{k}}(\epsilon _{i}-r^{*}_{ik})\nonumber \\&\quad =\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*T}_{i}\zeta \nonumber \\&\quad \quad +\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]v_{k} \nonumber \\&\quad \quad +\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{{\hat{\pi }}(Y_{i})}\int _{0}^{\frac{1}{\sqrt{n}}(N^{*T}_{i}\zeta +v_{k})}[I(\epsilon _{i}\le r^{*}_{ik}+t)-I(\epsilon _{i}\le r^{*}_{ik})]dt\nonumber \\&\quad ={\hat{M}}^{*T}_{n}\zeta +\frac{1}{2}\zeta ^{T}\Gamma \zeta +\sum _{k=1}^q{\hat{z}}^{*}_{nk}v_{k}+\frac{1}{2}\sum _{k=1}^qf_{\epsilon }(c_{k})v_{k}^{2}+o_{p}(1), \end{aligned}$$
(A.25)

where

$$\begin{aligned} {\hat{M}}^{*}_{n}&=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i},\\ {\hat{z}}^{*}_{nk}&=\frac{1}{\sqrt{n}}\sum _{i=1}^n \frac{V_{i}}{{\hat{\pi }}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}] \end{aligned}$$

and \(\Gamma \) is given in the proof of Theorem 3.1. Note that

$$\begin{aligned} {\hat{M}}^{*}_{n}&=M^{*}_{n}-\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi ^{2}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}+o_{p}(1) \nonumber \\&=M^{*}_{n}-\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{[V_{i}-\pi (Y_{i})][{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi ^{2}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i} \nonumber \\&\quad -\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi (Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}+o_{p}(1) \nonumber \\&=M^{*}_{n}-I_{9}-I_{10}+o_{p}(1), \end{aligned}$$
(A.26)

where

$$\begin{aligned} M^{*}_{n}&=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}}{\pi (Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i},\\ I_{9}&=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{[V_{i}-\pi (Y_{i})][{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi ^{2}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}, \end{aligned}$$

and

$$\begin{aligned}&I_{10}=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{[{\hat{\pi }}(Y_{i})-\pi (Y_{i})]}{\pi (Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}. \end{aligned}$$

Using Lemma 4 and Conditions (C8)–(C9), and following the proof of Theorem 3.1, we have

$$\begin{aligned}&M^{*}_{n}=M_{n1}-C_{n}+C_{n1}-\sqrt{n}C_{0}({\hat{\beta }}^{0}-\beta _{0})+o_{p}(1), \end{aligned}$$
(A.27)

where

$$\begin{aligned}&C_{n1}=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}-\pi (Y_{j})}{\pi (Y_{j})}E\{\eta _{jk}g^{'}(X^{T}_{j}\beta _{0})E[X_{j}|X^{T}_{j}\beta _{0}]|Y_{j}\}. \end{aligned}$$

Considering \(I_{9}\), we have

$$\begin{aligned} I_{9}&=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{V_{i}-\pi (Y_{i})}{\pi ^{2}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}\nonumber \\&\qquad \left\{ \frac{1}{nf_{Y}(Y_{i})}\sum _{j=1}^n[V_{j}-\pi (Y_{j})]L_{h}(Y_{j}-Y_{i})+O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) \right\} \nonumber \\&\quad =\frac{1}{\sqrt{n}n}\sum _{k=1}^q \sum _{i\ne j}^n \frac{[V_{i}-\pi (Y_{i})][V_{j}-\pi (Y_{j})]}{\pi ^{2}(Y_{i})f_{Y}(Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}L_{h}(Y_{j}-Y_{i})\nonumber \\&\qquad +\frac{1}{\sqrt{n}n}\sum _{k=1}^q \sum _{i=1}^n \frac{[V_{i}-\pi (Y_{i})]^2}{\pi ^{2}(Y_{i})f_{Y}(Y_{i})}\left[ I\left( \epsilon _{i}< r^{*}_{ik}\right) -\tau _{k}\right] \nonumber \\&\qquad N^{*}_{i}L_{h}(0)+O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) \nonumber \\&\quad =O_{p}\left( 1/\sqrt{nh}\right) +O_{p}\left( 1/\sqrt{nh^{2}}\right) +O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) =o_{p}(1). \end{aligned}$$
(A.28)

Using Conditions (C8)–(C9), we have

$$\begin{aligned} I_{10}&=\frac{1}{\sqrt{n}}\sum _{k=1}^q \sum _{i=1}^n \frac{1}{\pi (Y_{i})}[I(\epsilon _{i}< r^{*}_{ik})-\tau _{k}]N^{*}_{i}\nonumber \\&\qquad \left\{ \frac{1}{nf_{Y}(Y_{i})}\sum _{j=1}^n[V_{j}-\pi (Y_{j})]L_{h}(Y_{j}-Y_{i})+O(h^{2})+O_{p}\left( 1/\sqrt{nh^{-1}}\right) \right\} \nonumber \\&=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}-\pi (Y_{j})}{\pi (Y_{j})}E\left\{ [I(\epsilon _{j}< r^{*}_{jk})-\tau _{k}]N^{*}_{j}|Y_{j}\right\} +O\left( \sqrt{n}h^{2}\right) +o_{p}(1)\nonumber \\&=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}-\pi (Y_{j})}{\pi (Y_{j})}E\{\eta _{jk}g^{'}(X^{T}_{j}\beta _{0})X_{j}|Y_{j}\}+O\left( \sqrt{n}h^{2}\right) +o_{p}(1)\nonumber \\&=M_{n2}+o_{p}(1), \end{aligned}$$
(A.29)

where

$$\begin{aligned}&M_{n2}=\frac{1}{\sqrt{n}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}-\pi (Y_{j})}{\pi (Y_{j})}E\{\eta _{jk}g^{'}(X^{T}_{j}\beta _{0})X_{j}|Y_{j}\}. \end{aligned}$$

Combining (A.26)–(A.29), we have

$$\begin{aligned}&{\hat{M}}^{*}_{n}=M_{n1}-C_{n}-M_{n2}+C_{n1}-\sqrt{n}C_{0}({\hat{\beta }}^{0}-\beta _{0})+o_{p}(1). \end{aligned}$$
(A.30)

By the central limit theorem, we have

$$\begin{aligned}&-[M_{n1}-C_{n}-M_{n2}+C_{n1}]\xrightarrow {d}N(\mathbf 0 ,D_{2}), \end{aligned}$$
(A.31)

where \(D_{2}\) is given in Theorem 3.2. Following the proof of Theorem 3.1, we obtain the asymptotic normality of \({\hat{\beta }}_{NWCQR}\). \(\square \)

Proof of Theorem 3.3

By Theorem 3.1, we can see that \({\hat{\beta }}_{WCQR}\) is a \(\sqrt{n}\)-consistent estimator of \(\beta _{0}\). Following the proof of Lemma 3, the asymptotic normality of \({\hat{g}}(u,{\hat{\beta }}_{WCQR})\) is obtained. \(\square \)

Proof of Theorem 3.4

By Theorem 3.2, we can see that \({\hat{\beta }}_{NWCQR}\) is a \(\sqrt{n}\)-consistent estimator of \(\beta _{0}\). Suppose that Conditions (C1)–(C7) and (C10) hold. Then, by Lemma 4, we have

$$\begin{aligned}&\sqrt{nh}\left( \begin{array}{c} {\hat{a}}_{1}(u,{\hat{\beta }}_{NWCQR})-g(u)-c_{1}-\frac{h^{2}}{2}g''(u)\mu _{2}+O(h^{2})\\ ... \\ {\hat{a}}_{q}(u,{\hat{\beta }}_{NWCQR})-g(u)-c_{q}-\frac{h^{2}}{2}g''(u)\mu _{2}+O(h^{2})\\ \end{array} \right) \\&\quad =-\frac{C^{-1}}{f_{U_{0}}(u)}[W_{n1*}-W_{n2*}]+o_{p}(1), \end{aligned}$$

where C is defined in Lemma 3,

$$\begin{aligned}&W_{n1*}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q \sum _{i=1}^n\frac{V_{i}}{\pi (Y_{i})}\eta _{ik}K\left( \frac{X^{T}_{i}{\hat{\beta }}_{NWCQR}-u}{h}\right) e_{k} \end{aligned}$$

and

$$\begin{aligned}&W_{n2*}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{j=1}^n\frac{V_{j}-\pi (Y_{j})}{\pi (Y_{j})}E[\eta _{jk}K\left( \frac{X^{T}_{j}{\hat{\beta }}_{NWCQR}-u}{h}\right) e_{k} |Y_{j}]. \end{aligned}$$

By calculating the expectation and variance, we obtain the asymptotic normality of \({\hat{g}}(u,{\hat{\beta }}_{NWCQR})\). \(\square \)

About this article

Cite this article

Liu, H., Yang, H. & Peng, C. Weighted composite quantile regression for single index model with missing covariates at random. Comput Stat 34, 1711–1740 (2019). https://doi.org/10.1007/s00180-019-00886-y