
Weighted quantile regression and testing for varying-coefficient models with randomly truncated data

  • Original Paper
  • AStA Advances in Statistical Analysis

Abstract

This paper develops a varying-coefficient approach to the estimation and testing of regression quantiles under randomly truncated data. To handle the truncated data, random weights are introduced and weighted quantile regression (WQR) estimators of the nonparametric functions are proposed. To achieve desirable efficiency properties, we further develop a weighted composite quantile regression (WCQR) estimation method for the nonparametric functions in varying-coefficient models. Asymptotic properties of both the proposed WQR and WCQR estimators are established. In addition, we propose a novel bootstrap-based test procedure to test whether the nonparametric functions in varying-coefficient quantile models can be specified by certain functional forms. The performance of the proposed estimators and the test procedure is investigated through simulation studies and a real data example.



References

  • Andriyana, Y., Gijbels, I., Verhasselt, A.: P-spline quantile regression estimation in varying coefficient models. Test 23, 153–194 (2014)

  • Andriyana, Y., Gijbels, I.: Quantile regression in heteroscedastic varying coefficient models. AStA Adv. Stat. Anal. 101, 151–176 (2017)

  • Cai, Z.W., Fan, J.Q., Yao, Q.W.: Functional-coefficient regression models for nonlinear time series. J. Am. Stat. Assoc. 95, 941–956 (2000)

  • Fan, J.Q., Huang, T.: Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli 11, 1031–1057 (2005)

  • Guo, J., Tian, M.Z., Zhu, K.: New efficient and robust estimation in varying-coefficient models with heteroscedasticity. Stat. Sin. 22, 1075–1101 (2012)

  • Hastie, T., Tibshirani, R.: Varying-coefficient models. J. R. Stat. Soc. B 55, 757–796 (1993)

  • He, S.Y., Yang, G.L.: Estimation of the truncation probability in the random truncation model. Ann. Stat. 26, 1011–1027 (1998)

  • He, S.Y., Yang, G.L.: Estimation of regression parameters with left truncated data. J. Stat. Plan. Inference 117, 99–122 (2003)

  • Honda, T.: Quantile regression in varying coefficient models. J. Stat. Plan. Inference 121, 113–125 (2004)

  • Jiang, R., Zhou, Z.G., Qian, W.M., Chen, Y.: Two step composite quantile regression for single-index models. Comput. Stat. Data Anal. 64, 180–191 (2013)

  • Jiang, R., Qian, W.M., Zhou, Z.G.: Weighted composite quantile regression for single-index models. J. Multivar. Anal. 148, 34–48 (2016)

  • Kai, B., Li, R.Z., Zou, H.: Local composite quantile regression smoothing: an efficient and safe alternative to local polynomial regression. J. R. Stat. Soc. B 72, 49–69 (2010)

  • Kai, B., Li, R.Z., Zou, H.: New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann. Stat. 39, 305–332 (2011)

  • Kim, M.O.: Quantile regression with varying coefficients. Ann. Stat. 35, 92–108 (2007)

  • Knight, K.: Limiting distributions for \(l_1\) regression estimators under general conditions. Ann. Stat. 26, 755–770 (1998)

  • Koenker, R.: Quantile Regression. Econometric Society Monographs. Cambridge University Press, Cambridge (2005)

  • Koenker, R., Bassett, G.: Regression quantiles. Econometrica 46, 33–50 (1978)

  • Lemdani, M., Ould-Saïd, E., Poulin, P.: Asymptotic properties of a conditional quantile estimator with randomly truncated data. J. Multivar. Anal. 100, 546–559 (2009)

  • Liang, H.Y., Baek, J.I.: Asymptotic normality of conditional density estimation with left-truncated and dependent data. Stat. Papers 57, 1–20 (2016)

  • Liang, H.Y., Liu, A.A.: Kernel estimation of conditional density with truncated, censored and dependent data. J. Multivar. Anal. 120, 40–58 (2013)

  • Luo, S., Mei, C., Zhang, C.Y.: Smoothed empirical likelihood for quantile regression models with response data missing at random. AStA Adv. Stat. Anal. 101, 95–116 (2017)

  • Lv, Y.H., Zhang, R.Q., Zhao, W.H., Liu, J.C.: Quantile regression and variable selection of partial linear single-index model. Ann. Inst. Stat. Math. 67, 375–409 (2015)

  • Lynden-Bell, D.: A method of allowing for known observational selection in small samples applied to 3CR quasars. Mon. Not. R. Astron. Soc. 155, 95–118 (1971)

  • Mack, Y.P., Silverman, B.W.: Weak and strong uniform consistency of kernel regression estimators. Probab. Theory Relat. Fields 61, 405–415 (1982)

  • Ould-Saïd, E., Lemdani, M.: Asymptotic properties of a nonparametric regression function estimator with randomly truncated data. Ann. Inst. Stat. Math. 58, 357–378 (2006)

  • Stute, W., Wang, J.L.: The central limit theorem under random truncation. Bernoulli 14, 604–622 (2008)

  • Woodroofe, M.: Estimating a distribution function with truncated data. Ann. Stat. 13, 163–177 (1985)

  • Xu, H.X., Chen, Z.L., Wang, J.F., Fan, G.L.: Quantile regression and variable selection for partially linear model with randomly truncated data. Stat. Papers (2017). https://doi.org/10.1007/s00362-016-0867-3

  • Yu, K., Jones, M.C.: Local linear quantile regression. J. Am. Stat. Assoc. 93, 228–237 (1998)

  • Zhou, W.H.: A weighted quantile regression for randomly truncated data. Comput. Stat. Data Anal. 55, 554–566 (2011)

  • Zou, H., Yuan, M.: Composite quantile regression and the oracle model selection theory. Ann. Stat. 36, 1108–1126 (2008)


Acknowledgements

The authors thank the editor, an associate editor and the reviewers for their constructive comments, which have led to a substantial improvement of an earlier version of this article. This research was supported by the National Natural Science Foundation of China (11371321, 11401006), the China Postdoctoral Science Foundation (2017M611083), the Project of Humanities and Social Science Foundation of Ministry of Education (15YJC910006), the National Statistical Science Research Program of China (2017LY51, 2016LY80, 2016LZ05), Zhejiang Provincial Natural Science Foundation (LY18A010007), Zhejiang Provincial Key Research Base for Humanities and Social Science Research (Statistics 1020XJ3316004G) and First Class Discipline of Zhejiang - A (Zhejiang Gongshang University - Statistics).

Author information

Correspondence to Zhen-Long Chen.

Appendix

Lemma A.1

Let \((X_1,Y_1), \ldots , (X_n,Y_n)\) be independent and identically distributed (i.i.d.) random vectors. Assume that \(E|Y|^r<\infty \) and \(\sup _x\int |y|^rf(x,y)\,{\hbox {d}}y<\infty \), where f denotes the joint density of \((X,Y)\). Let K be a bounded positive function with a bounded support, satisfying a Lipschitz condition. Then

$$\begin{aligned} \sup _{x} \Bigg |\frac{1}{n}\sum ^n_{i=1}\big [K_h(X_i-x)Y_i-E(K_h(X_i-x)Y_i)\big ]\Bigg |=O_p\Bigg (\frac{\log ^{1/2}(1/h)}{\sqrt{nh}}\Bigg ), \end{aligned}$$

provided that \(n^{2\varepsilon -1}h\rightarrow \infty \) for some \(\varepsilon <1-r^{-1}\).

Lemma A.1 follows from the result by Mack and Silverman (1982).
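As a numerical illustration of Lemma A.1 (our own sketch, not part of the paper), the code below compares the supremum deviation of a kernel average from its expectation with the rate \(\log ^{1/2}(1/h)/\sqrt{nh}\); the Epanechnikov kernel, the regression design, and the bandwidth \(h=n^{-1/5}\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def epanechnikov(t):
    # bounded, compactly supported, Lipschitz kernel
    return 0.75 * np.maximum(1.0 - t**2, 0.0)

def sup_deviation(n, h, grid):
    # sup_x |(1/n) sum_i K_h(X_i - x) Y_i - E K_h(X - x) Y| over a grid of x
    X = rng.uniform(0, 1, n)
    Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.2, n)
    W = epanechnikov((X[None, :] - grid[:, None]) / h) / h
    est = (W * Y[None, :]).mean(axis=1)
    # approximate E K_h(X - x)Y by a large independent Monte Carlo sample
    # (the noise has mean zero, so only the signal part enters the mean)
    Xm = rng.uniform(0, 1, 100_000)
    Wm = epanechnikov((Xm[None, :] - grid[:, None]) / h) / h
    mean = (Wm * np.sin(2 * np.pi * Xm)[None, :]).mean(axis=1)
    return np.max(np.abs(est - mean))

grid = np.linspace(0.1, 0.9, 41)
devs, rates = [], []
for n in (500, 5_000, 50_000):
    h = n ** (-0.2)
    devs.append(sup_deviation(n, h, grid))
    rates.append(np.sqrt(np.log(1 / h) / (n * h)))
print([f"{d:.4f} vs rate {r:.4f}" for d, r in zip(devs, rates)])
```

The supremum deviation shrinks with n and stays within a constant multiple of the claimed rate.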

Lemma A.2

(Lv et al. 2015) Suppose \(A_n(s)\) is convex and can be represented as \(\frac{1}{2}s^\mathrm{T}Vs+U_n^\mathrm{T}s+C_n+r_n(s)\), where V is symmetric and positive definite, \(U_n\) is stochastically bounded, \(C_n\) is arbitrary and \(r_n(s)\) goes to zero in probability for each s. Then \(\alpha _n\), the argmin of \(A_n\), is only \(o_p(1)\) away from \(\beta _n=-V^{-1}U_n\), the argmin of \(\frac{1}{2}s^\mathrm{T}Vs+U_n^\mathrm{T}s+C_n\). If also \(U_n {\mathop {\rightarrow }\limits ^{\mathcal {D}}}U\), then \(\alpha _n{\mathop {\rightarrow }\limits ^{\mathcal {D}}}-V^{-1}U\).
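Lemma A.2 can also be illustrated numerically. In the sketch below (our own; the matrix \(V\), the vector playing the role of \(U_n\), and the remainder \(r_n(s)\) are arbitrary choices satisfying the lemma's conditions), the minimizer of the perturbed convex objective lands close to \(-V^{-1}U_n\).

```python
import numpy as np

# Hypothetical instance of Lemma A.2: A_n(s) = (1/2) s'Vs + U's + r_n(s)
V = np.array([[2.0, 0.3], [0.3, 1.0]])   # symmetric, positive definite
U = np.array([0.5, -1.0])                # stands in for U_n
n = 10_000                               # "sample size" driving r_n -> 0

def grad_A_n(s):
    # gradient of the quadratic part plus that of r_n(s) = (s's)^{3/2}/sqrt(n)
    return V @ s + U + 3.0 * np.sqrt(s @ s) * s / np.sqrt(n)

s = np.zeros(2)
for _ in range(5000):                    # plain gradient descent
    s -= 0.1 * grad_A_n(s)

alpha_n = s                              # (approximate) argmin of A_n
beta_n = -np.linalg.solve(V, U)          # argmin of the quadratic part
print(alpha_n, beta_n)
```

With n large, the gap \(\Vert \alpha _n-\beta _n\Vert \) is of the order of the remainder's gradient, i.e., small.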

Let \(\eta ^{*}_{i,k}=I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le 0)-\tau _k, ~\eta _{i,k}=I(\varepsilon _i-c_{\tau _{k}}\le 0)-\tau _k, ~\delta _n=\Big (\frac{\log (1/h)}{nh}\Big )^{1/2}\).

Proof of Theorem 2.1

The proof of Theorem 2.1 follows strategies similar to those used in the proof of Theorem 2.2, and we omit the details here.

Proof of Theorem 2.2

Recall that \(\{\hat{a}_{0,1}, \ldots , \hat{a}_{0,q}, \hat{\mathbf{a}}, \hat{b}_0, \hat{\mathbf{b}}\}\) minimizes

$$\begin{aligned} \sum _{k=1}^q\Bigg [\sum _{i=1}^n\frac{1}{G_n(Y_i)}\rho _{\tau _k}\big [Y_i-a_{0,k}-b_0(U_i-u)-X_i^\mathrm{T}\{\mathbf{a}+\mathbf{b}(U_i-u)\}\big ]K_h(U_i-u)\Bigg ]. \end{aligned}$$
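For intuition only, the weighted composite objective above can be minimized directly on simulated data. The following sketch (not the authors' code) evaluates the local linear WCQR loss at a single point \(u_0\) with \(q=3\) quantile levels and minimizes it by subgradient descent; the data-generating design, the bandwidth, and the known weight function standing in for \(1/G_n(Y_i)\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated varying-coefficient data; G below is a hypothetical known weight
# function standing in for the Lynden-Bell-type estimator G_n of the paper.
n = 2000
U = rng.uniform(0, 1, n)
X = rng.normal(0, 1, n)
eps = rng.normal(0, 0.5, n)
Y = np.sin(2 * np.pi * U) + X * (1 + U**2) + eps
G = lambda y: 0.5 + 0.5 / (1.0 + np.exp(-y))       # values in (0.5, 1)

taus = np.array([0.25, 0.5, 0.75])
u0, h = 0.5, 0.15
K = np.maximum(1 - ((U - u0) / h) ** 2, 0.0)       # Epanechnikov weights
w = K / G(Y)

def residuals(theta):
    # theta = (a_{0,1},...,a_{0,q}, a, b0, b): one residual vector per tau_k
    a0k, a, b0, b = theta[:3], theta[3], theta[4], theta[5]
    base = Y - b0 * (U - u0) - X * (a + b * (U - u0))
    return [base - a0k[k] for k in range(3)]

def objective(theta):
    # weighted composite check loss: sum_k sum_i w_i * rho_{tau_k}(r_{ik})
    return sum(np.sum(w * r * (tau - (r < 0)))
               for r, tau in zip(residuals(theta), taus))

def subgrad(theta):
    g = np.zeros(6)
    for k, (r, tau) in enumerate(zip(residuals(theta), taus)):
        psi = tau - (r < 0)                        # a subgradient of rho_tau
        g[k] = -np.sum(w * psi)
        g[3] -= np.sum(w * psi * X)
        g[4] -= np.sum(w * psi * (U - u0))
        g[5] -= np.sum(w * psi * X * (U - u0))
    return g

theta = np.zeros(6)
start = objective(theta)
for t in range(1, 4001):                           # subgradient descent
    theta -= 0.5 / (np.sum(w) * np.sqrt(t)) * subgrad(theta)
end = objective(theta)
print(f"loss {start:.1f} -> {end:.1f}, a_hat = {theta[3]:.2f}")
```

Here `theta[3]` plays the role of \(\hat{\mathbf{a}}\) and, in this design, should land near the true coefficient \(1+u_0^2=1.25\).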

Denote

$$\begin{aligned} \hat{\xi }&=\sqrt{nh}\left( \begin{array}{ccc} \hat{a}_{0,1}-\alpha _0(u)-c_{\tau _{1}}\\ \vdots \\ \hat{a}_{0,q}-\alpha _0(u)-c_{\tau _{q}}\\ \hat{\mathbf{a}}-\alpha (u)\\ h(\hat{b}_0-\alpha '_0(u))\\ h(\hat{\mathbf{b}}-\alpha '(u))\\ \end{array}\right) , \xi =\sqrt{nh}\left( \begin{array}{ccc} a_{0,1}-\alpha _0(u)-c_{\tau _{1}}\\ \vdots \\ a_{0,q}-\alpha _0(u)-c_{\tau _{q}}\\ \mathbf{a}-\alpha (u)\\ h(b_0-\alpha '_0(u))\\ h(\mathbf{b}-\alpha '(u)) \end{array}\right) ,\\ N_{i,k}&= \left( \begin{array}{c} \mathbf{e_k}\\ X_i\\ \frac{U_i-u}{h}\\ X_i \frac{U_i-u}{h} \end{array}\right) , \end{aligned}$$

\(\mathbf{e_k}\) is a q-vector with 1 at the kth position and 0 elsewhere. We write \(Y_i-a_{0,k}-b_0(U_i-u)-X_i^\mathrm{T}\{\mathbf{a}+\mathbf{b}(U_i-u)\}=\varepsilon _i-c_{\tau _{k}}+r_i(u)-N^\mathrm{T}_{i,k}\xi /\sqrt{nh}:=\varepsilon _i-c_{\tau _{k}}+r_i(u)-\Delta _{i,k}\), where \( r_i(u)=\alpha _0(U_i)-\alpha _0(u)-\alpha '_0(u)(U_i-u)+X_i^\mathrm{T}\{\alpha (U_i)-\alpha (u)-\alpha '(u)(U_i-u)\}\) and \(K_i(u)=K(\frac{U_i-u}{h})\). Then \(\hat{\xi }\) is the minimizer of

$$\begin{aligned} Q_n(\xi )=\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G_n(Y_i)}\bigg [\rho _{\tau _k}(\varepsilon _i-c_{\tau _{k}}+r_i(u)-N^\mathrm{T}_{i,k}\xi /\sqrt{nh})-\rho _{\tau _k}(\varepsilon _i-c_{\tau _{k}}+r_i(u))\bigg ]. \end{aligned}$$

Following the identity by Knight (1998),

$$\begin{aligned} \rho _\tau (u-v)-\rho _\tau (u)=-v\psi _\tau (u)+\int _0^v\big [I(u\le s)-I(u\le 0)\big ]{\hbox {d}}s, \end{aligned}$$

where \(\psi _\tau (u)=\tau -I(u\le 0)\). Then we obtain

$$\begin{aligned} Q_n(\xi )=&\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G_n(Y_i)}\Bigg [ \frac{N^\mathrm{T}_{i,k}\xi }{\sqrt{nh}}\{I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le 0)-\tau _k\}\\&+\int _{0}^{\Delta _{i,k}}\big [I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le z)-I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le 0)\big ]{\hbox {d}}z\Bigg ]\\ =&\,\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G_n(Y_i)}N_{i,k}^\mathrm{T}\xi \{I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le 0)-\tau _k\}\\&+\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G_n(Y_i)}\int _{0}^{\Delta _{i,k}}\big [I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le z)-I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le 0)\big ]{\hbox {d}}z\\ :=\,&W_{n,k}^\mathrm{T}(u)\xi +\sum _{k=1}^qB_{n,k}(\xi ), \end{aligned}$$

where \(W_{n,k}(u)=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G_n(Y_i)}\eta ^{*}_{i,k}N_{i,k}, \eta ^{*}_{i,k}=I(\varepsilon _i-c_{\tau _{k}}+r_i(u)\le 0)-\tau _k\).

First, we show that \(E\{\sum _{k=1}^qB_{n,k}(\xi )\}=\frac{1}{2}\xi ^\mathrm{T}\frac{f_U(u)}{\theta }S(u)\xi \). Let \(\widetilde{B}_{n,k}(\xi )=\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\int _{0}^{\Delta _{i,k}}\big \{I(\varepsilon _i\le c_{\tau _{k}}-r_i(u)+z)-I(\varepsilon _i\le c_{\tau _{k}}-r_i(u))\big \}{\hbox {d}}z\), and let \(\Delta (u,x,\mu )\) and \( r_i(u,x,\mu )\) denote \(N_{i,k}^\mathrm{T}\xi /\sqrt{nh}\) and \( r_i(u)\), respectively, with \(X_i, U_i\) replaced by \(x,\mu \).

Since \(\widetilde{B}_{n,k}(\xi )\) is a summation of i.i.d. random variables of the kernel form, according to Lemma A.1, we have \(\widetilde{B}_{n,k}(\xi )=E[\widetilde{B}_{n,k}(\xi )]+O_p(\delta _n)\). The expectation of \(\widetilde{B}_{n,k}(\xi )\) is

$$\begin{aligned} E\{\widetilde{B}_{n,k}(\xi )\}=&\sum _{k=1}^q\sum _{i=1}^nE\Bigg [\frac{K_i(u)}{G(Y_i)}\int _{0}^{\Delta _{i,k}}\big \{I(\varepsilon _i\le c_{\tau _{k}}-r_i(u)+z)-I(\varepsilon _i\le c_{\tau _{k}}-r_i(u))\big \}{\hbox {d}}z\Bigg ]\\ =\,&\sum _{k=1}^q\sum _{i=1}^n\int \int \int \frac{1}{G(y)}K\left( \frac{\mu -u}{h}\right) \int _{0}^{\Delta (u,x,\mu )}\\ {}&\big [I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k}}-r_i(u,x,\mu )+z)\\&-I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k}}-r_i(u,x,\mu ))\big ]{\hbox {d}}zf^{*}(x,\mu ,y){\hbox {d}}x{\hbox {d}}\mu {\hbox {d}}y\\ =\,&\frac{1}{\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\bigg \{K_i(u)\int _{0}^{\Delta _{i,k}}\big \{I(\varepsilon _i\le c_{\tau _{k}}-r_i(u)+z)-I(\varepsilon _i\le c_{\tau _{k}}-r_i(u))\big \}{\hbox {d}}z\bigg \}\\ =\,&\frac{1}{\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\bigg \{K_i(u)\mathbb {E}\big \{\int _{0}^{\Delta _{i,k}}\big [\big \{I(\varepsilon _i\le c_{\tau _{k}}-r_i(u)+z)\\ {}&-I(\varepsilon _i\le c_{\tau _{k}}-r_i(u))\big \}{\hbox {d}}z\big \}|U\big ]\big \}\bigg \}\\ =\,&\frac{1}{\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\bigg \{K_i(u)\int _{0}^{\Delta _{i,k}}\big [F_{\varepsilon _i}(c_{\tau _{k}}-r_i(u)+z)-F_{\varepsilon _i}(c_{\tau _{k}}-r_i(u))\big ]{\hbox {d}}z\bigg \}\\ =\,&\frac{1}{\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\bigg \{K_i(u)\int _{0}^{\Delta _{i,k}}\big [f_{\varepsilon _i}(c_{\tau _{k}}-r_i(u))z+o(1)\big ]{\hbox {d}}z\bigg \}\\ =\,&\frac{1}{2\theta }\xi ^\mathrm{T}\mathbb {E}\bigg \{\frac{1}{nh}\sum _{k=1}^q\sum _{i=1}^nK_i(u)f_{\varepsilon _i}(c_{\tau _{k}}-r_i(u))N_{i,k}N_{i,k}^\mathrm{T}\bigg \}\xi +O_p(\delta _n)\\ :=\,&\frac{1}{2\theta }\xi ^\mathrm{T}S_n(u)\xi +O_p(\delta _n). \end{aligned}$$

Further, we can prove that \(\mathbb {E}\{S_n(u)\}=f_U(u)S(u)+O(h^2)\), where \(S(u)=\text{ diag }\{S_1(u),c\mu _2S_2(u)\}\), \(S_2(u)=\mathbb {E}\{(1,X^\mathrm{T})^\mathrm{T}(1,X^\mathrm{T})|U=u\}\), \(c_{\tau _{k}}=F_{\varepsilon }^{-1}(\tau _k)\),

$$\begin{aligned} S_1(u)=\mathbb {E}\bigg \{\bigg (\begin{array}{c} C~~\mathbf{c}X^\mathrm{T}\\ X^\mathrm{T}{} \mathbf{c}~~ cXX^\mathrm{T} \end{array}\bigg )|U=u\bigg \}, \end{aligned}$$

C is a \(q\times q\) diagonal matrix with \(C_{jj}=f_{\varepsilon }(c_{\tau _{j}})\), \(\mathbf{c}=(f_{\varepsilon }(c_{\tau _{1}}), \ldots , f_{\varepsilon }(c_{\tau _{q}}))^\mathrm{T}\), \(c=\sum ^q_{k=1}f_{\varepsilon }(c_{\tau _{k}})\). Similarly, we can obtain \(\text{ Var }\{{\widetilde{B}_{n,k}(\xi )}\}=o(1)\). Then \(\widetilde{B}_{n,k}(\xi )=\frac{1}{2}\xi ^\mathrm{T}\frac{f_U(u)}{\theta }S(u)\xi +O_p(\delta _n)\). According to Lemma 5.2 in Liang and Baek (2016), we have

$$\begin{aligned} \sup _y|G_n(y)-G(y)|=O_p(n^{-1/2}). \end{aligned}$$
(A.1)
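The estimator \(G_n\) above is the Lynden-Bell (1971) product-limit estimator of the truncation distribution G. A minimal sketch of its standard no-ties form (our reconstruction from the cited literature, with an assumed simulation design; not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Left truncation: the pair (T, Y) is observed only when T <= Y.
m = 2000
T_all = rng.uniform(-1, 1, m)            # truncation variable, G = U(-1, 1)
Y_all = rng.normal(0.5, 1.0, m)          # response before truncation
keep = T_all <= Y_all
T, Y = T_all[keep], Y_all[keep]
n = len(T)

def C_n(z):
    # empirical risk-set proportion C_n(z) = (1/n) #{i : T_i <= z <= Y_i}
    return np.mean((T <= z) & (z <= Y))

def G_n(y):
    # Lynden-Bell product-limit estimator of G(y) = P(T <= y), no-ties form
    # (T and Y are continuous here, so ties are negligible)
    pts = T[T > y]
    return float(np.prod([1.0 - 1.0 / (n * C_n(t)) for t in pts]))

ys = np.array([-0.5, 0.0, 0.5])
g_hat = np.array([G_n(y) for y in ys])
print(g_hat, (ys + 1) / 2)               # estimator vs true G on (-1, 1)
```

Each factor lies in [0, 1), so \(G_n\) is a nondecreasing step function, and it tracks the true truncation distribution at the \(\sqrt{n}\) rate stated in (A.1).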

By some calculations, we can obtain

$$\begin{aligned} | B_{n,k}(\xi )- \widetilde{B}_{n,k}(\xi )|=O_p(h^{\frac{1}{2}})=o_p(1). \end{aligned}$$
(A.2)

Thus,

$$\begin{aligned} Q_{n}(\xi )&=W_{n,k}^\mathrm{T}(u)\xi +E[\widetilde{B}_{n,k}(\xi )]+O_p(\delta _n)\\&=W_{n,k}^\mathrm{T}(u)\xi +\frac{1}{2}\xi ^\mathrm{T}\frac{f_U(u)}{\theta }S(u)\xi +O_p(\delta _n+h^2). \end{aligned}$$

According to Lemma A.2, the minimizer of \(Q_{n}(\xi )\) can be expressed as

$$\begin{aligned} \hat{\xi }=-\theta f^{-1}_U(u)S^{-1}(u)W_{n,k}(u)+o_p(1). \end{aligned}$$
(A.3)

Therefore,

$$\begin{aligned} \sqrt{nh}\left( \begin{array}{ccc} \hat{a}_{0,1}-\alpha _0(u)-c_{\tau _{1}}\\ \vdots \\ \hat{a}_{0,q}-\alpha _0(u)-c_{\tau _{q}}\\ \hat{\mathbf{a}}-\alpha (u) \end{array}\right) =-\theta f^{-1}_U(u)S^{-1}_1(u)W^*_{n,k}(u)+o_p(1), \end{aligned}$$
(A.4)

where \(W^*_{n,k}(u)=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G_n(Y_i)}\eta ^{*}_{i,k}(e_k^\mathrm{T},X_i^\mathrm{T})^\mathrm{T}\).

Denote \(\widetilde{W}^*_{n,k}(u)=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta _{i,k}(e_k^\mathrm{T},X_i^\mathrm{T})^\mathrm{T}:=(w_{11}, \ldots , w_{1q},w_{21})^\mathrm{T}\), where \(w_{1k}=(nh)^{-1/2}\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta _{i,k}, k=1,\ldots ,q\), and \(w_{21}=(nh)^{-1/2}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta _{i,k}X_i\). Note that \(\text{ Cov }(\eta _{i,k}, \eta _{i,k^{'}})=\tau _{kk^{'}}=\tau _k\wedge \tau _{k^{'}}-\tau _k\tau _{k^{'}}\) and \(\text{ Cov }(\eta _{i,k}, \eta _{j,k^{'}})=0\) if \(i\ne j\). Then
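The moment identity \(\text{Cov}(\eta _{i,k}, \eta _{i,k^{'}})=\tau _k\wedge \tau _{k^{'}}-\tau _k\tau _{k^{'}}\) is easy to confirm by simulation; the sketch below takes uniform errors, for which \(c_{\tau }=\tau \).

```python
import numpy as np

rng = np.random.default_rng(4)

# With eps ~ U(0, 1) the tau-quantile is c_tau = tau, so
# eta_k = I(eps <= tau_k) - tau_k has mean 0 and
# Cov(eta_k, eta_k') = min(tau_k, tau_k') - tau_k * tau_k'.
eps = rng.uniform(0, 1, 500_000)
tau_k, tau_kp = 0.3, 0.7
eta_k = (eps <= tau_k) - tau_k
eta_kp = (eps <= tau_kp) - tau_kp
cov_mc = np.mean(eta_k * eta_kp)
cov_theory = min(tau_k, tau_kp) - tau_k * tau_kp
print(cov_mc, cov_theory)  # theory gives 0.3 - 0.21 = 0.09
```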

$$\begin{aligned} E(w_{1k})&=\frac{1}{\sqrt{nh}}\sum _{i=1}^nE\bigg \{{\frac{K_i(u)}{G(Y_i)}\eta _{i,k}}\bigg \}\\&=\frac{1}{\theta \sqrt{nh}}\sum _{i=1}^n\mathbb {E}\bigg [K_i(u)\mathbb {E}\big \{(I(\varepsilon _i\le c_{\tau _{k}})-\tau _k)|U\big \}\bigg ]\\&=\frac{1}{\theta \sqrt{nh}}\sum _{i=1}^n\mathbb {E}\bigg [K_i(u)\{F_{\varepsilon _i}(c_{\tau _{k}})-\tau _k\}\bigg ]=0. \end{aligned}$$

Similarly, we can obtain \(E(w_{21})=0\). On the other hand,

$$\begin{aligned} \text{ Cov }&(w_{1k}, w_{1k^{'}})=E(w_{1k}w_{1k^{'}})=\frac{1}{nh}\sum _{i=1}^nE\bigg \{{\frac{K^2_i(u)}{G^2(Y_i)}\eta _{i,k}\eta _{i,k^{'}}}\bigg \}\\ =&\frac{1}{h}\int \int \int \frac{K^2(\frac{\mu -u}{h})}{G^2(y)}\{I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k}})-\tau _k\}\\&\{I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k^{'}}})-\tau _{k^{'}}\}f^{*}(\mu ,x,y){\hbox {d}}\mu {\hbox {d}}x {\hbox {d}}y\\ =&\frac{1}{\theta h}\int \int \int \frac{K^2(\frac{\mu -u}{h})}{G(y)}\{I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k}})-\tau _k\}\\&\{I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k^{'}}})-\tau _{k^{'}}\}f(\mu ,x,y){\hbox {d}}\mu {\hbox {d}}x {\hbox {d}}y\\ =&\frac{1}{\theta h}\int \int \int \frac{K^2(\frac{\mu -u}{h})f(\mu ,x,y)}{G(y)}\Big \{I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k}}\wedge c_{\tau _{k^{'}}})-\\&\tau _kI(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k^{'}}})-\tau _{k^{'}}I(y\le \alpha _0(\mu )+x^\mathrm{T}\alpha (\mu )+c_{\tau _{k}})+\tau _k\tau _{k^{'}}\Big \}{\hbox {d}}\mu {\hbox {d}}x {\hbox {d}}y\\ \rightarrow&\frac{1}{\theta }\int \int \int \frac{K^2(t)f(u,x,y)}{G(y)}\Big \{I(y\le \alpha _0(u)+x^\mathrm{T}\alpha (u)+c_{\tau _{k}}\wedge c_{\tau _{k^{'}}})-\\&\tau _kI(y\le \alpha _0(u)+x^\mathrm{T}\alpha (u)+c_{\tau _{k^{'}}})-\tau _{k^{'}}I(y\le \alpha _0(u)+x^\mathrm{T}\alpha (u)+c_{\tau _{k}})+\tau _k\tau _{k^{'}}\Big \}{\hbox {d}}x{\hbox {d}}t{\hbox {d}}y+o(1)\\ =&\frac{\nu _0 f_U(u)}{\theta }\lambda ^0_{kk^{'}}(u):=\frac{\nu _0 f_U(u)}{\theta }A_{11}(u). \end{aligned}$$

Similarly, we can obtain that \(\text{ Cov }(w_{1k}, w_{21})=\frac{\nu _0 f_U(u)}{\theta }\sum _{k^{'}=1}^q\lambda ^1_{kk^{'}}(u):=\frac{\nu _0 f_U(u)}{\theta }A_{12}(u)\) and \(\text{ Var }(w_{21})=\frac{\nu _0 f_U(u)}{\theta }\sum _{k=1}^q\sum _{k^{'}=1}^q\lambda ^2_{kk^{'}}(u):=\frac{\nu _0 f_U(u)}{\theta }A_{22}(u)\).

By the Cramér–Wold device and the central limit theorem, we have

$$\begin{aligned} \widetilde{W}^*_{n,k}(u){\mathop {\rightarrow }\limits ^{\mathcal {D}}}N\Big (0,\frac{\nu _0 f_U(u)}{\theta }A(u)\Big ). \end{aligned}$$

Define \(\overline{W}^*_{n,k}(u)=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta ^*_{i,k}(e_k^\mathrm{T},X_i^\mathrm{T})^\mathrm{T}:=(\bar{w}_{11}, \ldots , \bar{w}_{1q},\bar{w}_{21})^\mathrm{T},\) where \(\bar{w}_{1k}=\frac{1}{\sqrt{nh}}\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta ^*_{i,k}, k=1,\ldots ,q,\) and \(\bar{w}_{21}=\frac{1}{\sqrt{nh}}\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta ^*_{i,k}X_i\).

By some calculations, we have

$$\begin{aligned}&\text{ Var }(\bar{w}_{1k}-\tilde{w}_{1k})\le \frac{C_0}{\theta G(a_F)nh}\sum _{i=1}^n\mathbb {E}\{K^2_i(u)(\eta ^*_{i,k}-\eta _{i,k})^2\}=o(1)~\text{ and }~\\&\quad \text{ Var }(\bar{w}_{21}-\tilde{w}_{21})=o(1). \end{aligned}$$

Thus \(\text{ Var }\{\overline{W}^*_{n,k}(u)-\widetilde{W}^*_{n,k}(u)\}=o(1)\). By Slutsky’s theorem, we have

$$\begin{aligned} \overline{W}^*_{n,k}(u)-E\{\overline{W}^*_{n,k}(u)\}{\mathop {\rightarrow }\limits ^{\mathcal {D}}}N\Big (0,\frac{\nu _0 f_U(u)}{\theta }A(u)\Big ). \end{aligned}$$
(A.5)

Note that

$$\begin{aligned} W^*_{n,k}(u)=W^*_{n,k}(u)-\overline{W}^*_{n,k}(u)+\big [\overline{W}^*_{n,k}(u)-E\{\overline{W}^*_{n,k}(u)\}\big ]+E\{\overline{W}^*_{n,k}(u)\}. \end{aligned}$$
(A.6)

Similar to the proof of (A.2), we have \(W^*_{n,k}(u)-\overline{W}^*_{n,k}(u)=o_p(1)\). Thus,

$$\begin{aligned} W^*_{n,k}(u)-E\{\overline{W}^*_{n,k}(u)\}=\overline{W}^*_{n,k}(u)-E\{\overline{W}^*_{n,k}(u)\}+o_p(1). \end{aligned}$$
(A.7)

Next we calculate the mean of \(\overline{W}^*_{n,k}(u)\). In fact

$$\begin{aligned} \frac{1}{\sqrt{nh}}E(\overline{W}^*_{n,k}(u))&=\frac{1}{nh}E\bigg \{\sum _{k=1}^q\sum _{i=1}^n\frac{K_i(u)}{G(Y_i)}\eta ^*_{i,k}( e^\mathrm{T}_k,X_i^\mathrm{T})^\mathrm{T}\bigg \}\nonumber \\&=\frac{1}{nh\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\big [K_i(u)\mathbb {E}\{(I(\varepsilon _i-c_{\tau _{k}}+ r_i(u)\le 0)-\tau _k)|U,X\}(e^\mathrm{T}_k,X_i^\mathrm{T})^\mathrm{T}\big ]\nonumber \\&=\frac{1}{nh\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\big [K_i(u)\{F_{\varepsilon }(c_{\tau _{k}}- r_i(u))-F_{\varepsilon }(c_{\tau _{k}})\}(e^\mathrm{T}_k,X_i^\mathrm{T})^\mathrm{T}\big ]\nonumber \\&=-\frac{1}{nh\theta }\sum _{k=1}^q\sum _{i=1}^n\mathbb {E}\big \{K_i(u)f_{\varepsilon }(c_{\tau _{k}})r_i(u)\big (1+o(1)\big )(e^\mathrm{T}_k,X_i^\mathrm{T})^\mathrm{T}\big \}\nonumber \\&=-\frac{\mu _2h^2}{2\theta }f_U(u)S_1(u)\bigg (\begin{array}{ccc} \alpha _0''(u)\\ \alpha ''(u) \end{array}\bigg )+o_p(h^2). \end{aligned}$$
(A.8)

Combining (A.4)–(A.8), the proof of Theorem 2.2 is completed.

About this article

Cite this article

Xu, HX., Fan, GL., Chen, ZL. et al. Weighted quantile regression and testing for varying-coefficient models with randomly truncated data. AStA Adv Stat Anal 102, 565–588 (2018). https://doi.org/10.1007/s10182-018-0319-6

