Simultaneous confidence bands for extremal quantile regression with splines

Abstract

This study investigates simultaneous confidence bands for extremal quantile regression using splines. We construct the spline estimator for intermediate order quantiles within a conventional quantile regression framework, and we obtain the extreme order quantile estimator by extrapolating the intermediate order spline estimator. We establish the asymptotic normality of the spline and extrapolated estimators for intermediate and extreme order quantiles. By applying the volume-of-tube formula to these two estimators, we construct simultaneous conditional quantile confidence bands for intermediate and extreme order quantiles. We assess the performance of the proposed confidence bands through a Monte Carlo simulation and a real data example.
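The following is a minimal numerical sketch of the first step of this procedure, the intermediate-order spline quantile fit; it is an illustration only, not the implementation used in the paper. The simulated Pareto-type data, the equally spaced knots, the cubic degree p = 3, and the level τI are assumptions made for the example; the B-spline design matrix is built with SciPy and the quantile fit uses QuantReg from statsmodels. The extreme value index estimation and the extrapolation step are sketched separately in the Appendix.

```python
# Minimal sketch of the intermediate-order spline quantile fit (illustrative only).
import numpy as np
from scipy.interpolate import BSpline
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 1.0, n)
# Assumed Pareto-type conditional model: Q_Y(tau|x) = (H(x)/(1 - tau))^gamma
gamma_true = 0.3
H = lambda z: 1.0 + np.sin(2 * np.pi * z) ** 2
y = (H(x) / rng.uniform(size=n)) ** gamma_true        # conditional Pareto-type draws

def bspline_design(z, K=10, p=3, a=0.0, b=1.0):
    """B-spline design matrix with K equally spaced interior knots and degree p."""
    interior = np.linspace(a, b, K + 2)[1:-1]
    knots = np.r_[[a] * (p + 1), interior, [b] * (p + 1)]
    nb = len(knots) - p - 1                            # number of basis functions
    return BSpline(knots, np.eye(nb), p)(z)            # shape (len(z), nb)

tau_I = 1.0 - 50.0 / n                                 # an intermediate order level
B = bspline_design(x)
fit = QuantReg(y, B).fit(q=tau_I)                      # b_tilde(tau_I)
Q_tilde = lambda znew: bspline_design(znew) @ fit.params   # tilde Q_Y(tau_I | x)

grid = np.linspace(0.01, 0.99, 99)
print(Q_tilde(grid)[:5])
```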

References

  • Agarwal, G.G., Studden, W.J.: Asymptotic integrated mean square error using least squares and bias minimizing splines. Ann. Statist. 8, 1307–1325 (1980)

  • Barrow, D.L., Smith, P.W.: Asymptotic properties of best L2[0, 1] approximation by spline with variable knots. Quart. Appl. Math. 36, 293–304 (1978)

  • Bondell, H.D., Reich, B.J., Wang, H.: Non-crossing quantile regression curve estimation. Biometrika 97, 825–838 (2010)

  • Chernozhukov, V.: Extremal quantile regression. Ann. Statist. 33, 806–839 (2005)

  • Chernozhukov, V., Fernández-Val, I.: Inference for extremal conditional quantile models, with an application to market and birthweight risks. Rev. Econ. Stud. 78, 559–589 (2011)

  • Daouia, A., Gardes, L., Girard, S.: On kernel smoothing for extremal quantile regression. Bernoulli 19, 2557–2589 (2013)

  • de Haan, L., Ferreira, A.: Extreme Value Theory: An Introduction. Springer, New York (2006)

  • He, F., Cheng, Y., Tong, T.: Estimation of extreme conditional quantiles through an extrapolation of intermediate regression quantiles. Statist. Probab. Lett. 113, 30–37 (2016)

  • Hill, B.M.: A simple general approach to inference about the tail of a distribution. Ann. Statist. 3, 1163–1174 (1975)

  • Horowitz, J.L., Krishnamurthy, A.: A bootstrap method for constructing pointwise and uniform confidence bands for conditional quantile functions. Statistica Sinica. https://doi.org/10.5705/ss.202017.0013 (2018)

  • Knight, K.: Limiting distributions for L1 regression estimators under general conditions. Ann. Statist. 26, 755–770 (1998)

  • Knight, K.: Limiting distributions of linear programming estimators. Extremes 4, 87–103 (2001)

  • Koenker, R.: Quantile Regression. Cambridge University Press, Cambridge (2005)

  • Koenker, R.: Additive models for quantile regression: Model selection and confidence bandaids. Brazil. J. Probab. Statist. 25, 239–262 (2011)

  • Koenker, R., Bassett, G.: Regression quantiles. Econometrica 46, 33–50 (1978)

  • Krivobokova, T., Kneib, T., Claeskens, G.: Simultaneous confidence bands for penalized spline estimators. J. Amer. Statist. Assoc. 105, 852–863 (2010)

  • Liang, X., Zou, T., Guo, B., Li, S., Zhang, H., Zhang, S., Huang, H., Chen, S.X.: Assessing Beijing’s PM2.5 pollution: Severity, weather impact, APEC and winter heating. Proc. R. Soc. A, 471 (2015)

  • Lim, Y., Oh, S.: Simultaneous confidence interval for quantile regression. Comput. Stat. 30, 345–358 (2015)

  • Pollard, D.: Asymptotics for least absolute deviation regression estimators. Econometric Theory 7, 186–199 (1991)

  • Portnoy, S., Jurečková, J.: On extreme regression quantiles. Extremes 2, 227–243 (1999)

  • Smith, R.L.: Nonregular regression. Biometrika 81, 173–183 (1994)

  • Song, S., Ritov, Y., Härdle, W.K.: Bootstrap confidence bands and partial linear quantile regression. J. Mult. Anal. 107, 244–262 (2012)

  • Sun, J.: Tail probabilities of the maxima of Gaussian random fields. Ann. Probab. 21, 34–71 (1993)

  • Sun, J., Loader, C.R.: Simultaneous confidence bands for linear regression and smoothing. Ann. Statist. 22, 1328–1345 (1994)

  • Wang, H.J., Li, D., He, X.: Estimation of high dimensional conditional quantiles for heavy-tailed distributions. J. Amer. Statist. Assoc. 107, 1453–1464 (2012)

  • Weissman, I.: Estimation of parameters and large quantiles based on the k largest observations. J. Amer. Statist. Assoc. 73, 812–815 (1978)

  • Yoshida, T.: Asymptotics for penalized spline estimators in quantile regression. Communications in Statistics-Theory and Methods. https://doi.org/10.1080/03610926.2013.765477. Online Only (2013)

  • Yoshida, T.: Nonparametric smoothing for extremal quantile regression with heavy-tailed data. REVSTAT-Statistical Journal, forthcoming. https://www.ine.pt/revstat/forthcoming_papers.html (2019)

  • Zhou, S., Shen, X., Wolfe, D.A.: Local asymptotics for regression splines and confidence regions. Ann. Statist. 26, 1760–1782 (1998)

Acknowledgements

The author gratefully acknowledges the valuable input of the Editor, the Associate Editor, and the two anonymous referees, which improved the presentation of this paper. The research of the author was partially supported by KAKENHI 26730019 and KAKENHI 18K18011.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

Here, we give the proofs of the theorems discussed in Sections 3 and 4. Theorem 1 can be proven by an argument similar to that used for Theorem 1 in Yoshida (2019); we provide only a brief sketch of the proof.

Proof of Theorem 1

Let \(\boldsymbol {b}_{0}(\tau )\in \mathbb {R}^{K+p}\) be the minimizer of

$$ E[\rho_{\tau}(Y-\boldsymbol{B}(X)^{T}\boldsymbol{b})] $$

with respect to \(\boldsymbol{b}\in \mathbb{R}^{K+p}\). Since ρτ(u) is convex and QY(τ|x) is the minimizer of E[ρτ(Y − u)] over u, the spline B(x)Tb0(τ) is the best approximation of QY(τ|x). From Barrow and Smith (1978), we have

$$ \begin{array}{@{}rcl@{}} \boldsymbol{B}(x)^{T}\boldsymbol{b}_{0}(\tau)-Q_{Y}(\tau|x)=K^{-m}Q(\tau)b(x)(1+o(1)) \end{array} $$
(9)

as \(K\rightarrow \infty \) under Conditions A–B. In particular, to obtain (9), Conditions B1–B3 are needed to apply the result of Barrow and Smith (1978). Next, we derive the difference between \(\tilde{Q}_{Y}(\tau|x)=\boldsymbol{B}(x)^{T}\tilde{\boldsymbol{b}}(\tau)\) and \(\boldsymbol{B}(x)^{T}\boldsymbol{b}_{0}(\tau)\). Define \(U_{i}=Y_{i}-\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)\) and \(a(n,\tau)=\sqrt{n(1-\tau)}\). We let

$$ R_{n}(\tau|\boldsymbol{\delta})=\sum\limits_{i=1}^{n} \rho_{\tau}(U_{i}-\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}/a(n,\tau))-\rho_{\tau}(U_{i}). $$

Then, the minimizer \(\tilde {\boldsymbol {\delta }}(\tau )\) of Rn is

$$ \tilde{\boldsymbol{\delta}}(\tau)=a(n,\tau)(\tilde{\boldsymbol{b}}(\tau)-\boldsymbol{b}_{0}(\tau)). $$

From the result of Pollard (1991), we know that \(\tilde {\boldsymbol {\delta }}(\tau )\) is asymptotically equivalent to the minimizer of the asymptotic form of Rn(τ|δ). Therefore, we now investigate the asymptotic behavior of Rn(τ|δ). Knight’s identity (Knight 1998) yields

$$ \begin{array}{@{}rcl@{}} \rho_{\tau}(u-v)-\rho_{\tau}(u)=-v(\tau-I(u<0))+{{\int}_{0}^{v}} \{I(u\leq s)-I(u\leq 0)\}ds \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} R_{n}(\tau|\boldsymbol{\delta})=W_{n}(\tau)^{T}\boldsymbol{\delta}+G_{n}(\tau|\boldsymbol{\delta}), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} W_{n}(\tau)&\equiv&\frac{-1}{\sqrt{(1-\tau) n}}\sum\limits_{i=1}^{n} (\tau-I(Y_{i}<\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)))\boldsymbol{B}(x_{i}) \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} G_{n}(\boldsymbol{\delta}|\tau)&\equiv&\sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}/a(n,\tau)}I(U_{i}\leq s)-I(U_{i}\leq 0)ds. \end{array} $$

We now derive the asymptotic behavior of Gn(δ|τ). Write

$$ \begin{array}{@{}rcl@{}} &&G_{n}(\boldsymbol{\delta}|\tau)\\ &&= \sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}/a(n,\tau)}E[I(U_{i}\leq s)-I(U_{i}\leq 0)]ds\\ &&\quad + \sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}/a(n,\tau)}\{I(U_{i}\leq s)-I(U_{i}\leq 0)-E[I(U_{i}\leq s)-I(U_{i}\leq 0)]\}ds\\ &&\equiv G_{1n}(\boldsymbol{\delta}|\tau)+G_{2n}(\boldsymbol{\delta}|\tau). \end{array} $$

Taylor’s theorem yields

$$ \begin{array}{@{}rcl@{}} &&G_{1n}(\boldsymbol{\delta}|\tau)\\ &&= \frac{1}{a(n,\tau)}\sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}}F_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau) + s/a(n,\tau)|x_{i}) - F_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)|x_{i})ds\\ &&= \sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}}f_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)|x_{i})\frac{s}{a(n,\tau)^{2}}ds\\ &&\quad + \sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}}f^{\prime}_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)+\theta s/(a(n,\tau))|x_{i})\frac{s^{2}}{a(n,\tau)^{3}}ds\\ &&= \sum\limits_{i=1}^{n} \frac{f_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)|x_{i})}{2n(1-\tau)}\boldsymbol{\delta}^{T}\boldsymbol{B}(x_{i})\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}\\ &&\quad + \sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}}f^{\prime}_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)+\theta s/(a(n,\tau))|x_{i})\frac{s^{2}}{a(n,\tau)^{3}}ds, \end{array} $$

where fY(⋅|x) is the conditional density function of Y given X = x and 𝜃 ∈ (0, 1). Here, differentiating τ = FY(QY(τ|x)|x) with respect to τ shows that \(f_{Y}(Q_{Y}(\tau|x)|x)=\{\partial Q_{Y}(\tau|x)/\partial \tau\}^{-1}\). Therefore, under Conditions A1 and A4, we obtain

$$ f_{Y}(Q_{Y}(\tau|x)|x) = \gamma^{-1}H(x)^{-\gamma} (1-\tau)^{\gamma+1}L(H(x)/(1-\tau))^{-1}(1+o(1)) = O \left( \frac{1 - \tau}{Q(\tau)}\right) . $$

Similarly, we obtain

$$ f^{\prime}_{Y}(Q_{Y}(\tau|x)|x)=-\left\{\frac{\partial Q_{Y}(\tau|x)}{\partial \tau}\right\}^{-2}\frac{\partial^{2} Q_{Y}(\tau|x)}{\partial \tau^{2}}f_{Y}(Q_{Y}(\tau|x)|x). $$

and \(f^{\prime }_{Y}(Q_{Y}(\tau |x)|x)=O((1-\tau )^{2\gamma +1})\), which indicates \(|f^{\prime }_{Y}(Q_{Y}(\tau |x)|x)| < C_{1}(1-\tau )^{2\gamma +1}\) for some constant C1 > 0. From the property of B-spline approximation, we have

$$ |f_{Y}(\boldsymbol{B}(x)^{T}\boldsymbol{b}_{0}(\tau)|x)-f_{Y}(Q_{Y}(\tau|x)|x)|=O(f^{\prime}_{Y}(Q_{Y}(\tau|x)|x)Q(\tau)K^{-m}), $$

which can be bounded by \(C_{2}(1-\tau)^{2\gamma+1}Q(\tau)K^{-m}\) for a constant C2 > 0. Therefore,

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{i=1}^{n} \frac{f_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)|x_{i})}{2n(1-\tau)}\boldsymbol{\delta}^{T}\boldsymbol{B}(x_{i})\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}\\ &&\leq \sum\limits_{i=1}^{n} \frac{f_{Y}(Q_{Y}(\tau|x_{i})|x_{i})}{2n(1-\tau)}\boldsymbol{\delta}^{T}\boldsymbol{B}(x_{i})\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}\\ &&\quad + \sum\limits_{i=1}^{n} \frac{C_{2} (1-\tau)^{2\gamma+1}Q(\tau)K^{-m}}{2n(1-\tau)}\boldsymbol{\delta}^{T}\boldsymbol{B}(x_{i})\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}\\ &&= \frac{\gamma^{-1}Q(\tau)}{2}\boldsymbol{\delta}^{T}\left( \frac{1}{n}\sum\limits_{i=1}^{n} H(x_{i})^{-\gamma}\boldsymbol{B}(x_{i})\boldsymbol{B}(x_{i})^{T}\right)\boldsymbol{\delta}+C_{3} Q(\tau)(1-\tau)^{2\gamma}K^{-m-1} \end{array} $$

for a constant C3 > 0, since \(n^{-1}{\sum }_{i=1}^{n} \boldsymbol {B}(x_{i})\boldsymbol {B}(x_{i})^{T}= O(K^{-1})\). We can evaluate

$$ \begin{array}{@{}rcl@{}} &&\sum\limits_{i=1}^{n} {\int}_{0}^{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}}|f^{\prime}_{Y}(\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)+\theta s/(a(n,\tau))|x_{i})|\frac{s^{2}}{a(n,\tau)^{3}}ds\\ &&\leq C_{4}\frac{(1-\tau)^{2\gamma+1}}{n^{1/2}(1-\tau)(1-\tau)^{1/2}} \frac{1}{n}\sum\limits_{i=1}^{n} \{\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}\}^{3}\\ &&\leq C_{4} \frac{Q(\tau)}{K}\frac{(1-\tau)^{3\gamma}}{\{n(1-\tau)\}^{1/2}} \end{array} $$

for a constant C4 ≥ 0. Therefore, we obtain

$$ \begin{array}{@{}rcl@{}} \lim\limits_{n\rightarrow\infty}\sup\limits_{\tau\in[\tau_{1n},\tau_{2n}]} \left|\frac{K}{Q(\tau)}G_{1n}(\boldsymbol{\delta}|\tau)-\frac{1}{2}\boldsymbol{\delta}^{T}G(H^{-\gamma})\boldsymbol{\delta}\right|=0. \end{array} $$

Next, we show \(\lim _{n\rightarrow \infty }\sup _{\tau \in [\tau _{1n},\tau _{2n}]} KG_{2n}(\boldsymbol {\delta }|\tau )/Q(\tau )=0\). From the proof of Theorem 4.1 of Koenker (2005), we obtain

$$ V\left[\frac{K}{Q(\tau)}G_{2n}(\boldsymbol{\delta}|\tau)\right] \leq \frac{\max_{i}|\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}|}{\sqrt{n(1-\tau)}} E\left[\frac{K}{Q(\tau)}G_{2n}(\boldsymbol{\delta}|\tau)\right]\leq C_{5} \frac{1}{\sqrt{n(1-\tau)}} $$

for some constant C5 > 0. Meanwhile, it is easy to show that there exists a constant M > 0 such that

$$ \begin{array}{@{}rcl@{}} &&\frac{K}{Q(\tau)}\frac{|\boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}|}{a(n,\tau)} \left|{{\int}_{0}^{1}}\left\{I(U_{i}\leq \boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}s/a(n,\tau)) - I(U_{i}\leq 0)\right.\right.\\ &&\quad \left.\left. -E[I(U_{i}\leq \boldsymbol{B}(x_{i})^{T}\boldsymbol{\delta}s/a(n,\tau)) - I(U_{i}\leq 0)]\right\}ds\right|\leq \frac{K}{Q(\tau) \sqrt{n(1-\tau)}} M \end{array} $$

since \(I(\cdot)\) is an indicator function and \(E[I(U_{i}\leq a)]\leq 1\). Therefore, from Bernstein’s inequality, we obtain for t > 0 that

$$ \begin{array}{@{}rcl@{}} P\left( \frac{K}{Q(\tau)}|G_{2n}(\boldsymbol{\delta}|\tau)|\geq t\right)\leq 2 \exp\left[-\frac{t^{2}}{2}\frac{\sqrt{n(1-\tau)}}{C_{5} +\frac{ M t K}{Q(\tau)}}\right]\leq 2 \exp\left[-\frac{t^{2}}{2}\frac{\sqrt{n(1-\tau)}}{C_{6}}\right] \end{array} $$

for some constant C6 ≥ C5. Putting \(t= C_{7}\{n(1-\tau _{2})\}^{-1/4+\nu }, \nu \in (0,1/4), C_{7}>0\), we have

$$ \begin{array}{@{}rcl@{}} \frac{K}{Q(\tau)}|G_{2n}(\boldsymbol{\delta}|\tau)|\leq C_{7}\{n(1-\tau_{2})\}^{-1/4+\nu} \end{array} $$

with probability one. This indicates that

$$ \lim\limits_{n\rightarrow\infty}\sup\limits_{\tau\in[\tau_{1},\tau_{2}]}\frac{K}{Q(\tau)}|G_{2n}(\boldsymbol{\delta}|\tau)|=0. $$

Consequently, under Conditions A1 and A4, we can show that

$$ G_{n}(\boldsymbol{\delta}|\tau) =\frac{Q(\tau)}{K}\left\{\frac{1}{2}\gamma^{-1}\boldsymbol{\delta}^{T}G(H^{-\gamma})\boldsymbol{\delta}+r_{n}(\tau)\right\}, $$

where \(\lim _{n\rightarrow \infty }\sup _{\tau \in [\tau _{1n},\tau _{2n}]} |r_{n}(\tau )|=0\).

Therefore, Rn(τ|δ) is asymptotically equivalent to

$$ W_{n}(\tau)^{T}\boldsymbol{\delta}+\frac{Q(\tau)}{2K}\gamma^{-1}\boldsymbol{\delta}^{T}G(H^{-\gamma})\boldsymbol{\delta} $$

uniformly for τ ∈ [τ1, τ2]. We define

$$ \boldsymbol{\varepsilon}=\frac{-\sqrt{K}}{\sqrt{(1-\tau) n}}\sum\limits_{i=1}^{n} (\tau-I(Y_{i}<\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)))G^{-1/2}\boldsymbol{B}(x_{i}). $$

and then get

$$ \begin{array}{@{}rcl@{}} \tilde{\boldsymbol{\delta}}(\tau)&=& K\gamma G(H^{-\gamma})^{-1}W_{n}(\tau)(1+o_{P}(1))\\ &=&\gamma K^{1/2} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}(1+o_{P}(1)), \end{array} $$

uniformly for τ ∈ [τ1, τ2].

Thus, to obtain the first assertion of Theorem 1, it suffices to show that ε is asymptotically distributed as NK+p(0, I). By a Taylor expansion, we see that \(E[I(Y_{i}<\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau))]=P(Y_{i}<\boldsymbol{B}(x_{i})^{T}\boldsymbol{b}_{0}(\tau)|x_{i})=F_{Y}(Q_{Y}(\tau|x_{i})|x_{i})+f_{Y}(Q_{Y}(\tau|x_{i})|x_{i})K^{-m}Q(\tau)b(x_{i})(1+o(1))\).

Furthermore, we have \({{\int }_{a}^{b}} b(x)\boldsymbol {B}(x)dx=O(K^{-2})\) from the properties of the Bernoulli polynomial and the B-spline basis (Agarwal and Studden 1980). Therefore, we obtain

$$ \begin{array}{@{}rcl@{}} E[\boldsymbol{\varepsilon}]&=&O(nK^{1/2}f_{Y}(Q_{Y}(\tau|x)|x)K^{-m}Q(\tau)\{n(1-\tau)\}^{-1/2}){{\int}_{a}^{b}} b(x)\boldsymbol{B}(x)dx\\ &=&O(K^{-m-3/2}\sqrt{n(1-\tau)})\\ &\leq &O(K^{-m-3/2}\sqrt{n(1-\tau_{2})})\\ &=&o(1). \end{array} $$

That is, E[ε] is dominated by the B-spline model bias and is asymptotically negligible. Meanwhile, the variance of ε satisfies \(V[\boldsymbol{\varepsilon}]\rightarrow I\) uniformly for \(\tau\in[\tau_{1},\tau_{2}]\) as \(n\rightarrow\infty\) and \(\tau_{1},\tau_{2}\rightarrow 1\). Similar to Lemma 9.6 of Chernozhukov (2005), ε is asymptotically normal with mean 0 and covariance matrix I. Consequently, since

$$ \begin{array}{@{}rcl@{}} \frac{1}{a(n,\tau)}\boldsymbol{B}(x)^{T}\tilde{\boldsymbol{\delta}}(\tau)&=& \boldsymbol{B}(x)^{T}\tilde{\boldsymbol{b}}(\tau)-\boldsymbol{B}(x)^{T}\boldsymbol{b}_{0}(\tau)\\ &=&\tilde{Q}_{Y}(\tau|x)-Q_{Y}(\tau|x)-K^{-m}Q(\tau)b(x)(1+o(1)), \end{array} $$

the intermediate order quantile estimator has the asymptotic form

$$ \tilde{Q}_{Y}(\tau|x)-Q_{Y}(\tau|x)-b(\tau|x)=\frac{Q(\tau)K^{1/2}}{\sqrt{n(1-\tau)}}\gamma \boldsymbol{B}(x)^{T}G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}(1+o_{P}(1)) $$

uniformly for τ ∈ [τ1, τ2].

From this and QY(τ|x) = h(x)Q(τ)(1 + o(1)), we obtain

$$ \begin{array}{@{}rcl@{}} \frac{\tilde{Q}_{Y}(\tau|x)}{Q_{Y}(\tau|x)}-1&=&\frac{K^{-m}b(x)}{h(x)}(1+o(1))\\ &&+\frac{K^{1/2}}{\sqrt{n(1-\tau)}}\frac{\gamma}{h(x)} \boldsymbol{B}(x)^{T}G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}(1+o_{P}(1)). \end{array} $$
(10)

Hence, the MISE is

$$ \begin{array}{@{}rcl@{}} E\left[\left\{\frac{\tilde{Q}_{Y}(\tau|x)}{Q_{Y}(\tau|x)}-1\right\}^{2}\right] &=& O(K^{-2m})+O\left( \frac{K}{n(1-\tau)}\right) \end{array} $$

from Lemmas 6.2 and 6.3 of Zhou et al. (1998). Finally, \(K=O(\{n(1-\tau)\}^{1/(2m+1)})\) yields the optimal rate of convergence of the MISE. This completes the second assertion. □
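As a small arithmetic illustration of this balance (the values of n, τ, and m below are arbitrary assumptions), taking K of the order {n(1 − τ)}^{1/(2m+1)} equates the squared-bias order K^{−2m} and the variance order K/(n(1 − τ)):

```python
# Illustrative check that K ~ {n(1-tau)}^{1/(2m+1)} balances the two MISE terms.
n, tau, m = 5000, 0.95, 2            # assumed values for illustration only
eff = n * (1 - tau)                  # effective sample size n(1 - tau)
K = eff ** (1.0 / (2 * m + 1))
print("squared-bias order K^(-2m):  ", K ** (-2 * m))
print("variance order K/(n(1-tau)): ", K / eff)
print("rate {n(1-tau)}^(-2m/(2m+1)):", eff ** (-2.0 * m / (2 * m + 1)))
# all three numbers coincide, which is the optimal MISE rate in Theorem 1
```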

We next give the proof of Theorem 2. For this, we first derive the asymptotic expression of \(\hat {\gamma }(x)\).

Lemma 1

Under the same conditions as Theorem 2,

$$ \begin{array}{@{}rcl@{}} \hat{\gamma}(x)-\gamma&=& \frac{m}{m+1}k^{-m/(2m+1)}\frac{b(x)}{h(x)}\\ && +\frac{m}{m+1}k^{-m/(2m+1)}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad +o(k^{-m/(2m+1)}), \end{array} $$

where \(h(x) = H(x)^{\gamma}\).

Proof of Lemma 1

Under Conditions A–C, for the intermediate order quantile estimator, we obtain

$$ \begin{array}{@{}rcl@{}} &&\tilde{Q}_{Y}(\tau_{j}|x)-Q_{Y}(\tau_{j}|x)-K^{-m}Q(\tau_{j})b(x)\\ &&= \frac{Q(\tau_{j})K^{1/2}}{\sqrt{n(1-\tau_{j})}}\gamma\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}(1+o_{P}(1))\\ &&= Q(\tau_{j})\{n(1-\tau_{j})\}^{-\frac{m}{2m+1}}\gamma\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}(1+o_{P}(1)) \end{array} $$
(11)

because \(K=O(\{n(1-\tau _{j})\}^{1/(2m+1)})\) leads to the optimal order. By the Taylor expansion \(\log (1+x)=x+o(|x|)\) as \(x\rightarrow 0\), the extreme value index (EVI) estimator defined in Eq. 2 can be written as

$$ \begin{array}{@{}rcl@{}} \hat{\gamma}(x)&=& \frac{1}{k-1}\sum\limits_{j=1}^{k-1} \log \frac{Q_{Y}(\tau_{j}|x)}{Q_{Y}(\tau_{k}|x)}+\frac{1}{k-1}\sum\limits_{j=1}^{k-1} \log \frac{\left\{1+\frac{\tilde{Q}_{Y}(\tau_{j}|x)-Q_{Y}(\tau_{j}|x)}{Q_{Y}(\tau_{j}|x)}\right\}}{\left\{1+\frac{\tilde{Q}_{Y}(\tau_{k}|x)-Q_{Y}(\tau_{k}|x)}{Q_{Y}(\tau_{k}|x)}\right\}}\\ &=& \frac{1}{k-1}\sum\limits_{j=1}^{k-1} \log \frac{Q_{Y}(\tau_{j}|x)}{Q_{Y}(\tau_{k}|x)}\\ &&+\frac{1}{k-1}\sum\limits_{j=1}^{k-1} \frac{\tilde{Q}_{Y}(\tau_{j}|x)-Q_{Y}(\tau_{j}|x)}{Q_{Y}(\tau_{j}|x)}(1+o_{P}(1))\\ &&\quad - \frac{\tilde{Q}_{Y}(\tau_{k}|x)-Q_{Y}(\tau_{k}|x)}{Q_{Y}(\tau_{k}|x)}(1+o_{P}(1)). \end{array} $$

From the proof of Theorem 2.3 of Wang et al. (2012), we have

$$ \frac{1}{k-1}\sum\limits_{j=1}^{k-1} \log \frac{Q_{Y}(\tau_{j}|x)}{Q_{Y}(\tau_{k}|x)}=\gamma+O((n/k)^{\max\{\rho^{*},-\gamma\}})+O(k^{-1}n^{\eta} \log(k)). $$

Next, Eq. 11 yields the following:

$$ \begin{array}{@{}rcl@{}} &&\frac{\tilde{Q}_{Y}(\tau_{k}|x)-Q_{Y}(\tau_{k}|x)}{Q_{Y}(\tau_{k}|x)}\\ &&=\{n(1-\tau_{k})\}^{-m/(2m+1)}\frac{Q(\tau_{k})b(x)}{Q_{Y}(\tau_{k}|x)}\\ && \quad +\{n(1-\tau_{k})\}^{-m/(2m+1)}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}+o(k^{-m/(2m+1)})\\ &&=k^{-m/(2m+1)}\frac{b(x)}{h(x)}\\ &&\quad +k^{-m/(2m+1)}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}+o(k^{-m/(2m+1)}). \end{array} $$

Here, we used \(Q(\tau )/Q_{Y}(\tau |x) = \{h(x)\}^{-1}(1+o(1))\) and

$$ n(1-\tau_{k})=n\frac{k+[n^{\eta}]}{n+1}= k\left( 1+ \frac{[n^{\eta}]}{k}\right)\frac{n}{n+1}=k(1+o(1)) $$

from the assumption that \([n^{\eta }]/k\rightarrow 0\).

Finally, we have

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{k-1}\sum\limits_{j=1}^{k-1} \frac{\tilde{Q}_{Y}(\tau_{j}|x)-Q_{Y}(\tau_{j}|x)}{Q_{Y}(\tau_{j}|x)}\\ &&=\frac{1}{k-1}\sum\limits_{j=1}^{k-1} \{n(1-\tau_{j})\}^{-\frac{m}{2m+1}}\frac{Q(\tau_{j})b(x)}{Q_{Y}(\tau_{j}|x)}\\ &&\quad + \frac{1}{k-1}\sum\limits_{j=1}^{k-1} \{n(1-\tau_{j})\}^{-\frac{m}{2m+1}} \frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad \quad + o(k^{-m/(2m+1)})\\ &&=\frac{2m+1}{m+1}k^{-m/(2m+1)} \frac{b(x)}{h(x)}\\ &&\quad +\frac{2m+1}{m+1}k^{-m/(2m+1)} \frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}+o(k^{-m/(2m+1)}). \end{array} $$

Here, we used the fact that

$$ \begin{array}{@{}rcl@{}} && \frac{1}{k-1}\sum\limits_{j=1}^{k-1}\left( n(1-\tau_{j})\right)^{-m/(2m+1)}\\ &&= \frac{1}{k-1}\sum\limits_{j=1}^{k-1}k^{-m/(2m+1)}\left( \frac{j+[n^{\eta}]}{k+1}\right)^{-m/(2m+1)}(1+o(1))\\ &&=k^{-m/(2m+1)} {{\int}_{0}^{1}} u^{-m/(2m+1)}du(1+o(1))\\ &&=\frac{2m+1}{m+1}k^{-m/(2m+1)} (1+o(1)). \end{array} $$
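A quick numerical sanity check of this Riemann-sum approximation (the values of k, m, and the offset playing the role of [n^η] are illustrative):

```python
# Check (1/(k-1)) * sum_j ((j + c)/(k + 1))^(-m/(2m+1)) against (2m+1)/(m+1).
import numpy as np
k, m, c = 20000, 2, 5                 # c plays the role of [n^eta]; assumed values
j = np.arange(1, k)
s = np.mean(((j + c) / (k + 1.0)) ** (-m / (2.0 * m + 1.0)))
print(s, (2 * m + 1) / (m + 1.0))     # roughly 1.65 vs 5/3; they agree as k grows
```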

Consequently, \(\hat {\gamma }(x)\) can be expressed as

$$ \begin{array}{@{}rcl@{}} \hat{\gamma}(x)&=&\gamma+\frac{m}{m+1}k^{-m/(2m+1)}\frac{b(x)}{h(x)}\\ && +\frac{m}{m+1}k^{-m/(2m+1)}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad +O((n/k)^{\max\{\rho^{*},-\gamma\}})+O(k^{-1}n^{\eta} \log(k))+o(k^{-m/(2m+1)})\\ &=& \gamma+\frac{m}{m+1}k^{-m/(2m+1)}\frac{b(x)}{h(x)}\\ && +\frac{m}{m+1}k^{-m/(2m+1)}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad +o(k^{-m/(2m+1)}) \end{array} $$

under the assumptions of Theorem 2, which completes the proof. □
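As a hedged illustration of the Hill-type extreme value index estimator analyzed in Lemma 1, the sketch below averages the log ratios in Eq. 2 for already-computed intermediate quantile estimates at a fixed x, with τj = 1 − (j + [n^η])/(n + 1) as in the proof above; the numerical values of n, k, η, γ, and h(x) in the toy check are assumptions.

```python
# Sketch of the EVI estimator gamma_hat(x) in Eq. 2: a Hill-type average of log
# ratios of fitted intermediate-order quantiles at a fixed covariate value x.
import numpy as np

def evi_estimate(Q_tilde):
    """(1/(k-1)) * sum_{j<k} log(Q[j]/Q[k]) with Q[j] = tilde Q_Y(tau_j | x)."""
    Q_tilde = np.asarray(Q_tilde, dtype=float)
    return np.mean(np.log(Q_tilde[:-1] / Q_tilde[-1]))

# Toy check with exact Pareto-type quantiles Q_Y(tau|x) = h(x) (1 - tau)^(-gamma):
n, k, eta, gamma, hx = 5000, 500, 0.1, 0.3, 2.0        # assumed values
taus = 1.0 - (np.arange(1, k + 1) + int(n ** eta)) / (n + 1.0)
Q_exact = hx * (1.0 - taus) ** (-gamma)
print(evi_estimate(Q_exact))    # about 0.29; approaches gamma = 0.3 as k grows
```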

Proof of Theorem 2

By the definition of the extrapolated estimator, we obtain

$$ \begin{array}{@{}rcl@{}} \frac{\hat{Q}_{Y}(\tau|x)}{Q_{Y}(\tau|x)}&=&\left( \frac{1-\tau_{I}}{1-\tau}\right)^{\gamma}\left( \frac{1-\tau_{I}}{1-\tau}\right)^{\hat{\gamma}(x)-\gamma}\frac{\tilde{Q}_{Y}(\tau_{I}|x)}{Q_{Y}(\tau_{I}|x)}\frac{Q_{Y}(\tau_{I}|x)}{Q_{Y}(\tau|x)} \end{array} $$

for the extreme order quantile τ and the intermediate order quantile τI. Thus, the asymptotic form of \(\hat {Q}_{Y}(\tau |x)\) can be represented by those of \(\hat {\gamma }(x)-\gamma \), \(\tilde {Q}_{Y}(\tau _{I}|x)\) and QY(τI|x)/QY(τ|x). Firstly, from Theorem 1, we have

$$ \begin{array}{@{}rcl@{}} \frac{\tilde{Q}_{Y}(\tau_{I}|x)}{Q_{Y}(\tau_{I}|x)}&=&1+\frac{\tilde{Q}_{Y}(\tau_{I}|x)-Q_{Y}(\tau_{I}|x)}{Q_{Y}(\tau_{I}|x)} \\ &=&1+\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}\frac{b(x)}{h(x)}\\ &&+\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad +o\left( \{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}\right) . \end{array} $$
(12)

Secondly, since QY(τ|x) = UY(1/(1 − τ)|x), the second-order condition on UY(⋅|x) yields

$$ \begin{array}{@{}rcl@{}} &&\frac{U_{Y}(\frac{1}{1-\tau_{I}}|x)}{U_{Y}\left( \frac{1}{1-\tau}|x\right)}\\ &&= \frac{U_{Y}(\frac{1}{1-\tau_{I}}|x)}{U_{Y}\left( \frac{1-\tau_{I}}{1-\tau}\frac{1}{1-\tau_{I}}|x\right)}\\ &&= \left[ \left( \frac{1 - \tau_{I}}{1 - \tau}\right)^{\gamma} \left\{ 1+A^{*}(1/(1-\tau_{I})|x)\frac{\left( \frac{1-\tau_{I}}{1-\tau}\right)^{\rho^{*}} - 1}{\rho^{*}}\right\}+o(A^{*}(1/(1-\tau_{I})|x)) \right]^{-1}\\ &&=\left( \frac{1 - \tau_{I}}{1 - \tau}\right)^{-\gamma}\left\{ 1-A^{*}(1/(1-\tau_{I})|x)\frac{\left( \frac{1-\tau_{I}}{1-\tau}\right)^{\rho^{*}}-1}{\rho^{*}}+o(A^{*}(1/(1-\tau_{I})|x))\right\}. \end{array} $$

Therefore, under \(A^{*}(1/(1-\tau _{I})|x)\leq A^{*}(1/(1-\tau _{k})|x)=A^{*}(n/k|x)(1+o(1))=O((n/k)^{\rho ^{*}})=o(k^{-m/(2m+1)})\), we get

$$ \begin{array}{@{}rcl@{}} \left( \frac{1 - \tau_{I}}{1 - \tau}\right)^{\gamma}\frac{Q_{Y}(\tau_{I}|x)}{Q_{Y}(\tau|x)} &=& 1-A^{*}(1/(1 - \tau_{I})|x)\frac{\left( \frac{1-\tau_{I}}{1-\tau}\right)^{\rho^{*}} - 1}{\rho^{*}}+o(A^{*}(1/(1 - \tau_{I})|x))\\ &=& 1+o(k^{-m/(2m+1)}). \end{array} $$
(13)

Thirdly, from Lemma 1, for a(τ, τI) = (1 − τI)/(1 − τ), we have the following:

$$ \begin{array}{@{}rcl@{}} &&\left( a(\tau,\tau_{I})\right)^{\hat{\gamma}(x)-\gamma}\\ &&=\exp[(\hat{\gamma}(x)-\gamma)\log a(\tau,\tau_{I})]\\ &&=1+ (\hat{\gamma}(x)-\gamma)\log a(\tau,\tau_{I}) +o(k^{-m/(2m+1)}\log(a(\tau,\tau_{I})))\\ &&=1+\frac{m}{m+1}k^{-m/(2m+1)}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\frac{b(x)}{h(x)}\\ &&\quad +\frac{m}{m+1}k^{-m/(2m+1)}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad\quad +o\left( k^{-m/(2m+1)}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\right). \end{array} $$
(14)

Here, we notice that

$$ \begin{array}{@{}rcl@{}} &&\sup_{\tau\in[\tau_{E,1},\tau_{E,2}]}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)o(k^{-m/(2m+1)})\\ &&\leq k^{-m/(2m+1)} \log\left( \frac{1-\tau_{I}}{1-\tau_{E,2}}\right)o(1)\\ &&\leq k^{-m/(2m+1)} \log\left( \frac{1-\tau_{I}}{1-\tau_{E,1}}\right)\frac{\log(n(1-\tau_{I}))-\log(n(1-\tau_{E,2}))}{\log(n(1-\tau_{I}))-\log(n(1-\tau_{E,1}))}o(1)\\ &&=o\left( k^{-m/(2m+1)} \log\left( \frac{1-\tau_{I}}{1-\tau_{E,1}}\right)\right). \end{array} $$

Thus, Eq. 14 holds uniformly for τ ∈ [τE,1, τE,2]. Combining (12), (13) and (14), we obtain

$$ \begin{array}{@{}rcl@{}} &&\frac{\hat{Q}_{Y}(\tau|x)}{Q_{Y}(\tau|x)}\\ &&= \left[ 1+\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}\frac{b(x)}{h(x)}+\{n(1 - \tau_{I})\}^{-\frac{m}{2m+1}}\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\right]\\ &&\times \left[1+o(k^{-m/(2m+1)})\right]\\ &&\quad \times \left[1+\frac{m}{m+1}k^{-m/(2m+1)}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\frac{b(x)}{h(x)}\right.\\ &&\quad \quad \left. +\frac{m}{m+1}k^{-m/(2m+1)}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\right]\\ &=& 1 +\left\{\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}+\frac{m}{m+1}k^{-\frac{m}{2m+1}}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\right\}\frac{b(x)}{h(x)}\\ &&+ \left\{\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}+\frac{m}{m+1}k^{-\frac{m}{2m+1}}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\right\}\\ &&\quad \quad \times \frac{\gamma}{h(x)}\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\boldsymbol{\varepsilon}\\ &&\quad \quad +o(\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}})+o(k^{-\frac{m}{2m+1}}\log\{(1-\tau_{I})/(1-\tau)\}), \end{array} $$
(15)

uniformly for \(\tau \in [\tau _{E,1},\tau _{E,2}]\), which completes the proof of the first assertion. From this, the asymptotic orders of the squared bias and the variance of \(\hat {Q}_{Y}(\tau |x)/Q_{Y}(\tau |x)\) are both given by

$$ O\left( \{n(1-\tau_{I})\}^{-\frac{2m}{2m+1}}\right) +O\left( k^{-\frac{2m}{2m+1}}\log^{2}\left( \frac{1-\tau_{I}}{1-\tau}\right)\right). $$

Thus, we have also proved the second assertion. □
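The identity used at the beginning of this proof rearranges to \(\hat{Q}_{Y}(\tau|x)=\{(1-\tau_{I})/(1-\tau)\}^{\hat{\gamma}(x)}\tilde{Q}_{Y}(\tau_{I}|x)\), i.e., a Weissman-type extrapolation of the intermediate fit. The short sketch below simply codes this relation; the input values (an intermediate quantile estimate and an EVI estimate at x) and the two levels τI and τE are illustrative assumptions.

```python
# Weissman-type extrapolation from an intermediate level tau_I to an extreme level
# tau_E (illustrative; the numerical inputs are assumed, not estimated here).
def extrapolate(Q_tilde_tauI, gamma_hat_x, tau_I, tau_E):
    """hat Q_Y(tau_E|x) = ((1 - tau_I)/(1 - tau_E))^gamma_hat(x) * tilde Q_Y(tau_I|x)."""
    return ((1.0 - tau_I) / (1.0 - tau_E)) ** gamma_hat_x * Q_tilde_tauI

n = 5000
tau_I = 1.0 - 50.0 / n       # intermediate order: n(1 - tau_I) -> infinity
tau_E = 1.0 - 5.0 / n        # extreme order: n(1 - tau_E) stays bounded
print(extrapolate(Q_tilde_tauI=3.2, gamma_hat_x=0.3, tau_I=tau_I, tau_E=tau_E))
```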

Proof of Theorem 3

The width of the confidence band is

$$ 2d_{\alpha}(\tau)||\ell(\tau|x)|| $$

for the intermediate order quantile τ. From Theorem 1, \(\|\boldsymbol{\ell}(\tau|x)\|\) is the asymptotic standard deviation of the estimator \(\tilde{Q}_{Y}(\tau|x)\); hence, we have \(\|\boldsymbol{\ell}(\tau|x)\| = O((1-\tau)^{-\gamma}\{n(1-\tau)\}^{-m/(2m+1)})\) under the assumptions of Theorem 3. What remains is to verify that

$$ d_{\alpha}(\tau)=O(\sqrt{\log(K^{2})})=O(\sqrt{\log[\{n(1-\tau)\}^{2/(2m+1)}]}). $$

The normal distribution is light-tailed, and hence 1 − Φ(dα(τ)) = o(1) as \(n\rightarrow \infty \) and \(\tau \rightarrow 1\). Therefore, dα(τ) is asymptotically equivalent to

$$ \sqrt{\log[2(\alpha\pi)^{-1}\nu(\tau)^{2}]}(1+o(1)), $$

and hence it is sufficient to show that ν(τ) = O(K). By definition,

$$ \begin{array}{@{}rcl@{}} \nu(\tau)&=&{{\int}_{a}^{b}} \left|\left|\frac{d}{dx}\frac{\boldsymbol{\ell}(\tau|x)}{||\boldsymbol{\ell}(\tau|x)||}\right|\right|dx \\ &=&{{\int}_{a}^{b}} \frac{\sqrt{||\boldsymbol{\ell}(\tau|x)||^{2}||\boldsymbol{\ell}^{(1)}(\tau|x)||^{2}-\{\boldsymbol{\ell}(\tau|x)^{T}\boldsymbol{\ell}^{(1)}(\tau|x)\}^{2}}}{||\boldsymbol{\ell}(\tau|x)||^{2}}dx\\ &=&{{\int}_{a}^{b}} \sqrt{\frac{||\boldsymbol{\ell}(\tau|x)||^{2}||\boldsymbol{\ell}^{(1)}(\tau|x)||^{2}-\{\boldsymbol{\ell}(\tau|x)^{T}\boldsymbol{\ell}^{(1)}(\tau|x)\}^{2}}{||\boldsymbol{\ell}(\tau|x)||^{2}||\boldsymbol{\ell}(\tau|x)||^{2}}}dx, \end{array} $$

where \(\boldsymbol{\ell}^{(1)}(\tau|x) = d\boldsymbol{\ell}(\tau|x)/dx\). We rewrite the B-spline vector B(x) as \(\boldsymbol {B}^{[p]}(x)=(B_{1}^{[p]}(x),\cdots ,B_{K+p}^{[p]}(x))^{T}\). By the differential property of the B-spline function, we have

$$ \begin{array}{@{}rcl@{}} \boldsymbol{\ell}^{(1)}(\tau|x)^{T}&=&\frac{Q(\tau)K^{1/2}}{\sqrt{n(1-\tau)}}\frac{d}{dx}\boldsymbol{B}^{[p]}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}\\ &=&\frac{Q(\tau)K^{1/2}}{\sqrt{n(1-\tau)}}K^{*}\boldsymbol{B}^{[p-1]}(x)^{T}D_{1} G(H^{-\gamma})^{-1}G^{1/2}, \end{array} $$

where D1 is the (K + p − 1) × (K + p) first-order difference matrix given by

$$ \begin{array}{@{}rcl@{}} D_{1}= \left[ \begin{array}{ccccc} 1&-1&0&\cdots&0\\ 0&1&-1&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&1&-1 \end{array} \right]. \end{array} $$
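For concreteness, D1 can be generated as follows (a small NumPy illustration; the dimension K + p = 5 is arbitrary here):

```python
# (K + p - 1) x (K + p) first-order difference matrix D_1 shown above (illustration).
import numpy as np

def diff_matrix(dim):
    """Rows (1, -1) shifted along the diagonal: D[i, i] = 1 and D[i, i + 1] = -1."""
    return np.eye(dim - 1, dim) - np.eye(dim - 1, dim, k=1)

print(diff_matrix(5))
```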

Therefore, we obtain

$$ \frac{||\boldsymbol{\ell}^{(1)}(\tau|x)||^{2}}{||\boldsymbol{\ell}(\tau|x)||^{2}}= O\left( K^{2}\right) $$

and

$$ \frac{\boldsymbol{\ell}(\tau|x)^{T}\boldsymbol{\ell}^{(1)}(\tau|x)}{||\boldsymbol{\ell}(\tau|x)||^{2}}= O\left( K^{2}\right). $$

These results prove that ν(τ) = O(K), which completes the proof. □
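To illustrate how ν(τ) and the critical value dα(τ) can be evaluated numerically, the sketch below approximates ν by finite differences of the normalized vector ℓ(τ|x)/||ℓ(τ|x)|| on a grid and then solves a volume-of-tube equation for dα. The vector function used here is only a stand-in (a B-spline basis multiplied by a fixed matrix), and the tube equation is the standard two-sided Sun–Loader approximation (ν/π)exp(−d^2/2) + 2(1 − Φ(d)) = α; both are assumptions made for the illustration rather than the paper's exact quantities.

```python
# Illustrative computation of nu(tau) and a tube-formula critical value d_alpha.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import brentq
from scipy.stats import norm

p, K, a, b = 3, 10, 0.0, 1.0
interior = np.linspace(a, b, K + 2)[1:-1]
knots = np.r_[[a] * (p + 1), interior, [b] * (p + 1)]
nb = len(knots) - p - 1
A = np.linalg.inv(np.eye(nb) + 0.1 * np.ones((nb, nb)))   # assumed stand-in matrix

def ell(x):
    return BSpline(knots, np.eye(nb), p)(np.atleast_1d(x)) @ A   # rows: ell(tau|x)^T

# nu = int_a^b || d/dx ( ell(x) / ||ell(x)|| ) || dx, by finite differences
grid = np.linspace(a + 1e-4, b - 1e-4, 2000)
T = ell(grid)
T /= np.linalg.norm(T, axis=1, keepdims=True)              # normalized ell on the sphere
dT = np.gradient(T, grid, axis=0)                          # numerical derivative in x
speed = np.linalg.norm(dT, axis=1)
nu = float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(grid)))   # trapezoidal rule

alpha = 0.05
tube = lambda d: (nu / np.pi) * np.exp(-d ** 2 / 2) + 2 * norm.sf(d) - alpha
d_alpha = brentq(tube, 1.0, 10.0)
print("nu =", nu, " d_alpha =", d_alpha)
```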

Proof of Theorem 4

To obtain the result of the theorem, we need to show the asymptotic order of

$$ \frac{||\boldsymbol{\ell}^{(1)}(\tau|x)||^{2}}{||\boldsymbol{\ell}(\tau|x)||^{2}} $$

and

$$ \frac{\boldsymbol{\ell}(\tau|x)^{T}\boldsymbol{\ell}^{(1)}(\tau|x)}{||\boldsymbol{\ell}(\tau|x)||^{2}}. $$

Therefore, we consider the derivative of \(\boldsymbol{\ell}(\tau|x)\).

From the proof of Theorem 2, we obtain

$$ \begin{array}{@{}rcl@{}} \frac{\hat{Q}_{Y}(\tau|x)}{Q_{Y}(\tau|x)}&=&\left[1+\frac{\tilde{Q}_{Y}(\tau_{I}|x)-Q_{Y}(\tau_{I}|x)}{Q_{Y}(\tau_{I}|x)}\right]\\ &&\times [1+A^{*}(1/(1-\tau_{I})|x)+o(A^{*}(1/(1-\tau_{I})|x))]\\ &&\quad \times \left[ 1+ (\hat{\gamma}(x)-\gamma)\log\{(1-\tau_{I})/(1-\tau)\}(1+o(1))\right]. \end{array} $$

First, \(A^{*}(t|x)=\gamma d^{*}(x)t^{\rho ^{*}}\); hence, \(d A^{*}(t|x)/d x=O(t^{\rho ^{*}})\). From this, the term involving \(A^{*}\) is of negligible order. Therefore, we omit this term and obtain the following:

$$ \begin{array}{@{}rcl@{}} \hat{Q}_{Y}(\tau|x)-Q_{Y}(\tau|x)&=&\tilde{Q}_{Y}(\tau_{I}|x)-Q_{Y}(\tau_{I}|x)\\ &&+Q_{Y}(\tau|x)(\hat{\gamma}(x)-\gamma)\log\left( \frac{1-\tau_{I}}{1-\tau}\right) +o(k^{-m/(2m+1)}) \end{array} $$

By the result of Theorem 1, we obtain

$$ \tilde{Q}_{Y}(\tau_{I}|x)-Q_{Y}(\tau_{I}|x)-\{n(1-\tau_{I})\}^{-m/(2m+1)}Q(\tau_{I})b(x)=\boldsymbol{\ell}_{1}(\tau_{I}|x)^{T}\boldsymbol{\varepsilon}(1+o_{P}(1)), $$

where

$$ \boldsymbol{\ell}_{1}(\tau_{I}|x)^{T}=\{n(1-\tau_{I})\}^{-\frac{m}{2m+1}}Q(\tau)\gamma\boldsymbol{B}(x)^{T} G(H^{-\gamma})^{-1}G^{1/2}. $$

Similar to the proof of Theorem 3, we have

$$ \boldsymbol{\ell}^{(1)}_{1}(\tau_{I}|x)^{T}=\{n(1-\tau_{I})\}^{-\frac{m-1}{2m+1}}Q(\tau)\gamma\boldsymbol{B}(x)^{T}D_{1} G(H^{-\gamma})^{-1}G^{1/2} $$

because \(K=O(\{n(1-\tau _{I})\}^{1/(2m+1)})\). Similarly, from the result of Lemma 1,

$$ \hat{\gamma}(x)-\gamma-N(n,k,\tau_{I})b(x)=\boldsymbol{\ell}_{2}(\tau|x)^{T}\boldsymbol{\varepsilon}(1+o_{P}(1)), $$

where

$$ \boldsymbol{\ell}_{2}(\tau|x)^{T}=\frac{1}{k-1}\sum\limits_{j=1}^{k-1}\{n(1-\tau_{j})\}^{-\frac{m}{2m+1}}Q(\tau)\gamma \boldsymbol{B}(x)^{T}G(H^{-\gamma})^{-1}G^{1/2}. $$

The derivative of \(\boldsymbol{\ell}_{2}(\tau|x)\) can be calculated as

$$ \begin{array}{@{}rcl@{}} \boldsymbol{\ell}_{2}^{(1)}(\tau|x)^{T}=\frac{1}{k-1}\sum\limits_{j=1}^{k-1}\{n(1-\tau_{j})\}^{-\frac{(m-1)}{2m+1}}Q(\tau)\gamma \boldsymbol{B}(x)^{T}D_{1}G(H^{-\gamma})^{-1}G^{1/2}. \end{array} $$

Then, we obtain

$$ \frac{1}{k-1}\sum\limits_{j=1}^{k-1}\{n(1-\tau_{j})\}^{-\frac{m-1}{2m+1}}=\frac{2m+1}{m+2}k^{-\frac{m-1}{2m+1}}(1+o(1)). $$

Hence,

$$ \boldsymbol{\ell}_{2}^{(1)}(\tau|x)^{T}=\frac{2m+1}{m+2}k^{-\frac{m-1}{2m+1}}Q(\tau)\gamma \boldsymbol{B}(x)^{T}D_{1}G(H^{-\gamma})^{-1}G^{1/2}. $$

Since

$$ \boldsymbol{\ell}(\tau|x)=\boldsymbol{\ell}_{1}(\tau|x)+\boldsymbol{\ell}_{2}(\tau|x), $$

we have

$$ \frac{||\boldsymbol{\ell}^{(1)}(\tau|x)||^{2}}{||\boldsymbol{\ell}(\tau|x)||^{2}}=O\left( \{n(1-\tau_{I})\}^{\frac{1}{2m+1}}\right)+O\left( k^{\frac{1}{2m+1}}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\right) $$

and

$$ \frac{\boldsymbol{\ell}(\tau|x)^{T}\boldsymbol{\ell}^{(1)}(\tau|x)}{||\boldsymbol{\ell}(\tau|x)||^{2}}=O\left( \{n(1-\tau_{I})\}^{\frac{1}{2m+1}}\right)+O\left( k^{\frac{1}{2m+1}}\log\left( \frac{1-\tau_{I}}{1-\tau}\right)\right). $$

The remainder of the proof is similar to that of Theorem 3. Therefore, we have proved Theorem 4. □

Cite this article

Yoshida, T. Simultaneous confidence bands for extremal quantile regression with splines. Extremes 23, 117–149 (2020). https://doi.org/10.1007/s10687-019-00360-4
