Conditional marginal expected shortfall

Abstract

In the context of bivariate random variables \(\left (Y^{(1)},Y^{(2)}\right )\), the marginal expected shortfall, defined as \(\mathbb {E}\left (Y^{(1)}|Y^{(2)} \ge Q_{2}(1-p)\right )\) for p small, where \(Q_{2}\) denotes the quantile function of \(Y^{(2)}\), is an important risk measure, which finds applications in areas such as finance and environmental science. Our paper pioneers the statistical modeling of this risk measure when the random variables of main interest \(\left (Y^{(1)},Y^{(2)}\right )\) are observed together with a random covariate X, leading to the concept of the conditional marginal expected shortfall. The asymptotic behavior of an estimator for this conditional marginal expected shortfall is studied for a wide class of conditional bivariate distributions with heavy-tailed marginal conditional distributions, and where p tends to zero at an intermediate rate. The finite-sample performance is evaluated in a small simulation experiment. The practical applicability of the proposed estimator is illustrated on flood claim data.

References

  • Acharya, V., Pedersen, L., Philippon, T., Richardson, M.: Measuring systemic risk. FRB of Cleveland Working Paper No. 10-02. https://ssrn.com/abstract=1595075 (2010)

  • Aerts, M., Claeskens, G.: Local polynomial estimation in multiparameter likelihood models. J. Amer. Statist. Assoc. 92, 1536–1545 (1997)

  • Artzner, P., Delbaen, F., Eber, J. -M., Heath, D.: Coherent measures of risk. Math. Finance 9, 203–228 (1999)

  • Bai, S., Taqqu, M.S.: Multivariate limit theorems in the context of long-range dependence. J. Time Series Anal. 34, 717–743 (2013)

  • Beirlant, J., Joossens, E., Segers, J.: Second-order refined peaks-over-threshold modelling for heavy-tailed distributions. J. Statist. Plann. Inference 139, 2800–2815 (2009)

  • Cai, J., Li, H.: Conditional tail expectations for multivariate phase-type distributions. J. Appl. Probab. 42, 810–825 (2005)

  • Cai, J.J., Einmahl, J.H.J., de Haan, L., Zhou, C.: Estimation of the marginal expected shortfall: the mean when a related variable is extreme. J. R. Stat. Soc. Ser. B Stat. Methodol. 77, 417–442 (2015)

  • Cai, J.-J., Musta, E.: Estimation of the marginal expected shortfall under asymptotic independence. Scand. J. Stat. 47, 56–83 (2020)

  • Castro, D., de Carvalho, M., Wadsworth, J.L.: Time-varying extreme value dependence with application to leading European stock markets. Ann. Appl. Stat. 12, 283–309 (2018)

  • Daouia, A., Gardes, L., Girard, S.: On kernel smoothing for extremal quantile regression. Bernoulli 19, 2557–2589 (2013)

  • Daouia, A., Gardes, L., Girard, S., Lekina, A.: Kernel estimators of extreme level curves. TEST 20, 311–333 (2011)

  • Das, B., Fasen-Hartmann, V.: Risk contagion under regular variation and asymptotic tail independence. J. Multivariate Anal. 165, 194–215 (2018)

  • Das, B., Fasen-Hartmann, V.: Conditional excess risk measures and multivariate regular variation. Stat. Risk Model. 36, 1–23 (2019)

  • de Carvalho, M.: Statistics of Extremes: Challenges and Opportunities. In: Longin, F. (ed.) Extreme Events in Finance: A Handbook of Extreme Value Theory and Its Applications. Wiley, Hoboken (2016)

  • de Haan, L., Ferreira, A.: Extreme value theory. An introduction. Springer, Berlin (2006)

  • Di Bernardino, E., Prieur, C.: Estimation of the multivariate conditional tail expectation for extreme risk levels: Illustration on environmental data sets. Environmetrics 29(7), 1–22 (2018)

  • Dierckx, G., Goegebeur, Y., Guillou, A.: Local robust and asymptotically unbiased estimation of conditional Pareto-type tails. TEST 23, 330–355 (2014)

  • Escobar-Bach, M., Goegebeur, Y., Guillou, A.: Local robust estimation of the Pickands dependence function. Ann. Statist. 46, 2806–2843 (2018a)

  • Escobar-Bach, M., Goegebeur, Y., Guillou, A.: Local estimation of the conditional stable tail dependence function. Scand. J. Stat. 45, 590–617 (2018b)

  • Fan, J., Gijbels, I.: Local polynomial modelling and its applications. Chapman and Hall (1996)

  • Gardes, L.: Tail dimension reduction for extreme quantile estimation. Extremes 21, 57–95 (2018)

  • Goegebeur, Y., Guillou, A., Osmann, M.: A local moment type estimator for the extreme value index in regression with random covariates. Canadian J. Statist. 42, 487–507 (2014)

  • Goegebeur, Y., Guillou, A., Qin, J.: Bias-corrected estimation for conditional Pareto-type distributions with random right censoring. Extremes 22, 459–498 (2019)

  • Joe, H., Li, H.: Tail risk of multivariate regular variation. Methodol. Comput. Appl. Probab. 13, 671–693 (2011)

  • Landsman, Z., Valdez, E.: Tail conditional expectations for elliptical distributions. N. Am. Actuar. J. 7, 55–71 (2003)

  • Mhalla, L., de Carvalho, M., Chavez-Demoulin, V.: Regression type models for extremal dependence. Scand. J. Stat. 46, 1141–1167 (2019)

  • Nadaraya, E.A.: On estimating regression. Theory Probab. Appl. 9, 141–142 (1964)

  • van der Vaart, A.W.: Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics, vol. 3. Cambridge University Press, Cambridge (1998)

  • van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes, with Applications to Statistics. Springer Series in Statistics. Springer-Verlag, New York (1996)

  • Wand, M., Jones, M.C.: Kernel smoothing. Chapman and Hall (1995)

  • Watson, G.S.: Smooth regression analysis. Sankhya: Ser A 26, 359–372 (1964)

  • Weissman, I.: Estimation of parameters and large quantiles based on the k largest observations. J. Amer. Statist. Assoc. 73, 812–815 (1978)

  • Wellner, J.A.: Empirical Processes: Theory and Applications. Special topics course. https://www.stat.washington.edu/jaw/RESEARCH/TALKS/Delft/emp-proc-delft-big.pdf (2005)

  • Wretman, J.: A simple derivation of the asymptotic distribution of a sample quantile. Scand. J. Stat. 5, 123–124 (1978)

  • Xu, W., Wang, H.J., Li, D.: Extreme quantile estimation based on the tail single-index model. Statist. Sinica. https://doi.org/10.5705/ss.202020.0051 (2022)

  • Yao, Q.: Conditional predictive regions for stochastic processes. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45.2449&rep=rep1&type=pdf (1999)

Author information

Corresponding author

Correspondence to Yuri Goegebeur.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors would like to thank the referees and Associate Editor for their helpful comments. The research of Armelle Guillou was supported by the French National Research Agency under the grant ANR-19-CE40-0013-01/ExtremReg project and an International Emerging Action (IEA-00179). Computation/simulation for the work described in this paper was supported by the DeIC National HPC Centre, SDU.

Appendix A

Lemma 1

Assume \((\mathcal {D})\) and \(({\mathscr{H}})\) and x0 ∈ Int(SX). Let \((t_{n})_{n\geq 1}\) and \((h_{n})_{n\geq 1}\) be arbitrary sequences satisfying \(t_{n} \to \infty \) and \(h_{n} \to 0\) such that \({h}_{n}^{\eta _{\gamma _{j}} \wedge \eta _{\varepsilon _{j}}} \ln t_{n} \to 0\), as \(n \to \infty \), and let 0 ≤ η < 1. Then

$$ \begin{array}{@{}rcl@{}} \left | \frac{t_{n} \overline{F}_{j}(U_{j}(t_{n}/y|x_{0})|x)}{y^{\eta}} -y^{1-\eta} \right | \to 0, \text{ as } n \to \infty, \end{array} $$

uniformly in \(y \in (0,T]\) and \(x \in B(x_{0},h_{n})\).

Lemma 2

Assume \((\mathcal {D})\), \(({\mathscr{H}})\), \((\mathcal {K})\) and \((\mathcal {R})\) with \(x \mapsto R(y_{1},y_{2}|x)\) being a continuous function, and x0 ∈ Int(SX) such that fX(x0) > 0. Consider sequences \(k \to \infty \) and h → 0 as \(n \to \infty \) in such a way that k/n → 0 and \(h^{\eta _{\gamma _{1}}\wedge \eta _{\gamma _{2}} \wedge \eta _{\varepsilon _{1}}\wedge \eta _{\varepsilon _{2}} }\ln n/k \to 0\). Then, as \(n \to \infty \)

$$ \begin{array}{@{}rcl@{}} \mathbb{E} (T_{n}(y_{1},y_{2}|x_{0})) & \to & f_{X}(x_{0}) R(y_{1},y_{2}|x_{0}), \\ kh^{d} \mathbb{V}ar(T_{n}(y_{1},y_{2}|x_{0})) & \to & \|K\|_{2}^{2} f_{X}(x_{0}) R(y_{1},y_{2}|x_{0}). \end{array} $$

The proofs of these lemmas and of all subsequent ones are given in Appendix A.5.

1.1 A.1 Proof of Theorem 1

To prove the result we will make use of empirical process theory with changing function classes; see for instance van der Vaart and Wellner (1996). To this end we start by introducing some notation. Let P be the distribution measure of \(\left (Y^{(1)},Y^{(2)},X\right )\), and denote the expectation under P, its empirical version and the empirical process as follows

$$ \begin{array}{@{}rcl@{}} Pf:=\int fdP,\mathbb{P}_{n}f:=\frac{1}{n}\sum\limits_{i=1}^{n}f\left( {Y}_{i}^{(1)}, {Y}_{i}^{(2)}, X_{i}\right),\mathbb{G}_{n}f:=\sqrt{n}(\mathbb{P}_{n}-P)f, \end{array} $$

for any real-valued measurable function \(f:\mathbb {R}^{2}\times \mathbb {R}^{d}\to \mathbb {R}\). For a function class \(\mathcal {F}\), let \(N_{[]}(\varepsilon , \mathcal {F}, L_{2}(P))\) denote the minimal number of ε-brackets needed to cover \(\mathcal {F}\). The bracketing integral is then defined as

$$ \begin{array}{@{}rcl@{}} J_{[]}(\delta,\mathcal{F},L_{2}(P)) ={\int}_{0}^{\delta} \sqrt{\ln N_{[]}(\varepsilon, \mathcal{F}, L_{2}(P))} d\varepsilon. \end{array} $$

We introduce our sequence of classes \(\mathcal {F}_{n}\) on \(\mathbb {R}^{2}\times \mathbb {R}^{d}\) as

$$ \begin{array}{@{}rcl@{}} \mathcal{F}_{n}&:=&\left\{(u,z)\rightarrow f_{n,y}(u,z), y\in(0,T]^{2}\right\} \end{array} $$

where

Denote also by \(F_{n}\) an envelope function of the class \(\mathcal {F}_{n}\). Now, according to Theorem 19.28 in van der Vaart (1998), the weak convergence of the stochastic process in Eq. 3 follows from the four conditions below. Let \(\rho _{x_{0}}\) be a semimetric, possibly depending on \(x_{0}\), making \((0,T]^{2}\) totally bounded. We have to prove that

along with the pointwise convergence of the covariance function.

Proof of Eq. 9.

Let \(\rho _{x_{0}} (y,\bar y) := |y_{1}-\bar y_{1}|+|y_{2}-\bar y_{2}|\). Denote \(A_{n,y}:= \left \{ \overline {F}_{1}\left (Y^{(1)}|x_{0}\right ) \le (k/n) y_{1}, \overline {F}_{2}\left (Y^{(2)}|x_{0}\right ) \le (k/n) y_{2} \right \} \). We have then

(13)

We consider now three cases. □

Case 1

\(y_{1} \wedge \bar y_{1} \le \delta _{n}\). Assume without loss of generality that \(y_{1} \le \bar y_{1}\). By expanding the square in the above conditional expectation and using the fact that, e.g., \(A_{n,y} \subset \{ \overline {F}_{1}(Y^{(1)}|x_{0}) \le (k/n) y_{1} \}\), we obtain the following inequality

which, after substitution in Eq. 13, leads to

$$ \begin{array}{@{}rcl@{}} &&P(f_{n,y}-f_{n,\bar y})^{2}\\ & \le & 3 \frac{n}{k} {\int}_{S_{K}} K^{2}(v) \frac{P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) y_{1}|X=x_{0}-hv\right)}{y_{1}^{2\eta}}f_{X}(x_{0}-hv) dv \\ & & + \frac{n}{k} {\int}_{S_{K}} K^{2}(v) \frac{P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) \bar y_{1}|X=x_{0}-hv\right)}{\bar {y}_{1}^{2\eta}}f_{X}(x_{0}-hv) dv. \end{array} $$

Now note that

$$ \begin{array}{@{}rcl@{}} P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) y_{1}|X=x_{0}-hv\right) = \overline{F}_{1}\left (U_{1}(n/(ky_{1})|x_{0})|x_{0}-h v \right ), \end{array} $$

which, together with the result of Lemma 1, motivates the following decomposition

$$ \begin{array}{@{}rcl@{}} &&P(f_{n,y}-f_{n,\bar y})^{2}\\ & \le & 3y_{1}^{1-2\eta} {\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv \\ & & + 3 {\int}_{S_{K}} K^{2}(v) \left [ \frac{1}{{y}_{1}^{2\eta}} \frac{n}{k} \overline{F}_{1}\left (U_{1}(n/(ky_{1})|x_{0})|x_{0}-h v \right )-{y}_{1}^{1-2\eta} \right ] f_{X}(x_{0}-hv) dv \\ & & + \bar {y}_{1}^{1-2\eta} {\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv \\ & & + {\int}_{S_{K}} K^{2}(v) \left [ \frac{1}{\bar {y}_{1}^{2\eta}} \frac{n}{k} \overline{F}_{1}\left (U_{1}(n/(k \bar y_{1})|x_{0})|x_{0}-h v \right )-\bar {y}_{1}^{1-2\eta} \right ] f_{X}(x_{0}-hv) dv. \end{array} $$

Using Lemma 1 and the fact that \(\rho _{x_{0}}(y, \bar y)\leq \delta _{n}\), which implies \(\bar y_{1} \le 2 \delta _{n}\), we get

$$ \begin{array}{@{}rcl@{}} P(f_{n,y}-f_{n,\bar y})^{2} & \le & 5 \delta_{n}^{1-2\eta} {\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv +o(1), \end{array} $$

where the o(1) term does not depend on y1 and \(\bar y_{1}\).
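Indeed, spelling out this last step (using only \(y_{1}\le \delta _{n}\), \(\bar y_{1} \le 2\delta _{n}\) and \(1-2\eta >0\)):

$$ \begin{array}{@{}rcl@{}} 3{y}_{1}^{1-2\eta}+\bar{y}_{1}^{1-2\eta} & \le & 3{\delta}_{n}^{1-2\eta}+2^{1-2\eta}{\delta}_{n}^{1-2\eta} \le 5{\delta}_{n}^{1-2\eta}, \end{array} $$

while the two bracketed integrals in the preceding display are o(1), uniformly in y1 and \(\bar y_{1}\), by Lemma 1.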

Case 2

\(y_{1} \wedge \bar y_{1} > \delta _{n}\) and \(y_{2} \wedge \bar y_{2} \le \delta _{n}\). Assume without loss of generality that \(y_{2} \le \bar y_{2}\). Similarly to the approach followed in Case 1, we obtain

and thus

$$ \begin{array}{@{}rcl@{}} &&{\kern0pt}P(f_{n,y}-f_{n,\bar{y}})^{2}\\ & \le & \frac{3 y_{2}}{\left( y_{1} \wedge \bar y_{1}\right)^{2 \eta}} {\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv \\ & & + \frac{3{y}_{2}^{2\eta}}{(y_{1} \wedge \bar y_{1})^{2 \eta}}{\int}_{S_{K}} K^{2}(v) \left [ \frac{1}{{y}_{2}^{2\eta}} \frac{n}{k} \overline{F}_{2}\left (U_{2}(n/(ky_{2})|x_{0})|x_{0}-h v \right )-{y}_{2}^{1-2\eta} \right ] f_{X}(x_{0}-hv) dv \\ & & + \frac{\bar y_{2}}{(y_{1} \wedge \bar y_{1})^{2 \eta}} {\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv \\ & & + \frac{\bar{y}_{2}^{2\eta}}{(y_{1} \wedge \bar y_{1})^{2 \eta}}{\int}_{S_{K}} K^{2}(v) \left [ \frac{1}{\bar{y}_{2}^{2\eta}} \frac{n}{k} \overline{F}_{2}\left (U_{2}(n/(k\bar y_{2})|x_{0})|x_{0}-h v \right )-\bar{y}_{2}^{1-2\eta} \right ] f_{X}(x_{0}-hv) dv. \end{array} $$

Again by Lemma 1 and using that \(\bar y_{2} \le 2 \delta _{n}\) we have that

$$ \begin{array}{@{}rcl@{}} P(f_{n,y}-f_{n,\bar y})^{2} & \le & 5 \delta_{n}^{1-2\eta} {\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv +o(1), \end{array} $$

where the o(1) term does not depend on y2 and \(\bar y_{2}\).

Case 3

\(y_{1} \wedge \bar y_{1} > \delta _{n}\) and \(y_{2} \wedge \bar y_{2} > \delta _{n}\). Let \(y \vee \bar y\) denote the vector with the component-wise maxima of y and \(\bar y\), and similarly \(y \wedge \bar y\) is the vector with the component-wise minima of y and \(\bar y\). Then

Note that

(14)

which leads to

$$ \begin{array}{@{}rcl@{}} &&{}\left.{P(f_{n,y}-f_{n,\bar{y}})^{2}}\right. \\ &&\le\left. \frac{\left( y_{1}^{\eta}-\bar y_{1}^{\eta}\right)^{2}}{(y_{1} \bar y_{1})^{2\eta}} \frac{n}{k} {\int}_{S_{K}}K^{2}(v) P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right)\le (k/n) y_{1} \wedge \bar y_{1},\overline{F}_{2}\left( Y^{(2)}|x_{0}\right)\le (k/n) y_{2} \wedge \bar y_{2}\right|X=x_{0}-h v\right) \\ & & \times f_{X}(x_{0}-hv) dv \\ &&\left.\left. +\frac{1}{(y_{1} \wedge \bar y_{1})^{2\eta}} \frac{n}{k} {\int}_{S_{K}}K^{2}(v) \left[P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right)\le (k/n) y_{1} \vee \bar y_{1},\overline{F}_{2}\left( Y^{(2)}|x_{0}\right)\le (k/n) y_{2} \vee \bar y_{2}\right|X=x_{0}-h v\right) \right.\right. \\ && \left.\left.- P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right)\le (k/n) y_{1} \wedge \bar y_{1},\overline{F}_{2}\left( Y^{(2)}|x_{0}\right)\le (k/n) y_{2} \wedge \bar y_{2}\right|X=x_{0}-h v\right)\right] f_{X}(x_{0}-hv) dv \\ & &=: Q_{1,n}+Q_{2,n}. \end{array} $$

As for Q1,n, we easily obtain

$$ \begin{array}{@{}rcl@{}} Q_{1,n} \le \frac{\left( y_{1}^{\eta}-\bar y_{1}^{\eta}\right)^{2}}{(y_{1} \bar y_{1})^{2\eta}} {\int}_{S_{K}}K^{2}(v) \frac{n}{k} \overline{F}_{1}\left (U_{1}(n/(k y_{1}\!\wedge\! \bar y_{1})|x_{0})|x_{0} - h v \right )f_{X}(x_{0}-hv) dv. \end{array} $$
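For the next bound we will use the following elementary consequence of the mean value theorem (spelled out here as an intermediate step): for 0 ≤ η < 1 and \(y_{1}, \bar y_{1} >0\),

$$ \begin{array}{@{}rcl@{}} \frac{\left( {y}_{1}^{\eta}-\bar{y}_{1}^{\eta}\right)^{2}}{(y_{1} \bar y_{1})^{2\eta}} & \le & \eta^{2} \frac{(y_{1} \wedge \bar y_{1})^{2\eta-2}(y_{1}-\bar y_{1})^{2}}{(y_{1} \wedge \bar y_{1})^{4\eta}} \le (y_{1} \wedge \bar y_{1})^{-2-2\eta}(y_{1}-\bar y_{1})^{2}, \end{array} $$

since \(|{y}_{1}^{\eta}-\bar{y}_{1}^{\eta}| \le \eta (y_{1} \wedge \bar y_{1})^{\eta-1}|y_{1}-\bar y_{1}|\) and \((y_{1} \bar y_{1})^{2\eta} \ge (y_{1} \wedge \bar y_{1})^{4\eta}\).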

Now, by the mean value theorem, applied to \(\left (y_{1}^{\eta }-\bar y_{1}^{\eta }\right )^{2}\), and a decomposition motivated by Lemma 1,

$$ \begin{array}{@{}rcl@{}} &&{}Q_{1,n} \\ &\le & (y_{1} \wedge \bar y_{1})^{-1-2 \eta} (y_{1}-\bar y_{1})^{2} {\int}_{S_{K}}K^{2}(v)f_{X}(x_{0}-hv) dv \\ &&\left.+ (y_{1} \wedge \bar y_{1})^{-2} (y_{1}-\bar y_{1})^{2} {\int}_{S_{K}}K^{2}(v) \left[ \frac{1}{(y_{1} \wedge \bar{y}_{1})^{2\eta}} \frac{n}{k} \overline{F}_{1}\left( \vphantom{{\int}_{S_{K}}} U_{1}(n/(k y_{1} \wedge \bar y_{1})|x_{0})\right|x_{0}-h v \right)\right. \\ &-&\left. (y_{1} \wedge \bar y_{1})^{1-2\eta} \right ] \times f_{X}(x_{0}-hv) dv. \end{array} $$

This then gives

$$ \begin{array}{@{}rcl@{}} Q_{1,n} & \le & {\delta}_{n}^{1-2\eta} {\int}_{S_{K}}K^{2}(v)f_{X}(x_{0}-hv) dv +o(1), \end{array} $$

where the o(1) term does not depend on y1 and \(\bar y_{1}\).

Concerning Q2,n, we have the following inequality

$$ \begin{array}{@{}rcl@{}} &&{}Q_{2,n} \\ &\le&\left. \frac{1}{(y_{1} \wedge \bar y_{1})^{2\eta}} \frac{n}{k} {\int}_{S_{K}}K^{2}(v) P\left( (k/n) y_{1} \wedge \bar y_{1} \le \overline{F}_{1}\left( Y^{(1)}|x_{0}\right)\le (k/n) y_{1} \vee \bar y_{1}\right|X=x_{0}-h v\right) f_{X}(x_{0}-hv) dv\\ & &\left. + \frac{1}{(y_{1} \wedge \bar y_{1})^{2\eta}} \frac{n}{k} {\int}_{S_{K}}K^{2}(v) P\left( (k/n) y_{2} \wedge \bar{y}_{2} \le \overline{F}_{2}\left( Y^{(2)}|x_{0}\right)\le (k/n) y_{2} \vee \bar y_{2}\right|X=x_{0}-h v\right) f_{X}(x_{0}-hv) dv \\ & =:& Q_{2,1,n}+Q_{2,2,n}. \end{array} $$

We only give details about Q2,1,n; the term Q2,2,n can be handled analogously. Direct computations give

$$ \begin{array}{@{}rcl@{}} Q_{2,1,n} = \frac{1}{(y_{1} \wedge \bar y_{1})^{2\eta}} \frac{n}{k} {\int}_{S_{K}}K^{2}(v) {\int}_{U_{1}(n/(k (y_{1} \vee \bar y_{1}))|x_{0})}^{U_{1}(n/(k (y_{1} \wedge \bar y_{1}))|x_{0})}f_{1}(y|x_{0}-hv)dy f_{X}(x_{0}-hv)dv, \end{array} $$

and, after substituting \(u=(n/k) \overline {F}_{1}(y|x_{0})\), we have

$$ \begin{array}{@{}rcl@{}} Q_{2,1,n} =\frac{1}{(y_{1} \wedge \bar y_{1})^{2\eta}} {\int}_{S_{K}}K^{2}(v) {\int}_{y_{1} \wedge \bar y_{1}}^{y_{1} \vee \bar y_{1}} \frac{f_{1}(U_{1}(n/(ku)|x_{0})|x_{0}-hv)}{f_{1}(U_{1}(n/(ku)|x_{0})|x_{0})}du f_{X}(x_{0}-hv)dv. \end{array} $$

Using Eq. 1 and arguments similar to those used in the proof of Lemma 1 one obtains for n large and some small κ > 0,

$$ \begin{array}{@{}rcl@{}} \frac{f_{1}(U_{1}(n/(ku) |x_{0})|x_{0}-h v)}{f_{1}(U_{1}(n/(ku)|x_{0})|x_{0})} \le C u^{-\kappa} , \end{array} $$

where C does not depend on u. Then, for n large enough,

$$ \begin{array}{@{}rcl@{}} Q_{2,1,n} & \le& \frac{C}{(y_{1} \wedge \bar y_{1})^{2\eta}} {\int}_{y_{1} \wedge \bar y_{1}}^{y_{1} \vee \bar y_{1}}u^{-\kappa} du {\int}_{S_{K}} K^{2}(v) f_{X}(x_{0}-hv) dv \\ & \le & \frac{C}{(y_{1} \wedge \bar y_{1})^{2\eta}} (y_{1}\wedge \bar y_{1})^{-\kappa}\left( y_{1} \vee \bar y_{1}-y_{1} \wedge \bar y_{1}\right) \\ & \le & C \delta_{n}^{1-2\eta-\kappa} \\ & =& o(1), \end{array} $$

for a small κ ∈ (0, 1 − 2η).
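The last two inequalities in this chain only use that \(u^{-\kappa} \le (y_{1} \wedge \bar y_{1})^{-\kappa}\) on the integration range, that \(y_{1} \wedge \bar y_{1} > \delta_{n}\), and that \(y_{1} \vee \bar y_{1} - y_{1} \wedge \bar y_{1} = |y_{1}-\bar y_{1}| \le \rho_{x_{0}}(y,\bar y) \le \delta_{n}\); explicitly,

$$ \begin{array}{@{}rcl@{}} \frac{(y_{1} \wedge \bar y_{1})^{-\kappa} |y_{1}-\bar y_{1}|}{(y_{1} \wedge \bar y_{1})^{2\eta}} & \le & {\delta}_{n}^{-2\eta-\kappa} \delta_{n} = {\delta}_{n}^{1-2\eta-\kappa} \longrightarrow 0, \end{array} $$

as \(\delta_{n} \to 0\) and \(1-2\eta-\kappa>0\).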

Combining all the above we have verified Eq. 9.

Proof of Eq. 10. A natural envelope function of the class \({\mathcal {F}}_{n}\) is

This yields

Concerning Q3,n(T) we obtain by direct integration and a slight adjustment of Lemma 1, for large n

$$ \begin{array}{@{}rcl@{}} Q_{3,n}(T) & = & \frac{1}{1-2\eta} \left (\frac{n}{k} \right )^{1-2\eta}{\int}_{S_{K}}K^{2}(v) \left[\overline{F}_{1}(U_{1}(n/(kT) |x_{0})|x_{0}-hv)\right]^{1-2\eta} f_{X}(x_{0}-hv)dv\\ & = & \frac{T^{1-2\eta}}{1-2\eta} {\int}_{S_{K}}K^{2}(v) f_{X}(x_{0}-hv)dv\\ & & + \frac{1}{1-2\eta} {\int}_{S_{K}}K^{2}(v) \left [ \left (\frac{n}{k} \overline{F}_{1}(U_{1}(n/(kT) |x_{0})|x_{0}-hv)\right )^{1-2\eta} -T^{1-2\eta} \right ] f_{X}(x_{0}-hv)dv\\ &\le& C T^{1-2\eta-\kappa}, \end{array} $$
(15)

for κ < 1 − 2η.

Concerning Q4,n(T), combining \((\mathcal {D})\) with \(({\mathscr{H}})\) gives the following bound, for n large and \(y \ge U_{1}(n/(kT)|x_{0})\),

$$ \begin{array}{@{}rcl@{}} \left | \left (\frac{ \overline{F}_{1}(y|x_{0}-hv)}{\overline{F}_{1}(y|x_{0})} \right )^{2\eta}-1 \right | & \le & C_{1} \left (h^{\eta_{A_{1}}}+y^{C_{2} h^{\eta_{\gamma_{1}}}}h^{\eta_{\gamma_{1}}} \ln y + |\delta_{1}(y|x_{0})| h^{\eta_{B_{1}}} \right . \\ & & \left .+ |\delta_{1}(y|x_{0})|y^{C_{3} h^{\eta_{\varepsilon_{1}}}}h^{\eta_{\varepsilon_{1}}} \ln y \right ). \end{array} $$
(16)

Each of the terms on the right-hand side of the above inequality now needs to be used in Q4,n(T), leading to the terms Q4,j,n(T), j = 1,…, 4, studied below. First

$$ \begin{array}{@{}rcl@{}} &&{\kern15pt}Q_{4,1,n}(T):=\\ &&h^{\eta_{A_{1}}} \left (\frac{n}{k} \right )^{1-2\eta} {\int}_{S_{K}}K^{2}(v) {\int}_{U_{1}(n/(kT)|x_{0})}^{\infty} \frac{1}{(\overline{F}_{1}(y|x_{0}-hv))^{2\eta}}dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv. \end{array} $$

This term is clearly of smaller order than Q3,n(T) studied above and hence Q4,1,n(T) = O(1).

For the second term in the right-hand side of Eq. 16 we need to study

$$ \begin{array}{@{}rcl@{}} &&Q_{4,2,n}(T):=\\ &&h^{\eta_{\gamma_{1}}} \left (\frac{n}{k} \right )^{1-2\eta} {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(T)}^{\infty} y^{\xi_{1,n}}\ln y \frac{1}{(\overline{F}_{1}(y|x_{0}-hv))^{2\eta}}dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv \end{array} $$

where tn(T) := U1(n/(kT)|x0) and \(\xi _{1,n}:=C_{2} h^{\eta _{\gamma _{1}}}\). Let \(p_{n}(y):=\xi _{1,n} y^{\xi _{1,n}-1}\ln y+y^{\xi _{1,n}-1}\). Applying integration by parts on the inner integral gives, for n large enough,

$$ \begin{array}{@{}rcl@{}} &&Q_{4,2,n}(T)\\ & = & \left (\frac{n}{k} \right )^{1-2\eta} \frac{ h^{\eta_{\gamma_{1}}} \ln ({t_{n}(T)}) [t_{n}(T)]^{\xi_{1,n}}}{1-2\eta} {\int}_{S_{K}}K^{2}(v) \left[\overline{F}_{1}(t_{n}(T)|x_{0}-hv)\right]^{1-2\eta}f_{X}(x_{0}-hv)dv \\ & & + \left (\frac{n}{k} \right )^{1-2\eta} \frac{h^{\eta_{\gamma_{1}}}}{1-2\eta} {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(T)}^{\infty} p_{n}(y) \left[\overline{F}_{1}(y|x_{0}-hv)\right]^{1-2\eta}dyf_{X}(x_{0}-hv)dv \\ & =: & Q_{4,2,1,n}(T)+Q_{4,2,2,n}(T). \end{array} $$

We obtain, for n large enough

$$ \begin{array}{@{}rcl@{}} Q_{4,2,1,n}(T) &\le& C h^{\eta_{\gamma_{1}}} \ln (t_{n}(T)) [t_{n}(T)]^{\xi_{1,n}} T^{1-2\eta-\kappa}\\ &=&O(1), \end{array} $$

since for distributions satisfying \((\mathcal {D})\) one has that

$$ \begin{array}{@{}rcl@{}} U_{1}(y|x_{0}) = (A_{1}(x_{0}))^{\gamma_{1}(x_{0})}y^{\gamma_{1}(x_{0})}(1+a_{1}(y|x_{0})) \end{array} $$
(17)

where |a1(.|x0)| is regularly varying with index equal to − γ1(x0)β1(x0), and by using the fact that \(h^{\eta _{\gamma _{1}}} \ln (n/k)\to 0\) as \(n\to \infty \).
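To make this explicit, Eq. 17 yields, for fixed T and n large,

$$ \begin{array}{@{}rcl@{}} \ln t_{n}(T) & = & \gamma_{1}(x_{0}) \ln \frac{n}{kT} + O(1), \end{array} $$

so that \(h^{\eta_{\gamma_{1}}} \ln t_{n}(T) \le C h^{\eta_{\gamma_{1}}} \ln (n/k) + o(1) \to 0\) and \([t_{n}(T)]^{\xi_{1,n}} = e^{C_{2} h^{\eta_{\gamma_{1}}} \ln t_{n}(T)} \to 1\).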

Now consider Q4,2,2,n(T). We have

$$ \begin{array}{@{}rcl@{}} Q_{4,2,2,n}(T) &=& \frac{h^{\eta_{\gamma_{1}}} T^{1-2\eta}}{1-2\eta} {\int}_{S_{K}} K^{2}(v) {\int}_{t_{n}(T)}^{\infty} p_{n}(y) \left (\frac{\overline{F}_{1}(y|x_{0}-hv)}{\overline{F}_{1}(y|x_{0})} \right )^{1-2\eta}\left (\frac{\overline{F}_{1}(y|x_{0})}{\overline{F}_{1}(t_{n}(T)|x_{0})} \right )^{1-2\eta}dy\\ && \times f_{X}(x_{0}-hv) dv. \end{array} $$

For n large and \(y \ge t_{n}(T)\), with \(\xi _{2,n}=Ch^{\eta _{\varepsilon _{1}}}\),

$$ \begin{array}{@{}rcl@{}} \left (\frac{\overline{F}_{1}(y|x_{0}-hv)}{\overline{F}_{1}(y|x_{0})} \right )^{1-2\eta} \le C y^{\xi_{1,n}}\left (1+y^{\xi_{2,n}}h^{\eta_{\varepsilon_{1}}} \ln y \right ). \end{array} $$

Substituting u = y/tn(T) we get

$$ \begin{array}{@{}rcl@{}} &&Q_{4,2,2,n}(T) \leq C h^{\eta_{\gamma_{1}}} T^{1-2\eta} [t_{n}(T)]^{1+\xi_{1,n}}\\ && \times {\int}_{S_{K}} K^{2}(v) {\int}_{1}^{\infty} p_{n}(t_{n}(T)u) u^{\xi_{1,n}} \left (1+(t_{n}(T)u)^{\xi_{2,n}}h^{\eta_{\varepsilon_{1}}} \ln (t_{n}(T)u) \right )\left (\frac{\overline{F}_{1}(t_{n}(T)u|x_{0})}{\overline{F}_{1}(t_{n}(T)|x_{0})} \right )^{1-2\eta}du \\ &&\times f_{X}(x_{0}-hv) dv. \end{array} $$

Since \(\overline {F}_{1}(.|x_{0})\) is regularly varying, we can apply the Potter bound (see, e.g., de Haan and Ferreira 2006, Proposition B.1.9), and obtain, for n large enough and 0 < δ < 1/γ1(x0)

$$ \begin{array}{@{}rcl@{}} Q_{4,2,2,n}(T) & \le & C h^{\eta_{\gamma_{1}}} T^{1-2\eta} [t_{n}(T)]^{2\xi_{1,n}}{\int}_{S_{K}} K^{2}(v)f_{X}(x_{0}-hv) dv \\ & & \times {\int}_{1}^{\infty} \left (\xi_{1,n}u^{\xi_{1,n}-1}\ln (t_{n}(T))+ \xi_{1,n}u^{\xi_{1,n}-1}\ln u +u^{\xi_{1,n}-1} \right ) u^{\xi_{1,n}- (1/\gamma_{1}(x_{0})-\delta)(1-2\eta)} \\ & & \times \left (1+(t_{n}(T)u)^{\xi_{2,n}}h^{\eta_{\varepsilon_{1}}} \ln (t_{n}(T)u) \right ) du. \end{array} $$

After tedious computations one gets

$$ \begin{array}{@{}rcl@{}} Q_{4,2,2,n}(T) & \le & C T^{1-2\eta} h^{\eta_{\gamma_{1}}} [t_{n}(T)]^{2\xi_{1,n}}\left\{1+h^{\eta_{\gamma_{1}}} \ln (t_{n}(T))+[t_{n}(T)]^{\xi_{2,n}} h^{\eta_{\varepsilon_{1}}}\ln (t_{n}(T)) \right\}\\ &=&O(1), \end{array} $$

by Eq. 17 and the fact that \(h^{\eta _{\gamma _{1}}\wedge \eta _{\varepsilon _{1}}} \ln (n/k)\to 0\) as \(n\to \infty \). Hence, Q4,2,n(T) = O(1).

Finally, the last two terms Q4,3,n(T) and Q4,4,n(T) can be dealt with similarly to the previous two, since

$$ \begin{array}{@{}rcl@{}} Q_{4,3,n}(T) &:=& h^{\eta_{B_{1}}} \left (\frac{n}{k} \right )^{1-2\eta} {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(T)}^{\infty} \frac{|\delta_{1}(y|x_{0})|}{(\overline{F}_{1}(y|x_{0}-hv))^{2\eta}}dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv\\ &\leq& \left( \underset{y\geq t_{n}(T)}{\sup} |\delta_{1}(y|x_{0})|\right) h^{\eta_{B_{1}}} \left (\frac{n}{k} \right )^{1-2\eta}\\ &&\times {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(T)}^{\infty} \frac{1}{(\overline{F}_{1}(y|x_{0}-hv))^{2\eta}}dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv \end{array} $$
(18)

and

$$ \begin{array}{@{}rcl@{}} Q_{4,4,n}(T) &:=& h^{\eta_{\varepsilon_{1}}} \left (\frac{n}{k} \right )^{1-2\eta} {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(T)}^{\infty} \frac{|\delta_{1}(y|x_{0})|y^{\xi_{2,n}}\ln y}{(\overline{F}_{1}(y|x_{0}-hv))^{2\eta}}dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv\\ &\leq& \left( \underset{y\geq t_{n}(T)}{\sup} |\delta_{1}(y|x_{0})|\right) h^{\eta_{\varepsilon_{1}}} \left (\frac{n}{k} \right )^{1-2\eta}\\ && \times {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(T)}^{\infty} \frac{y^{\xi_{2,n}}\ln y}{(\overline{F}_{1}(y|x_{0}-hv))^{2\eta}}dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv. \end{array} $$
(19)

This yields Q4,3,n(T) = O(1) and Q4,4,n(T) = O(1). Combining all these results, we get Eq. 10.

Proof of Eq. 11. To this aim, for any α ∈ (0, 1/η − 2), we have

The terms inside the brackets can be studied similarly to Qj,n(T), j = 3, 4, and thus Eq. 11 is established since \(kh^{d}\to \infty \).

Proof of Eq. 12. Without loss of generality assume T = 1 and consider, for \(a,\theta ,\tilde \theta <1\), the classes

$$ \begin{array}{@{}rcl@{}} \mathcal{F}_{n}^{(1)}(a) & := & \left\{ f_{n,y} \in \mathcal{F}_{n} : y_{1} \le a \right\}, \\ \mathcal{F}_{n}^{(2)}(a) & := & \left\{ f_{n,y} \in \mathcal{F}_{n} : y_{1} > a , y_{2} \le a \right\}, \\ \mathcal{F}_{n}(\ell,m) & :=& \left\{ f_{n,y} \in \mathcal{F}_{n} : \theta^{\ell+1} \le y_{1} \le \theta^{\ell}, \tilde \theta^{m+1} \le y_{2} \le \tilde \theta^{m} \right\}, \end{array} $$

where \(\ell =0,\ldots , \left \lfloor \ln a/\ln \theta \right \rfloor \) and \(m=0,\ldots , \left \lfloor \ln a/\ln \tilde \theta \right \rfloor \). We start by showing that \(\mathcal {F}_{n}^{(1)}(a) \) is an ε −bracket, for n sufficiently large. Clearly

Then

$$ \begin{array}{@{}rcl@{}} P {u}_{1,n}^{2} & = & \left (\frac{n}{k} \right )^{1-2\eta} {\int}_{S_{K}}K^{2}(v) {\int}_{t_{n}(a)}^{\infty} \frac{1}{(\overline{F}_{1}(y|x_{0}))^{2\eta}} dF_{1}(y|x_{0}-hv) f_{X}(x_{0}-hv)dv \\ & = & Q_{3,n}(a)+Q_{4,n}(a), \end{array} $$

using the same decomposition as for \(P{F_{n}^{2}}\). Thus, one can obtain the result from the above analysis of Q3,n(T) and Q4,n(T), taking into account that the various constants involved there do not depend on a.

Concerning Q3,n(a), according to Eq. 15, for n large

$$ \begin{array}{@{}rcl@{}} Q_{3,n}(a)&\le & C a^{1-2 \eta-\kappa}, \end{array} $$

where C does not depend on a. Now, taking \(a = \varepsilon ^{3/(1-2\eta )}\), for n large enough and ε small we have \(|Q_{3,n}(a)|\leq \varepsilon ^{2}\).
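This choice of a can be verified directly, since κ may be taken arbitrarily small, say \(\kappa < (1-2\eta)/3\): then

$$ \begin{array}{@{}rcl@{}} C a^{1-2\eta-\kappa} & = & C \varepsilon^{3-\frac{3\kappa}{1-2\eta}}, \end{array} $$

and the exponent exceeds 2, so the bound is indeed at most \(\varepsilon^{2}\) once ε is small enough.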

Concerning Q4,n(a), we use the same decomposition as for Q4,n(T) based on Eq. 16, which entails that, for n large enough, ε small and some small ζ > 0

$$ \begin{array}{@{}rcl@{}} Q_{4,1,n}(a)&\leq& \varepsilon^{2},\\ Q_{4,2,1,n}(a) &\leq& C h^{\eta_{\gamma_{1}}} \ln (t_{n}(a)) \left[t_{n}(a)\right]^{\xi_{1,n}} a^{1-2\eta-\kappa}\\ &\leq& C(1+|\ln a|)a^{-\zeta} a^{1-2\eta-\kappa} \\ & \leq & C a^{1-2\eta-2\kappa}, \end{array} $$

with C a constant not depending on a, since from Eq. 17 and for n large,

$$ \begin{array}{@{}rcl@{}} h^{\eta_{\gamma_{1}}} \ln t_{n}(a) \le C (1+|\ln a|). \end{array} $$

Also, for n large, and some small ζ > 0

$$ \begin{array}{@{}rcl@{}} Q_{4,2,2,n}(a) & \leq & Ca^{1-2\eta} h^{\eta_{\gamma_{1}}} [t_{n}(a)]^{2\xi_{1,n}}\left\{1+h^{\eta_{\gamma_{1}}} \ln (t_{n}(a)) + [t_{n}(a)]^{\xi_{2,n}} h^{\eta_{\varepsilon_{1}}}\ln(t_{n}(a))\right\}\\ & \leq & C a^{1-2\eta}h^{\eta_{\gamma_{1}}}a^{-\zeta}(1+|\ln a|+a^{-\zeta}(1+|\ln a|)) \\ & \leq & Ca^{1-2\eta-\kappa}, \end{array} $$

where C does not depend on a. Hence, for n large and ε small we obtain \(Q_{4,2,2,n}(a) \leq \varepsilon ^{2}\). Using Eqs. 18 and 19, we also have \(Q_{4,3,n}(a) \leq \varepsilon ^{2}\) and \(Q_{4,4,n}(a) \leq \varepsilon ^{2}\). Combining all the terms we get \(Pu_{1,n}^{2} \leq \varepsilon ^{2}\) for n large.

Next consider \(\mathcal {F}_{n}^{(2)}(a)\). Then

and

$$ \begin{array}{@{}rcl@{}} Pu_{2,n}^{2} & = & \frac{1}{a^{2\eta}} \frac{n}{k} {\int}_{S_{K}} K^{2}(v) \overline{F}_{2}\left( U_{2}\left( \frac{n}{ka}\left|\vphantom{\frac{n}{ka}}\right.x_{0}\right)\left|\vphantom{\frac{n}{ka}}\right.x_{0}-hv\right)f_{X}(x_{0}-hv)dv \\ & \le & \varepsilon^{2}, \end{array} $$

when n is large enough and for ε small.

Finally, we consider \(\mathcal {F}_{n}(\ell ,m)\). We obtain the following bounds

Then

The difference of the indicator functions can be decomposed as in Eq. 14, and subsequent calculations follow arguments similar to those used in the verification of Eq. 9, Case 3. Taking \(\theta = 1 - \varepsilon ^{3}\) and \(\tilde \theta =1-a\) gives, for n large enough and ε small, that \(P(\overline {u}_{n}-\underline {u}_{n})^{2}\le \varepsilon ^{2}\).

Combining the above, for n large and ε small one obtains that the cover number by bracketing is of the order \(\varepsilon ^{-4-3/(1-2\eta )}\), and hence Eq. 12 is satisfied.
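This order can be recovered by a rough count (absorbing constants and the logarithmic factors into an extra power of \(\varepsilon^{-1}\)): with \(a=\varepsilon^{3/(1-2\eta)}\), \(\theta=1-\varepsilon^{3}\) and \(\tilde\theta=1-a\),

$$ \begin{array}{@{}rcl@{}} \left\lfloor \ln a/\ln \theta\right\rfloor +1 & \le & C |\ln \varepsilon| \varepsilon^{-3}, \\ \left\lfloor \ln a/\ln \tilde\theta\right\rfloor +1 & \le & C |\ln \varepsilon| \varepsilon^{-3/(1-2\eta)}, \end{array} $$

so the total number of brackets \(\mathcal{F}_{n}^{(1)}(a)\), \(\mathcal{F}_{n}^{(2)}(a)\) and \(\mathcal{F}_{n}(\ell,m)\) is bounded by a constant times \(|\ln \varepsilon|^{2} \varepsilon^{-3-3/(1-2\eta)} \le \varepsilon^{-4-3/(1-2\eta)}\) for ε small.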

To conclude the proof, we comment on the pointwise convergence of the covariance function, which is given by \(P f_{n,y}f_{n,\bar y}-P f_{n,y} Pf_{n,\bar y}\). We have

as \(n \to \infty \), by the arguments used in the proof of Lemma 2. Also

as \(n \to \infty \). □

1.2 A.2 Proof of Theorem 2

Recall that

We follow the lines of proof of Theorem 1. We introduce the sequence of classes \(\widetilde {\mathcal {F}}_{n}\) on \(\mathbb {R}\times \mathbb {R}^{d}\) as

$$ \widetilde {\mathcal{F}}_{n}:=\left\{(u,z)\to \widetilde f_{n,y}(u,z), y\in (0, T]\right\} $$

where

We have to verify Eqs. 9–12 in the proof of Theorem 1 for the new functions \(\widetilde f_{n,y}\), and with \(\rho _{x_{0}}(y,\bar y):=|y-\bar y|\). Without loss of generality, we may assume that \(y>\overline y\). Thus, we have

with an o(1)-term which is uniform in y and \(\overline y\) by Lemma 1. This yields Eq. 9.

Now, concerning Eq. 10 we can use the following envelope function of the class \(\widetilde {\mathcal {F}}_{n}\)

from which we deduce that

$$ \begin{array}{@{}rcl@{}} P\widetilde {F_{n}^{2}}&=&\frac{n}{k} {\int}_{S_{K}} K^{2}(v) \overline{F}_{2}\left( U_{2}\left( \frac{n}{kT}\left|\vphantom{\frac{n}{kT}}\right.x_{0}\right)\left|\vphantom{\frac{n}{kT}}\right.x_{0}-hv\right) f_{X}(x_{0}-hv) dv=O(1). \end{array} $$

Next Eq. 11 is also a direct consequence of the definition of the envelope since

as soon as \(kh^{d}\to \infty \).

Finally, concerning Eq. 12, again without loss of generality we assume T = 1 and divide [0, 1] into m intervals of length 1/m. Then, for y ∈ [(i − 1)/m,i/m] we have the bounds

from which we deduce that

$$ \begin{array}{@{}rcl@{}} P\left( \underline{u}_{n}-\overline u_{n}\right)^{2}&=&\frac{1}{m} {\int}_{S_{K}} K^{2}(v) f_{X}(x_{0}-hv) dv\\ &&\left.\left.+{\int}_{S_{K}} K^{2}(v) \left[\frac{n}{k}\overline{F}_{2}\left( U_{2}\left( \frac{n}{k} \frac{m}{i}\right|x_{0}\right)\right|x_{0}-hv\right) -\frac{i}{m} \right] f_{X}(x_{0}-hv) dv\\ &&\left.\left.-{\int}_{S_{K}} K^{2}(v) \left[\frac{n}{k}\overline{F}_{2}\left( U_{2}\left( \frac{n}{k} \frac{m}{i-1}\right|x_{0}\right)\right|x_{0}-hv\right) -\frac{i-1}{m} \right] f_{X}(x_{0}-hv) dv\\ &\leq& \varepsilon^{3} {\int}_{S_{K}} K^{2}(v) f_{X}(x_{0}-hv) dv + 2\varepsilon^{3} \end{array} $$

when \(m=\lceil \frac {1}{\varepsilon ^{3}}\rceil \). If ε is small and n large, then \(P\left (\underline {u}_{n}-\overline u_{n}\right )^{2} \leq \varepsilon ^{2}\).

The pointwise convergence of the covariance function can be verified with arguments similar to those used in the proof of Theorem 1.

Consequently

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} \left[T_{n}(\infty, y_{2}|x_{0})-\mathbb{E}(T_{n}(\infty, y_{2}|x_{0}))\right]\leadsto W(\infty, y_{2}), \end{array} $$

in D((0,T]).

Now, remark that

$$ \begin{array}{@{}rcl@{}} \mathbb{E}(T_{n}(\infty, y_{2}|x_{0}))&=&y_{2} f_{X}(x_{0})+O\left( h^{\eta_{f_{X}}}\right)\\ &&\left.+f_{X}(x_{0}) {\int}_{S_{K}} K(v) \left[\frac{n}{k}\overline{F}_{2}\left( U_{2}\left( \frac{n}{ky_{2}}\right|x_{0}\right)\left|\vphantom{\frac{n}{ky_{2}}}\right.x_{0}-hv\right) -y_{2} \right]dv\\ &&\left.\left.+{\int}_{S_{K}} K(v) \left[\frac{n}{k}\overline{F}_{2}\left( U_{2}\left( \frac{n}{ky_{2}}\right|x_{0}\right)\right|x_{0}-hv\right) -y_{2} \right] \left[f_{X}(x_{0}-hv)-f_{X}(x_{0})\right] dv. \end{array} $$

Following the lines of proof of Lemma 1, we deduce that

$$ \begin{array}{@{}rcl@{}} \left.\left.\left.\left.\vphantom{\frac{n}{k}}\right|\frac{n}{ k} \overline{F}_{2}\left( U_{2}\left( \frac{n}{ky_{2}}\right|x_{0}\right)\right|x_{0}-hv\right)-y_{2}\right|\leq C\left\{h^{\eta_{A_{2}}} + h^{\eta_{\gamma_{2}}} \ln \frac{n}{k}+|\delta_{2}(U_{2}(n/k|x_{0})|x_{0})| \left (h^{\eta_{B_{2}}} + h^{\eta_{\varepsilon_{2}}}\ln \frac{n}{k}\right)\right\} \end{array} $$

from which we obtain

$$ \begin{array}{@{}rcl@{}} \mathbb{E}(T_{n}(\infty, y_{2}|x_{0}))& = & y_{2} f_{X}(x_{0})+O\left( h^{\eta_{f_{X}}\wedge\eta_{A_{2}}}\right)+O\left( h^{\eta_{\gamma_{2}}}\ln \frac{n}{ k}\right)+ O \left (|\delta_{2}(U_{2}(n/k|x_{0})|x_{0})|h^{\eta_{B_{2}}} \right ) \\ & & +O \left (|\delta_{2}(U_{2}(n/k|x_{0})|x_{0})| h^{\eta_{\varepsilon_{2}}}\ln \frac{n}{k}\right ) \end{array} $$

with O-terms which are uniform in \(y_{2} \in (0,T]\). This implies that, under the assumptions of Theorem 2, we have

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} \left[T_{n}(\infty, y_{2}|x_{0})-y_{2}f_{X}(x_{0})\right]\leadsto W(\infty, y_{2}), \end{array} $$
(20)

in D((0,T]).

Finally,

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} \left( \frac{T_{n}(\infty, y_{2}|x_{0})}{ \widehat f_{n}(x_{0})}-y_{2}\right)=\sqrt{kh^{d}} \left( \frac{T_{n}(\infty, y_{2}|x_{0})}{ f_{X}(x_{0})}-y_{2}\right) -\frac{T_{n}(\infty, y_{2}|x_{0})}{ \widehat f_{n}(x_{0})f_{X}(x_{0})} \sqrt{\frac{k}{ n}} \sqrt{n h^{d}} \left( \widehat f_{n}(x_{0})-f_{X}(x_{0})\right), \end{array} $$

from which Theorem 2 follows. □

In the sequel, for convenience of representation, all the limiting processes in Theorems 1 and 2 will be defined on the same probability space, via the Skorohod construction, but it should be kept in mind that they are only equal in distribution to the original processes. The Skorohod representation theorem then gives (keeping the same notation)

$$ \begin{array}{@{}rcl@{}} \underset{y_{1},y_{2} \in (0,T]}{\sup} \left | \frac{\sqrt{kh^{d}} \left[ T_{n}(y_{1},y_{2}|x_{0})-\mathbb{E} (T_{n}(y_{1},y_{2}|x_{0})) \right] - W(y_{1},y_{2})}{{y}_{1}^{\eta}} \right | \to 0, \text{ a.s. } \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} \underset{y_{2} \in (0,T]}{\sup} \left | \sqrt{kh^{d}} \left( \frac{T_{n}(\infty, y_{2}|x_{0})}{ \widehat f_{n}(x_{0})}-y_{2}\right) - \frac{W(\infty, y_{2})}{ f_{X}(x_{0})} \right | \to 0, \text{ a.s. .} \end{array} $$

1.3 A.3 Convergence result for an auxiliary statistic

In this section we give a convergence result for an auxiliary statistic. In particular, we generalize \(\widetilde \theta _{n}\) to \(\widetilde \theta _{n}(y_{2})\), defined as

Assuming F1(y|x0) strictly increasing in y, we have

$$ \begin{array}{@{}rcl@{}} \left.\widetilde \theta_{n}(y_{2})=-U_{1}\left( \frac{n}{k}\right|x_{0}\right){\int}_{0}^{\infty} T_{n}(s_{n}(u), y_{2}|x_{0}) du^{-\gamma_{1}(x_{0})}. \end{array} $$

As motivation for studying \(\widetilde \theta _{n}(y_{2})\), note that \(\widehat \theta _{n} = \widetilde \theta _{n}(\widehat e_{n})\), where \(\widehat e_{n} :=\frac {n}{ k} \overline {F}_{2}\left (\widehat u_{n} U_{2}(\frac {n}{ k}|x_{0})|x_{0}\right )\) with \(\widehat u_{n}:=\widehat U_{2}\left (\frac {n}{ k}|x_{0}\right ) / U_{2}\left (\frac {n}{ k}|x_{0}\right )\). To estimate U2(.|x0) we will use \(\widehat U_{2}(.|x_{0}) := \inf \{ y: \widehat F_{n,2}(y|x_{0}) \ge 1-1/. \}\) with

the empirical kernel estimator of the unknown conditional distribution function of Y(2) given X = x0. See for instance Daouia et al. (2011). The asymptotic behavior of the quantile estimator is given in Lemma 6.
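For orientation, a standard kernel-weighted empirical form of such an estimator (a sketch only, in the spirit of the Nadaraya (1964) and Watson (1964) weights and of Daouia et al. (2011); the exact weights used here may differ) is

$$ \begin{array}{@{}rcl@{}} \widehat F_{n,2}(y|x_{0}) & := & \frac{{\sum}_{i=1}^{n} K_{h}(x_{0}-X_{i}) \mathbb{1}\left\{{Y}_{i}^{(2)} \le y\right\}}{{\sum}_{i=1}^{n} K_{h}(x_{0}-X_{i})}, \qquad K_{h}(\cdot):=h^{-d}K(\cdot/h), \end{array} $$

where \(\mathbb{1}\{\cdot\}\) denotes the indicator function.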

Proposition 1

Assume \((\mathcal {D})\), \(({\mathscr{H}})\), \((\mathcal {K})\), \((\mathcal {R})\) with \(x \mapsto R(y_{1},y_{2}|x)\) being a continuous function, x0 ∈ Int(SX) with fX(x0) > 0, and \(y \mapsto F_{j}(y|x_{0})\), j = 1, 2, strictly increasing. Consider sequences \(k\to \infty \) and h → 0 as \(n \to \infty \), in such a way that k/n → 0, \(kh^{d}\to \infty \) and \(h^{\eta _{\gamma _{1}}\wedge \eta _{\gamma _{2}} \wedge \eta _{\varepsilon _{1}}\wedge \eta _{\varepsilon _{2}} }\ln n/k \to 0\). Then, for γ1(x0) < 1/2, we have

$$ \begin{array}{@{}rcl@{}} \underset{\frac{1}{2}\leq y_{2}\leq 2}{\sup}\left|\frac{\sqrt{kh^{d}}}{U_{1}(n/k|x_{0})} \left[\widetilde \theta_{n}(y_{2}) - \mathbb{E}(\widetilde\theta_{n}(y_{2}))\right] + {\int}_{0}^{\infty} W(u, y_{2})du^{-\gamma_{1}(x_{0})}\right| {\overset{\mathbb{P}}{\longrightarrow}} 0. \end{array} $$

Proof of Proposition 1

We use the decomposition

$$ \begin{array}{@{}rcl@{}} \underset{\frac{1}{2} \le y_{2} \le 2}{\sup} \left|\frac{\sqrt{kh^{d}}}{U_{1}(n/k|x_{0})} \left[\widetilde \theta_{n}(y_{2}) - \mathbb{E}(\widetilde\theta_{n}(y_{2}))\right] + {\int}_{0}^{\infty}W(u, y_{2})du^{-\gamma_{1}(x_{0})}\right|\leq I_{1}(T)+\sum\limits_{i=2}^{4} I_{i,n}(T), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} I_{1}(T)&:=& \underset{\frac{1}{2} \le y_{2} \le 2}{\sup}\left|{\int}_{T}^{\infty} W(u, y_{2}) du^{-\gamma_{1}(x_{0})}\right|,\\ I_{2,n}(T)&:=&\underset{\frac{1}{2}\le y_{2} \le 2}{\sup} \left|{\int}_{T}^{\infty} \sqrt{kh^{d}} \left[T_{n}(s_{n}(u), y_{2}|x_{0})-\mathbb{E} \left( T_{n}(s_{n}(u), y_{2}|x_{0})\right)\right]du^{-\gamma_{1}(x_{0})}\right|,\\ I_{3,n}(T)&:=& \underset{\frac{1}{2} \le y_{2} \le 2}{\sup}\left|{{\int}_{0}^{T}} \left\{\sqrt{kh^{d}} \left[T_{n}(s_{n}(u), y_{2}|x_{0})-\mathbb{E}\left( T_{n}(s_{n}(u), y_{2}|x_{0})\right)\right] -W(s_{n}(u), y_{2})\right\} du^{-\gamma_{1}(x_{0})}\right|,\\ I_{4,n}(T)&:=& \underset{\frac{1}{2} \le y_{2} \le 2}{\sup} \left|{{\int}_{0}^{T}} \left[W(s_{n}(u), y_{2})-W(u, y_{2})\right]du^{-\gamma_{1}(x_{0})}\right|. \end{array} $$

Similarly to the proof of Proposition 2 in Cai et al. (2015), it is sufficient to show that for any ε > 0, there exists T0 = T0(ε) such that

$$ \begin{array}{@{}rcl@{}} \mathbb{P} (I_{1}(T_{0})>\varepsilon )<\varepsilon, \end{array} $$
(21)

and n0 = n0(T0) such that, for any n > n0

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(I_{j,n}(T_{0})>\varepsilon )<\varepsilon, \text{ for $j=2, 3$ and 4.} \end{array} $$

Clearly

$$ I_{1}(T)\leq \underset{u\geq T, \frac{1}{ 2}\leq y_{2}\leq 2}{\sup} |W(u, y_{2})| T^{-\gamma_{1}(x_{0})}. $$

Since a rescaled version of our Gaussian process W(.,.) gives the one in Cai et al. (2015), according to their Lemma 2, we have \(\sup _{0<u< \infty , \frac {1}{ 2}\leq y_{2}\leq 2} |W(u, y_{2})|<\infty \) with probability one. This implies that there exists T1 = T1(ε) such that

$$ \mathbb{P}\left( \underset{0<u< \infty, \frac{1}{ 2}\leq y_{2}\leq 2}{\sup} |W(u, y_{2})|>{T}_{1}^{\gamma_{1}(x_{0})}\varepsilon\right)<\varepsilon, $$

from which we deduce that, for any T > T1

$$ \mathbb{P}\left( I_{1}(T)>\varepsilon\right)\leq \mathbb{P}\left( \underset{0<u<\infty, \frac{1}{ 2}\leq y_{2}\leq 2}{\sup} |W(u, y_{2})|>{T}_{1}^{\gamma_{1}(x_{0})}\varepsilon\right)<\varepsilon. $$

Consequently Eq. 21 holds for T0 > T1.

We continue with the term I2,n(T). We have

Consider the class of functions

with \(y_{1} \ge T\) and \(1/2 \le y_{2} \le 2\), and with envelope function

This class of functions satisfies the conditions of Theorem 7.3 in Wellner (2005) with \(\sigma ^{2} = O(kh^{d}/n)\) and \(P{G_{n}^{2}} = O(kh^{d}/n)\) for n large, and thus, for some constant C,

$$ \begin{array}{@{}rcl@{}} \mathbb{P}\left( I_{2,n}(T)>\varepsilon\right) \le \frac{C}{\varepsilon T^{\gamma_{1}(x_{0})}} \end{array} $$

for n large enough. We have then that for every ε there is a T = T(ε) such that for n large enough

$$ \begin{array}{@{}rcl@{}} \mathbb{P}\left( I_{2,n}(T)>\varepsilon\right) \le \varepsilon. \end{array} $$

Now, to study I3,n(T), remark that for any T > 0 there exists \(n_{1} = n_{1}(T)\) such that \(s_{n}(T) < T + 1\) for all \(n > n_{1}\). Hence for n > n1 and any \(\eta _{0}\in \left (\gamma _{1}(x_{0}), 1/2\right ):\)

$$ \begin{array}{@{}rcl@{}} \mathbb{P}\left( I_{3,n}(T)>\varepsilon\right)&\leq& \mathbb{P}\left( \underset{0<y_{1}\leq T+1, \frac{1}{ 2}\leq y_{2}\leq 2}{\sup} \left|\frac{\sqrt{kh^{d}} [T_{n}(y_{1},y_{2}|x_{0})-\mathbb{E}(T_{n}(y_{1},y_{2}|x_{0}))]-W(y_{1}, y_{2})}{{y}_{1}^{\eta_{0}}}\right| \right. \\ &&\left. \times \left|{{\int}_{0}^{T}} [s_{n}(u)]^{\eta_{0}} du^{-\gamma_{1}(x_{0})}\right|>\varepsilon\right). \end{array} $$

According to Lemma 3 in Cai et al. (2015)

$$ \left|{{\int}_{0}^{T}} [s_{n}(u)]^{\eta_{0}} du^{-\gamma_{1}(x_{0})}\right|\longrightarrow \frac{\gamma_{1}(x_{0})}{ \eta_{0}-\gamma_{1}(x_{0})} T^{\eta_{0}-\gamma_{1}(x_{0})}, $$

which, combined with our Theorem 1 and the Skorohod construction, entails that there exists n2(T) > n1(T) such that for all n > n2(T), \(\mathbb {P}(I_{3,n}(T)>\varepsilon )<\varepsilon \).

Finally, concerning I4,n(T), we first remark that according to Lemma 2 in Cai et al. (2015), we have for η0 ∈ (γ1(x0), 1/2) and any T > 0, with probability one,

$$ \underset{0<y_{1} \le T, \frac{1}{ 2}\leq y_{2}\leq 2}{\sup} \frac{|W(y_{1}, y_{2})| }{{y}_{1}^{\eta_{0}}}<\infty. $$

Then, applying Lemma 3 in Cai et al. (2015) with S = T,S0 = T + 1 and g = W, we deduce that there exists n3(T) such that for n > n3(T) we have \(\mathbb {P}(I_{4,n}(T)>\varepsilon )<\varepsilon \).

This completes the proof of Proposition 1. □

In order to prove Theorem 3 we need some auxiliary results. Define, for u > 0 and \(v \in S_{K}\),

$$ \begin{array}{@{}rcl@{}} \widetilde{s}_{n}(u)&:=&\frac{n}{k} \overline{F}_{1}\left( u^{-\gamma_{1}(x_{0})} U_{1}\left( \frac{n}{k}\left|\vphantom{\left( \frac{n}{k}\left|x_{0}\right.\right)}x_{0}\right.\right)\left.\vphantom{\left( \frac{n}{k}\left|x_{0}\right.\right)}\right|x_{0}-hv\right),\\ t_{n}(y_{2})&:=&\frac{n}{k} \overline{F}_{2}\left( U_{2}\left( \frac{n}{ky_{2}}\left|\vphantom{\frac{n}{ky_{2}}}x_{0}\right.\right)\left|x_{0}\vphantom{\frac{n}{ky_{2}}}\right.-hv\right). \end{array} $$

Lemma 3

Assume \((\mathcal {D})\) and \(({\mathscr{H}})\) and x0 ∈ Int(SX). Consider sequences \(k\to \infty \) and h → 0 as \(n\to \infty \), in such a way that k/n → 0 and \(h^{\eta _{\varepsilon _{1}}\wedge \eta _{\gamma _{1}}} \ln \frac {n}{ k}\to 0\). Then we have, for any \(u\leq T_{n}\), where \(T_{n}\to \infty \) is such that kTn/n → 0, and for 0 < ε < β1(x0), that

$$ \begin{array}{@{}rcl@{}} \left|\vphantom{\frac{n}{k}}\widetilde{s}_{n}\right.(u)-u\left|\vphantom{\frac{n}{k}}\right.&\leq& C u\left\{h^{\eta_{A_{1}}} + h^{\eta_{\gamma_{1}}} \ln\frac{n}{k} + h^{\eta_{\gamma_{1}}}|\ln u| u^{\pm Ch^{\eta_{\gamma_{1}}}}\right.\\ &&+ \left|\delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right. x_{0}\right)\right|\left[1+u^{\pm Ch^{\eta_{\gamma_{1}}}}h^{\eta_{\gamma_{1}}}|\ln u|\right] \\ & & \times \left[ u^{\gamma_{1}(x_{0})\beta_{1}(x_{0})}\left (1+u^{\pm \gamma_{1}(x_{0}) \varepsilon} \right ) \left (h^{\eta_{B_{1}}} +u^{-Ch^{\eta_{\varepsilon_{1}}}}h^{\eta_{\varepsilon_{1}}} \left (|\ln u| +\ln \frac{n}{k} \right ) \right ) \right . \\ && + \left . \left . u^{\gamma_{1}(x_{0})(\beta_{1}(x_{0})\pm \varepsilon)} + \left | u^{\gamma_{1}(x_{0})\beta_{1}(x_{0})} -1 \right | \right] \right\}, \end{array} $$

where \(u^{\pm \bullet }\) means \(u^{\bullet }\) if u is greater than 1, and \(u^{-\bullet }\) if u is smaller than 1.

Lemma 4

Assume \((\mathcal {D})\), \(({\mathscr{H}})\), γ1(x0) < 1 and x0 ∈ Int(SX). For sequences \(k=\left \lfloor n^{\alpha } \ell _{1}(n)\right \rfloor \) and \(h = n^{-{{\varDelta}}}\ell _{2}(n)\), where \(\ell _{1}\) and \(\ell _{2}\) are slowly varying functions at infinity, with α ∈ (0, 1) and

$$ \begin{array}{@{}rcl@{}} &&\max\left( \frac{\alpha}{d+2\gamma_{1}(x_{0})(\eta_{A_{1}}\wedge \eta_{\gamma_{1}})}, \frac{\alpha}{d+2(1-\gamma_{1}(x_{0}))(\eta_{A_{2}}\wedge \eta_{\gamma_{2}}\wedge \eta_{B_{2}}\wedge \eta_{\varepsilon_{2}})}, \right.\\ && \left.\frac{\alpha}{ d}-\frac{2(1-\alpha){\gamma_{1}^{2}}(x_{0})\beta_{1}(x_{0})}{d+d(\beta_{1}(x_{0})+\varepsilon)\gamma_{1}(x_{0})}, \frac{\alpha-2(1-\alpha)\gamma_{1}(x_{0})}{d}\right)<{{\varDelta}} <\frac{\alpha }{ d}, \end{array} $$

one has that

$$ \underset{v \in S_{K}}{\sup} \underset{\frac{1}{2}\leq y_{2}\leq 2}{\sup} \sqrt{kh^{d}} \left|{\int}_{0}^{\infty} \left[R\left( \widetilde s_{n}(u), t_{n}(y_{2})|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\longrightarrow 0 $$

and

$$ \underset{\frac{1}{2}\leq y_{2}\leq 2}{\sup} \sqrt{kh^{d}} \left|{\int}_{0}^{\infty} \left[R\left( s_{n}(u), y_{2}|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\longrightarrow 0. $$

Lemma 5

Assume \((\mathcal {D})\), \(({\mathscr{H}})\), \((\mathcal {K})\), x0 ∈ Int(SX) with fX(x0) > 0 and yF2(y|x0) is strictly increasing. Consider sequences \(k\to \infty \) and h → 0 as \(n\to \infty \), in such a way that k/n → 0, \(kh^{d}\to \infty \), \( h^{\eta _{\varepsilon _{2}}}\ln n/k \to 0\), \(\sqrt {kh^{d}} h^{\eta _{f_{X}}\wedge \eta _{A_{2}}}\to 0\), \(\sqrt {kh^{d}} h^{\eta _{\gamma _{2}}}\ln n/k\to 0\), \(\sqrt {kh^{d}} |\delta _{2}(U_{2}(n/k|x_{0})|x_{0})| h^{\eta _{B_{2}}}\to 0\), and

\(\sqrt {kh^{d}} |\delta _{2}(U_{2}(n/k|x_{0})|x_{0})| h^{\eta _{\varepsilon _{2}}}\ln n/k \to 0\). Then, for any sequence un satisfying

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} \left (\frac{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})} -1\right ) \rightarrow c \in \mathbb{R}, \end{array} $$

as \(n \to \infty \), we have

$$ \begin{array}{@{}rcl@{}} \sqrt{nh^{d}\overline{F}_{2}(u_{n}|x_{0})}\left( \frac{\widehat{\overline{F}}_{n,2}(u_{n}|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})}-1 \right) \leadsto \frac{W(\infty, 1)}{ f_{X}(x_{0})}. \end{array} $$

Lemma 6

Assume \((\mathcal {D})\), \(({\mathscr{H}})\), \((\mathcal {K})\), x0 ∈ Int(SX) with fX(x0) > 0 and yF2(y|x0) is strictly increasing. Consider sequences \(k\to \infty \) and h → 0 as \(n\to \infty \), in such a way that k/n → 0, \(kh^{d}\to \infty \), \( h^{\eta _{\varepsilon _{2}}}\ln n/k \to 0\), \(\sqrt {kh^{d}} h^{\eta _{f_{X}}\wedge \eta _{A_{2}}}\to 0\), \(\sqrt {kh^{d}} h^{\eta _{\gamma _{2}}}\ln n/k\to 0\), \(\sqrt {kh^{d}} |\delta _{2}(U_{2}(n/k|x_{0})|x_{0})|\to 0\). Then, as \(n \to \infty \), we have

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} \left (\widehat u_{n} -1 \right ) \leadsto \frac{\gamma_{2}(x_{0}) W(\infty,1)}{f_{X}(x_{0})}. \end{array} $$

1.4 A.4 Proof of Theorem 3

Let \(\mathbb {E}_{n}(y) := \mathbb {E}\left (\widetilde \theta _{n}(y)/U_{1}(n/k|x_{0})\right )\). We have the following decomposition:

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} \left( \frac{\widehat \theta_{n} }{ f_{X}(x_{0})\theta_{k/n}} - 1\right)&=& \frac{U_{1}(n/k|x_{0}) }{ \theta_{k/n}} \frac{\sqrt{kh^{d}} }{ f_{X}(x_{0})} \left( \frac{\widehat \theta_{n}}{U_{1}(n/k|x_{0})} - \mathbb{E}_{n}(1)\right)\\ &&+\frac{U_{1}(n/k|x_{0})}{\theta_{k/n}} \frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \mathbb{E}_{n}(1) - \frac{f_{X}(x_{0})\theta_{k/n}}{U_{1}(n/k|x_{0}) }\right)\\ &=& \frac{U_{1}(n/k|x_{0})}{\theta_{k/n}} \frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \frac{\widetilde \theta_{n}(\widehat e_{n})}{U_{1}(n/k|x_{0})} - \mathbb{E}_{n}(\widehat e_{n})\right)\\ &&+\frac{U_{1}(n/k|x_{0})}{\theta_{k/n}} \frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \mathbb E_{n}(\widehat e_{n}) - \mathbb E_{n}(1)\right)\\ &&+\frac{U_{1}(n/k|x_{0})}{\theta_{k/n}} \frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \mathbb E_{n}(1) - \frac{f_{X}(x_{0})\theta_{k/n}}{U_{1}(n/k|x_{0}) }\right)\\ &=:& T_{1} + T_{2} + T_{3}. \end{array} $$

First, remark that the common factor of the three terms, \(U_{1}(n/k|x_{0})/\theta _{k/n}\), can be handled in a similar way as in Proposition 1 in Cai et al. (2015), i.e., as \(n\to \infty \)

$$ \frac{U_{1}(n/k|x_{0})}{\theta_{k/n}} \longrightarrow {-1\over {\int}_{0}^{\infty} R(s, 1|x_{0}) ds^{-\gamma_{1}(x_{0})}}. $$

Thus the three terms without this factor need to be studied.

We start with T1. Note that

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} (\widehat e_{n}-1) = -\frac{f_{2}(\tilde u_{n}U_{2}(n/k|x_{0})|x_{0})U_{2}(n/k|x_{0})}{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}\sqrt{kh^{d}} (\widehat u_{n}-1), \end{array} $$

where \(\tilde u_{n}\) is a random value between \(\widehat u_{n}\) and 1. By the continuous mapping theorem we have then

$$ \begin{array}{@{}rcl@{}} \frac{f_{2}(\tilde u_{n}U_{2}(n/k|x_{0})|x_{0})U_{2}(n/k|x_{0})}{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})} \stackrel{\mathbb P}{\rightarrow} \frac{1}{\gamma_{2}(x_{0})}, \end{array} $$

and hence by Lemma 6

$$ \begin{array}{@{}rcl@{}} \sqrt{kh^{d}} (\widehat e_{n}-1) \leadsto -W(\infty,1)/f_{X}(x_{0}). \end{array} $$
(22)

This implies that

$$ \mathbb P\left( |\widehat e_{n}-1|>(kh^{d})^{-1/4}\right) \to 0. $$

Hence, with probability tending to one,

$$ \begin{array}{@{}rcl@{}} &&\left|\frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \frac{\widetilde \theta_{n}(\widehat e_{n})}{U_{1}(n/k|x_{0})} - \mathbb E_{n}(\widehat e_{n})\right) +\frac{1}{f_{X}(x_{0})} {\int}_{0}^{\infty} W(s, 1) ds^{-\gamma_{1}(x_{0})}\right|\\ &\leq& \underset{|y-1|\leq (kh^{d})^{-1/4}}{\sup} \left|\frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \frac{\widetilde \theta_{n}(y)}{U_{1}(n/k|x_{0})} - \mathbb E_{n}(y)\right) +\frac{1}{f_{X}(x_{0})} {\int}_{0}^{\infty} W(s, y) ds^{-\gamma_{1}(x_{0})}\right|\\ &&+ \frac{1}{f_{X}(x_{0})} \underset{|y-1|\leq (kh^{d})^{-1/4}}{\sup} \left|{\int}_{0}^{\infty} [W(s, y)-W(s,1)] ds^{-\gamma_{1}(x_{0})}\right|. \end{array} $$

The first term on the right-hand side tends to 0 in probability by our Proposition 1, whereas the second term can be handled similarly to the proof of Proposition 3 in Cai et al. (2015). Consequently

$$ \begin{array}{@{}rcl@{}} T_{1} \leadsto \frac{1}{{\int}_{0}^{\infty} R(s, 1|x_{0}) ds^{-\gamma_{1}(x_{0})}}\frac{1}{f_{X}(x_{0})} {\int}_{0}^{\infty} W(s, 1) ds^{-\gamma_{1}(x_{0})}. \end{array} $$
(23)

The next step consists of studying T2. To this end, remark that for y equal to either 1 or \(\widehat e_{n}\), we have

$$ \begin{array}{@{}rcl@{}} &&{}{\int}_{0}^{\infty} \mathbb E\left( T_{n}(s_{n}(u), y|x_{0})\right)du^{-\gamma_{1}(x_{0})}\\ &=&{\int}_{0}^{\infty} {\int}_{S_{K}} K(v) R_{\frac{n}{k}}\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}-hv\right) f_{X}(x_{0}-hv)dv du^{-\gamma_{1}(x_{0})}\\ &=&{\int}_{0}^{\infty} {\int}_{S_{K}} K(v) R\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}\right) f_{X}(x_{0}-hv)dv du^{-\gamma_{1}(x_{0})}\\ &&+{\int}_{0}^{\infty} {\int}_{S_{K}} K(v) \left[R_{\frac{n}{k}}\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}-hv\right) - R\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}\right)\right] f_{X}(x_{0}-hv)dv du^{-\gamma_{1}(x_{0})}\\ &=& {\int}_{0}^{\infty} R\left( u, y|x_{0}\right) du^{-\gamma_{1}(x_{0})} {\int}_{S_{K}} K(v)f_{X}(x_{0}-hv) dv\\ &&+{\int}_{S_{K}} K(v) {\int}_{0}^{\infty} \left[R\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}\right) - R(u, y|x_{0})\right] du^{-\gamma_{1}(x_{0})} f_{X}(x_{0}-hv)dv\\ &&+ {\int}_{0}^{\infty} {\int}_{S_{K}} K(v) \left[R_{\frac{n}{k}}\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}-hv\right) - R\left( \widetilde s_{n}(u), t_{n}(y)|x_{0}\right)\right] f_{X}(x_{0}-hv)dv du^{-\gamma_{1}(x_{0})} \\ & =:& \widetilde T_{2,1}+\widetilde T_{2,2}+\widetilde T_{2,3}. \end{array} $$

By Lemma 4 and Assumptions \((\mathcal {S})\) and \(({\mathscr{H}})\), we obtain

$$ \begin{array}{@{}rcl@{}} \widetilde{T}_{2,1 }&=&f_{X}(x_{0}) {\int}_{0}^{\infty} R\left( u, y|x_{0}\right) du^{-\gamma_{1}(x_{0})}+O_{\mathbb{P}}\left( h^{\eta_{f_{X}}}\right), \\ \widetilde T_{2,2}&=& o_{\mathbb{P}}\left( \frac{1}{\sqrt{kh^{d}}}\right),\\ | \widetilde{T}_{2,3} | & \le & - \underset{x\in B(x_{0}, h)}{\sup}\underset{0<y_{1}<\infty, \frac{1}{2}\leq y_{2}\leq 2}{\sup} \frac{|R_{n/k}(y_{1},y_{2}|x)-R(y_{1}, y_{2}|x_{0})|}{{y}_{1}^{\beta} \wedge 1} \\ & & \times {\int}_{S_{K}} K(v){\int}_{0}^{\infty} \left( [\widetilde s_{n}(u)]^{\beta}\wedge 1\right) du^{-\gamma_{1}(x_{0})} f_{X}(x_{0}-hv)dv \\ & = & O_{\mathbb P}\left( \left( \frac{n}{k}\right)^{\tau}\right). \end{array} $$

Note that the integral appearing in the bound for \(|\widetilde T_{2,3}|\) is finite for n large, as \(\widetilde s_{n}(u) \le C u^{1-\xi }\) for \(u \in (0, 1/2]\), \(\xi \in (0, (\beta -\gamma _{1}(x_{0}))/\beta )\) and n large. Consequently, under our assumptions and using the homogeneity of the R-function and the mean value theorem combined with Eq. 22, we have

$$ \begin{array}{@{}rcl@{}} &&\frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \mathbb{E}_{n}(\widehat e_{n}) - \mathbb{E}_{n}(1)\right)\\ &=&\frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( {\int}_{0}^{\infty} \mathbb E\left( T_{n}(s_{n}(u), 1|x_{0})\right)du^{-\gamma_{1}(x_{0})}- {\int}_{0}^{\infty} \mathbb E\left( T_{n}(s_{n}(u), \widehat e_{n}|x_{0})\right)du^{-\gamma_{1}(x_{0})}\right)\\ &=&\sqrt{kh^{d}} \left( {\int}_{0}^{\infty} R\left( u, 1|x_{0}\right) du^{-\gamma_{1}(x_{0})} - {\int}_{0}^{\infty} R\left( u, \widehat e_{n}|x_{0}\right) du^{-\gamma_{1}(x_{0})}\right)+o_{\mathbb{P}}(1)\\ &=&\sqrt{kh^{d}} \left( 1-\widehat e_{n}^{1-\gamma_{1}(x_{0})}\right){\int}_{0}^{\infty} R\left( u, 1|x_{0}\right) du^{-\gamma_{1}(x_{0})}+o_{\mathbb{P}}(1)\\ & \leadsto & (1-\gamma_{1}(x_{0})) \frac{W(\infty, 1)}{f_{X}(x_{0})} {\int}_{0}^{\infty} R\left( u, 1|x_{0}\right) du^{-\gamma_{1}(x_{0})}. \end{array} $$

This implies that

$$ \begin{array}{@{}rcl@{}} T_{2} \leadsto -(1-\gamma_{1}(x_{0})) \frac{W(\infty, 1)}{f_{X}(x_{0})}. \end{array} $$
(24)

Finally, for T3 we have,

$$ \begin{array}{@{}rcl@{}} &&\frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( \mathbb E_{n}(1) - \frac{f_{X}(x_{0})\theta_{k/n}}{U_{1}(n/k|x_{0}) }\right)\\ &=&\frac{\sqrt{kh^{d}}}{f_{X}(x_{0})} \left( -{\int}_{0}^{\infty} \mathbb E(T_{n}(s_{n}(u), 1|x_{0})) du^{-\gamma_{1}(x_{0})} - \frac{f_{X}(x_{0})\theta_{k/n}}{U_{1}(n/k|x_{0}) }\right)\\ &=&\sqrt{kh^{d}} {\int}_{0}^{\infty} \left[R_{n/k}(s_{n}(u), 1|x_{0})-R(u,1|x_{0})\right] du^{-\gamma_{1}(x_{0})} + o(1)\\ &=&\sqrt{kh^{d}} {\int}_{0}^{\infty} \left[R_{n/k}(s_{n}(u), 1|x_{0})-R(s_{n}(u),1|x_{0})\right] du^{-\gamma_{1}(x_{0})}\\ &&+\sqrt{kh^{d}} {\int}_{0}^{\infty} \left[R(s_{n}(u), 1|x_{0})-R(u,1|x_{0})\right] du^{-\gamma_{1}(x_{0})}+ o (1)\\ & =: & \widetilde T_{3,1}+\widetilde T_{3,2}+o(1), \\ \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} |\widetilde T_{3,1}| &\leq& \sqrt{kh^{d}} \underset{x\in B(x_{0}, h)}{\sup}\underset{0<y_{1}<\infty, \frac{1}{2}\leq y_{2}\leq 2}{\sup} \frac{|R_{n/k}(y_{1},y_{2}|x)-R(y_{1}, y_{2}|x_{0})|}{{y}_{1}^{\beta} \wedge 1}\\ &&\times \left|{\int}_{0}^{\infty} \left( [ s_{n}(u)]^{\beta}\wedge 1\right) du^{-\gamma_{1}(x_{0})}\right|\\ &=&O\left( \sqrt{kh^{d}}\left( \frac{n}{k}\right)^{\tau}\right), \\ \widetilde T_{3,2} & = & o(1). \end{array} $$

Overall, we have then

$$ \begin{array}{@{}rcl@{}} T_{3}=o(1). \end{array} $$
(25)

Combining Eqs. 23, 24 and 25, and following the argument at the end of the proof of Theorem 2, we can establish the result of Theorem 3.

A.5 Proofs of the auxiliary results

Proof of Lemma 1

First note that, by continuity of y↦Fj(y|x), we have \(\overline {F}_{j}(U_{j}(t_{n}/y|x_{0})|x_{0}) = y/t_{n}\), and hence

$$ \begin{array}{@{}rcl@{}} t_{n}\overline{F}_{j}(U_{j}(t_{n}/y|x_{0})|x) = y \frac{\overline{F}_{j}(U_{j}(t_{n}/y|x_{0})|x)}{\overline{F}_{j}(U_{j}(t_{n}/y|x_{0})|x_{0})}. \end{array} $$

Then, from condition \((\mathcal {D})\), and a straightforward decomposition,

$$ \begin{array}{@{}rcl@{}} \lefteqn{\left | \frac{t_{n} \overline{F}_{j}(U_{j}(t_{n}/y|x_{0})|x)}{y^{\eta}} -y^{1-\eta} \right | } \\ &\le & y^{1-\eta} \left \{ \left | \frac{A_{j}(x)}{A_{j}(x_{0})}-1 \right | (U_{j}(t_{n}/y|x_{0}))^{1/\gamma_{j}(x_{0})-1/\gamma_{j}(x)}\frac{1+\frac{1}{\gamma_{j}(x)} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x)}{1+\frac{1}{\gamma_{j}(x_{0})} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x_{0})} \right . \\ & & + \left | (U_{j}(t_{n}/y|x_{0}))^{1/\gamma_{j}(x_{0})-1/\gamma_{j}(x)}-1 \right | \frac{1+\frac{1}{\gamma_{j}(x)} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x)}{1+\frac{1}{\gamma_{j}(x_{0})} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x_{0})} \\ & & + \left . \left | \frac{1+\frac{1}{\gamma_{j}(x)} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x)}{1+\frac{1}{\gamma_{j}(x_{0})} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x_{0})}-1 \right | \right \}. \end{array} $$

Each of the absolute differences on the right-hand side of the above display can be handled by condition \((\mathcal H)\). Obviously, for some constant C,

$$ \begin{array}{@{}rcl@{}} \left | \frac{A_{j}(x)}{A_{j}(x_{0})}-1 \right | \le C h_{n}^{\eta_{A_{j}}}, \text{ for } x \in B(x_{0},h_{n}). \end{array} $$

Next, using the inequality \(|e^{z} - 1|\le e^{|z|}|z|\), we have, for some constant C (not necessarily equal to the one introduced above), and x ∈ B(x0,hn),

$$ \begin{array}{@{}rcl@{}} \left | (U_{j}(t_{n}/y|x_{0}))^{1/\gamma_{j}(x_{0})-1/\gamma_{j}(x)}-1 \right | \le e^{Ch_{n}^{\eta_{\gamma_{j}}} \ln U_{j}(t_{n}/y|x_{0})} Ch_{n}^{\eta_{\gamma_{j}}} \ln U_{j}(t_{n}/y|x_{0}). \end{array} $$

For distributions satisfying \((\mathcal D)\), one easily verifies that

$$ U_{j}(t_{n}|x_{0}) = (A_{j}(x_{0}))^{\gamma_{j}(x_{0})}t_{n}^{\gamma_{j}(x_{0})}(1+a_{j}(t_{n}|x_{0})) $$

where \(|a_{j}(\cdot |x_{0})|\) is regularly varying with index equal to − γj(x0)βj(x0). Hence, for some constants C1 and C2, not depending on y, one gets for x ∈ B(x0,hn) and n large,

$$ \begin{array}{@{}rcl@{}} \left | (U_{j}(t_{n}/y|x_{0}))^{1/\gamma_{j}(x_{0})-1/\gamma_{j}(x)}-1 \right | \le C_{1}t_{n}^{C_{2}h_{n}^{\eta_{\gamma_{j}}}}y^{-C_{2}h_{n}^{\eta_{\gamma_{j}}}}\left ({h}_{n}^{\eta_{\gamma_{j}}} \ln t_{n} - {h}_{n}^{\eta_{\gamma_{j}}} \ln y \right ). \end{array} $$
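For completeness, we sketch where the expansion of \(U_{j}(t_{n}|x_{0})\) used above comes from, writing condition \((\mathcal D)\) in the Hall-type form \(\overline F_{j}(y|x)=A_{j}(x)y^{-1/\gamma_{j}(x)}\left(1+\frac{1}{\gamma_{j}(x)}\delta_{j}(y|x)\right)\), which is the form exploited in the ratio computations in the proof of Lemma 3 below. Evaluating this form at the quantile gives

$$ \frac{1}{t} = \overline{F}_{j}(U_{j}(t|x_{0})|x_{0}) = A_{j}(x_{0}) (U_{j}(t|x_{0}))^{-1/\gamma_{j}(x_{0})}\left( 1+\frac{1}{\gamma_{j}(x_{0})}\delta_{j}(U_{j}(t|x_{0})|x_{0})\right), $$

and solving for \(U_{j}(t|x_{0})\) yields

$$ U_{j}(t|x_{0}) = (A_{j}(x_{0}))^{\gamma_{j}(x_{0})} t^{\gamma_{j}(x_{0})} \left( 1+\frac{1}{\gamma_{j}(x_{0})}\delta_{j}(U_{j}(t|x_{0})|x_{0})\right)^{\gamma_{j}(x_{0})}, $$

i.e. \(1+a_{j}(t|x_{0})=\left(1+\frac{1}{\gamma_{j}(x_{0})}\delta_{j}(U_{j}(t|x_{0})|x_{0})\right)^{\gamma_{j}(x_{0})}\). Since \(|\delta_{j}(\cdot|x_{0})|\) is regularly varying with index − βj(x0) and \(U_{j}(\cdot|x_{0})\) with index γj(x0), \(|a_{j}(\cdot|x_{0})|\), which behaves like \(|\delta_{j}(U_{j}(\cdot|x_{0})|x_{0})|\), is regularly varying with index − γj(x0)βj(x0), as stated above.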

Finally, for n large,

$$ \begin{array}{@{}rcl@{}} \lefteqn{\left | \frac{1+\frac{1}{\gamma_{j}(x)} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x)}{1+\frac{1}{\gamma_{j}(x_{0})} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x_{0})}-1 \right | } \\ &\le & C |\delta_{j}(U_{j}(t_{n}/y|x_{0})|x_{0})| \left \{ \left | \frac{\delta_{j}(U_{j}(t_{n}/y|x_{0})|x)}{\delta_{j}(U_{j}(t_{n}/y|x_{0})|x_{0})} -1 \right | + \left | \frac{1}{\gamma_{j}(x)}-\frac{1}{\gamma_{j}(x_{0})} \right | \right \}. \end{array} $$

By the assumptions on δj we obtain

$$ \begin{array}{@{}rcl@{}} \left | \frac{\delta_{j}(U_{j}(t_{n}/y|x_{0})|x)}{\delta_{j}(U_{j}(t_{n}/y|x_{0})|x_{0})} -1 \right | & \le & \left | \frac{B_{j}(x)}{B_{j}(x_{0})}-1 \right | e^{{\int}_{1}^{U_{j}(t_{n}/y|x_{0})} \frac{\varepsilon_{j}(u|x)-\varepsilon_{j}(u|x_{0})}{u} du} \\ & & + \left | e^{{\int}_{1}^{U_{j}(t_{n}/y|x_{0})} \frac{\varepsilon_{j}(u|x)-\varepsilon_{j}(u|x_{0})}{u} du}-1 \right |, \end{array} $$

and, hence, using \((\mathcal H)\), for x ∈ B(x0,hn) and n large,

$$ \begin{array}{@{}rcl@{}} \lefteqn{\left | \frac{1+\frac{1}{\gamma_{j}(x)} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x)}{1+\frac{1}{\gamma_{j}(x_{0})} \delta_{j}(U_{j}(t_{n}/y|x_{0}) |x_{0})}-1 \right |} \\ & & \le C_{1} \left [ {h}_{n}^{\eta_{\gamma_{j}} \wedge \eta_{B_{j}}} +t_{n}^{C_{2}h_{n}^{\eta_{\varepsilon_{j}}}}y^{-C_{2}h_{n}^{\eta_{\varepsilon_{j}}}}\left (h_{n}^{\eta_{\varepsilon_{j}}} \ln t_{n} - h_{n}^{\eta_{\varepsilon_{j}}} \ln y \right ) \right ]. \end{array} $$

Combining the above results establishes the lemma. □

Proof of Lemma 2

We have

$$ \begin{array}{@{}rcl@{}} \mathbb E\left( T_{n}(y_{1},y_{2}|x_{0})\right) &=& {\int}_{S_{K}} K(v) \frac{n}{k} \mathbb P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) y_{1}, \overline{F}_{2}\left( Y^{(2)}|x_{0}\right) \le (k/n) y_{2} |X=x_{0}-hv\right) f_{X}(x_{0}-hv) dv\\ &=& {\int}_{S_{K}} K(v) R(y_{1},y_{2}|x_{0}-hv) f_{X}(x_{0}-hv) dv\\ &&+{\int}_{S_{K}} K(v) \left[\frac{n}{k} \mathbb P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) y_{1}, \overline{F}_{2}\left( Y^{(2)}|x_{0}\right) \le (k/n) y_{2} |X=x_{0}-hv\right) - R(y_{1},y_{2}|x_{0}-hv)\right] f_{X}(x_{0}-hv) dv\\ &=:& T_{1,n}+T_{2,n}. \end{array} $$

Concerning T1,n, by the continuity of fX(x) and R(y1,y2|x) at x0, we have that fX and R are bounded in a neighborhood of x0, and hence, by Lebesgue’s dominated convergence theorem

$$ \begin{array}{@{}rcl@{}} T_{1,n} \to f_{X}(x_{0})R(y_{1},y_{2}|x_{0}), \text{ as } n \to \infty. \end{array} $$

As for T2,n,

$$ \begin{array}{@{}rcl@{}} |T_{2,n}| & \le & \underset{v \in S_{K}}{\sup} \left | \frac{n}{k} \mathbb P\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) y_{1}, \overline{F}_{2}\left( Y^{(2)}|x_{0}\right) \le (k/n) y_{2} |X=x_{0}-hv\right) \right . \\ & & -R(y_{1},y_{2}|x_{0}-hv) \left | {\int}_{S_{K}} K(v) f_{X}(x_{0}-hv) dv,\right. \end{array} $$

and note that

$$ \begin{array}{@{}rcl@{}} &&{}{\mathbb{P}\left( \overline{F}_{1}\left( Y^{(1)}|x_{0}\right) \le (k/n) y_{1}, \overline{F}_{2}\left( Y^{(2)}|x_{0}\right) \le (k/n) y_{2} |X=x_{0}-hv\right) } \\ & = & \mathbb P \left (\overline{F}_{1}\left( Y^{(1)}|x_{0}-hv\right) \le \frac{k}{n} \frac{n}{k}\overline{F}_{1}(U_{1}(n/(ky_{1})|x_{0})|x_{0}-hv), \right . \\ & & \left . \overline{F}_{2}\left( Y^{(2)}|x_{0}-hv\right) \le \frac{k}{n} \frac{n}{k}\overline{F}_{2}(U_{2}(n/(ky_{2})|x_{0})|x_{0}-hv) | X=x_{0}-hv \right ). \end{array} $$

Then, by the result of Lemma 1 and the uniformity of the convergence in Assumption \((\mathcal R)\), we have that T2,n → 0 as \(n \to \infty \).

Now, consider the variance. We have

from which the result follows. □

Proof of Lemma 3

Using Assumption \((\mathcal D)\), we have

$$ \begin{array}{@{}rcl@{}} \widetilde s_{n}(u)&=& \frac{\overline{F}_{1}\left( u^{-\gamma_{1}(x_{0})} U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}-hv\right)}{\overline{F}_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)}\\ &=&\frac{A_{1}(x_{0}-hv)}{A_{1}(x_{0})} \left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right)^{\frac{1}{\gamma_{1}(x_{0})}-\frac{1}{\gamma_{1}(x_{0}-hv)}} u^{\frac{\gamma_{1}(x_{0})}{\gamma_{1}(x_{0}-hv)}} \\ &&\times \frac{1+\frac{1}{\gamma_{1}(x_{0}-hv)} \delta_{1}\left( u^{-\gamma_{1}(x_{0})} U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}-hv\right) }{ 1+\frac{1}{\gamma_{1}(x_{0})} \delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)}. \end{array} $$

This implies that

$$ \begin{array}{@{}rcl@{}} &&\left|\widetilde s_n(u)-u^{\frac{\gamma_1(x_0)}{\gamma_1(x_0-hv)}}\right| \\ &\leq& u^{\frac{\gamma_1(x_0)}{\gamma_1(x_0-hv)}} \left\{\left|\frac{A_1(x_0-hv)}{A_1(x_0)}-1\right| \left( U_1\left( \frac{n}{k}\left|x_0\right.\right)\right)^{\frac{1}{\gamma_1(x_0)}-\frac{1}{\gamma_1(x_0-hv)}}\right.\\ &&\times\left|\frac{1+\frac{1}{\gamma_1(x_0-hv)} \delta_1\left( u^{-\gamma_1(x_0)} U_1\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}x_0\right.\right)\left.\vphantom{\frac{n}{k}}\right|x_0-hv\right)}{1+\frac{1}{\gamma_1(x_0)} \delta_1\left( U_1\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}x_0\right.\right)\left.\vphantom{\frac{n}{k}}\right|\vphantom{\frac{n}{k}}x_0\right)}\right|\\ &&+\left|\left( U_1\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_0\right)\right)^{\frac{1}{\gamma_1(x_0)}-\frac{1}{\gamma_1(x_0-hv)}}-1\right|\\ &&\times\left|\frac{1+\frac{1}{\gamma_1(x_0-hv)} \delta_1\left( u^{-\gamma_1(x_0)} U_1\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}x_0\right.\right)\left.\vphantom{\frac{n}{k}}\right|x_0-hv\right)}{1+\frac{1}{\gamma_1(x_0)} \delta_1\left( U_1\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}x_0\right.\right)\left.\vphantom{\frac{n}{k}}\right|x_0\right)}\right|\\ &&\left.+\left|\frac{1+\frac{1}{\gamma_1(x_0-hv)} \delta_1\left( u^{-\gamma_1(x_0)} U_1\left( \frac{n}{k}\left|x_0\right.\right)\left.\vphantom{\frac{n}{k}}\right|x_0-hv\right)}{1+\frac{1}{\gamma_1(x_0)} \delta_1\left( U_1\left( \frac{n}{k}\left|x_0\right.\right)\left.\vphantom{\frac{n}{k}}\right|x_0\right)}-1\right|\right\}\\ &=:&u^{\frac{\gamma_1(x_0)}{\gamma_1(x_0-hv)}} \{T_1+T_2+T_3\}. \end{array} $$

Using Assumption \((\mathcal H)\) and the inequality \(|e^{x} - 1|\le |x|e^{|x|}\), we deduce that, for n large,

$$ \begin{array}{@{}rcl@{}} |\frac{A_{1}(x_{0}-hv)}{A_{1}(x_{0})}-1 | & \leq & C h^{\eta_{A_{1}}} \end{array} $$
(26)
$$ \begin{array}{@{}rcl@{}} | \left( U_{1}\left( \frac{n}{k}|x_{0}\right)\right)^{\frac{1}{\gamma_{1}(x_{0})}-\frac{1}{\gamma_{1}(x_{0}-hv)}}-1 | &\leq & C h^{\eta_{\gamma_{1}}} \ln\frac{n}{k}. \end{array} $$
(27)
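For completeness, Eq. 27 follows from the same exponential inequality as in the proof of Lemma 1; here is a brief sketch, using that \(\ln U_{1}(n/k|x_{0}) = \gamma_{1}(x_{0})\ln (n/k)(1+o(1))\) by the expansion of U1 recorded there, and assuming, as the bound itself implicitly requires, that \(h^{\eta_{\gamma_{1}}}\ln (n/k)\) stays bounded:

$$ \begin{array}{@{}rcl@{}} \left| \left( U_{1}\left( \frac{n}{k}|x_{0}\right)\right)^{\frac{1}{\gamma_{1}(x_{0})}-\frac{1}{\gamma_{1}(x_{0}-hv)}}-1 \right| & \le & e^{\left|\frac{1}{\gamma_{1}(x_{0})}-\frac{1}{\gamma_{1}(x_{0}-hv)}\right| \ln U_{1}\left( \frac{n}{k}|x_{0}\right)} \left|\frac{1}{\gamma_{1}(x_{0})}-\frac{1}{\gamma_{1}(x_{0}-hv)}\right| \ln U_{1}\left( \frac{n}{k}|x_{0}\right)\\ & \le & C h^{\eta_{\gamma_{1}}} \ln\frac{n}{k}, \end{array} $$

for n large, where \(\left|\frac{1}{\gamma_{1}(x_{0})}-\frac{1}{\gamma_{1}(x_{0}-hv)}\right| \le C h^{\eta_{\gamma_{1}}}\) follows from \((\mathcal H)\) together with the fact that γ1 is bounded away from zero in a neighborhood of x0.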

Now, direct computations yield, for n large,

$$ \begin{array}{@{}rcl@{}} T_{3}& \leq & C \left|\delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right| \left\{\left|\frac{\gamma_{1}(x_{0})}{\gamma_{1}(x_{0}-hv)}-1\right| \right . \\ & & \left. \times \left| \delta_{1}\left( u^{-\gamma_{1}(x_{0})} U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}-hv\right) \over \delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right|\right.\\ &&\left.+\left|\frac{\delta_{1}\left( u^{-\gamma_{1}(x_{0})} U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}-hv\right)}{\delta_{1}\left( u^{-\gamma_{1}(x_{0})}U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)}\frac{\delta_{1}\left( u^{-\gamma_{1}(x_{0})} U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)}{\delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)} -1\right|\right\}. \end{array} $$
(28)

Using the assumed form for δ1(y|x), \((\mathcal H)\), and the uniform bound from Proposition B.1.10 in de Haan and Ferreira (2006) with 0 < ε < β1(x0), we obtain, for n large, that

$$ \begin{array}{@{}rcl@{}} T_{3} & \le & C \left|\delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right| \left\{ h^{\eta_{\gamma_{1}}} + u^{\gamma_{1}(x_{0})\beta_{1}(x_{0})} \left( 1+u^{\pm \gamma_{1}(x_{0}) \varepsilon} \right)\right. \\ && \times \left[ h^{\eta_{B_{1}}} +u^{-Ch^{\eta_{\varepsilon_{1}}}}h^{\eta_{\varepsilon_{1}}} \left( |\ln u| +\ln \frac{n}{k} \right)\right] \\ && \left. + u^{\gamma_{1}(x_{0})(\beta_{1}(x_{0})\pm \varepsilon)} + \left| u^{\gamma_{1}(x_{0})\beta_{1}(x_{0})} -1 \right| \right\}. \qquad \end{array} $$
(29)

Since

$$ \begin{array}{@{}rcl@{}} \left|\widetilde{s}_{n}(u)-u\right| & \leq & \left|\widetilde s_{n}(u)-u^{\frac{\gamma_{1}(x_{0})}{\gamma_{1}(x_{0}-hv)}}\right|+u\left|u^{\frac{\gamma_{1}(x_{0})-\gamma_{1}(x_{0}-hv)}{\gamma_{1}(x_{0}-hv)}}-1\right|\\ & \leq & \left|\widetilde s_{n}(u)-u^{\frac{\gamma_{1}(x_{0})}{\gamma_{1}(x_{0}-hv)}}\right| + Cu^{1\pm Ch^{\eta_{\gamma_{1}}}} h^{\eta_{\gamma_{1}}} |\ln u|, \end{array} $$
(30)

combining Eqs. 26, 27 and 29 with Eq. 30, Lemma 3 is established. □

Proof of Lemma 4

We use the following decomposition along with the Lipschitz property of the function R:

$$ \begin{array}{@{}rcl@{}} &&{}\sqrt{kh^{d}} \left|{\int}_{0}^{\infty} \left[R\left( \widetilde s_{n}(u), t_{n}(y_{2})|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ & \leq & \sqrt{kh^{d}} \left|{\int}_{0}^{\delta_{n}} \left[R\left( \widetilde s_{n}(u), t_{n}(y_{2})|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ &&+ \sqrt{kh^{d}} \left|{\int}_{\delta_{n}}^{T_{n}} \left[R\left( \widetilde s_{n}(u), t_{n}(y_{2})|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ &&+ \sqrt{kh^{d}} \left|{\int}_{T_{n}}^{\infty} \left[R\left( \widetilde s_{n}(u), t_{n}(y_{2})|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ & \leq & \sqrt{kh^{d}} \left|{\int}_{0}^{\delta_{n}} R\left( \widetilde{s}_{n}(u), t_{n}(y_{2})|x_{0}\right) du^{-\gamma_{1}(x_{0})}\right|\\ &&+ \sqrt{kh^{d}} \left|{\int}_{0}^{\delta_{n}} R(u,y_{2}|x_{0}) du^{-\gamma_{1}(x_{0})}\right|\\ &&- \sqrt{kh^{d}} {\int}_{\delta_{n}}^{T_{n}} \left[\left|\widetilde{s}_{n}(u)-u\right|+ \left| t_{n}(y_{2})-y_{2}\right|\right] du^{-\gamma_{1}(x_{0})}\\ &&+2 \underset{u\geq 0, \frac{1}{2}-\zeta \leq y_{2}\leq 2+\zeta}{\sup} R(u,y_{2}|x_{0}) \sqrt{kh^{d}} T_{n}^{-\gamma_{1}(x_{0})}\\ &=:&T_{1}+T_{2}+T_{3}+T_{4}, \end{array} $$

for ζ > 0 small and where δn → 0 and \(T_{n}\to \infty \), as \(n\to \infty \).

Now, since R(y1,y2|x0) ≤ y1 ∧ y2, using Lemma 3, and assuming \(h^{\eta _{\varepsilon _{1}}\wedge \eta _{\gamma _{1}}}|\ln \delta _{n}|\to 0\), we obtain after tedious calculations, for n large,

$$ \begin{array}{@{}rcl@{}} T_{1}+T_{2}& \leq & - 2\sqrt{kh^{d}} {\int}_{0}^{\delta_{n}} u du^{-\gamma_{1}(x_{0})} - \sqrt{kh^{d}} {\int}_{0}^{\delta_{n}} \left|\widetilde s_{n}(u)-u\right| du^{-\gamma_{1}(x_{0})} \\ & \leq & C \sqrt{kh^{d}} \delta_{n}^{1-\gamma_{1}(x_{0})}. \end{array} $$
(31)
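The rate in Eq. 31 comes from the following elementary computation (recall that the exponent 1 − γ1(x0) is positive here, since γ1(x0) < 1):

$$ -{\int}_{0}^{\delta_{n}} u\, du^{-\gamma_{1}(x_{0})} = \gamma_{1}(x_{0}) {\int}_{0}^{\delta_{n}} u^{-\gamma_{1}(x_{0})}\, du = \frac{\gamma_{1}(x_{0})}{1-\gamma_{1}(x_{0})}\, \delta_{n}^{1-\gamma_{1}(x_{0})}; $$

the second term in the first line of Eq. 31 is controlled by Lemma 3 and contributes at most the same order under the assumption \(h^{\eta _{\varepsilon _{1}}\wedge \eta _{\gamma _{1}}}|\ln \delta _{n}|\to 0\).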

As for T3, using again Lemma 3 and following the lines of proof of Lemma 1, we have, for n large,

$$ \begin{array}{@{}rcl@{}} T_{3}& \leq &- \sqrt{kh^{d}} {\int}_{0}^{T_{n}} \left|\widetilde{s}_{n}(u)-u\right| du^{-\gamma_{1}(x_{0})}- \sqrt{kh^{d}} {\int}_{\delta_{n}}^{T_{n}}\left| t_{n}(y_{2})-y_{2}\right| du^{-\gamma_{1}(x_{0})}\\ & \leq & C\sqrt{kh^{d}} {T}_{n}^{1-\gamma_{1}(x_{0})} \left\{h^{\eta_{A_{1}}}+h^{\eta_{\gamma_{1}}}\ln\frac{n}{k} +h^{\eta_{\gamma_{1}}}\ln T_{n} \right . \\ & & \left . + \left|\delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right| T_{n}^{(\beta_{1}(x_{0})+\varepsilon)\gamma_{1}(x_{0})} \right \} \\ &&+ C\sqrt{kh^{d}} \delta_{n}^{-\gamma_{1}(x_{0})} \left\{h^{\eta_{A_{2}}}+h^{\eta_{\gamma_{2}}}\ln \frac{n}{k}+ \left|\delta_{2}\left( U_{2}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right| \right . \\ & & \left . \times \left[h^{\eta_{B_{2}}}+h^{\eta_{\varepsilon_{2}}}\ln\frac{n}{k}\right]\right\} \end{array} $$
(32)

assuming \(h^{\eta _{\varepsilon _{1}}\wedge \eta _{\gamma _{1}}}\ln T_{n}\to 0\).

Finally

$$ \begin{array}{@{}rcl@{}} T_{4}& \leq & C \sqrt{kh^{d}} T_{n}^{-\gamma_{1}(x_{0})}. \end{array} $$
(33)

Take \(\delta_{n} = h^{\xi}\) and \(T_{n} = n^{\kappa}\), with ξ and κ positive numbers, and 0 < ε < β1(x0). Combining Eqs. 31, 32 and 33, the first part of Lemma 4 follows if the sequences δn and Tn are chosen such that

$$ \begin{array}{@{}rcl@{}} \alpha-{{\varDelta}}\left[d-2\xi \gamma_{1}(x_{0})+2(\xi\wedge \eta_{A_{2}}\wedge \eta_{B_{2}}\wedge \eta_{\gamma_{2}}\wedge \eta_{\varepsilon_{2}}) \right]&<&0,\\ \alpha-{{\varDelta}} d - 2 \kappa \gamma_{1}(x_{0})&<&0,\\ \alpha-{{\varDelta}} d + 2 \kappa(1- \gamma_{1}(x_{0})) -2{{\varDelta}} (\eta_{A_{1}}\wedge \eta_{\gamma_{1}})&<&0, \\ \alpha-{{\varDelta}} d -2(1-\alpha) \gamma_{1}(x_{0})\beta_{1}(x_{0}) +2 \kappa[1+(\beta_{1}(x_{0})+\varepsilon)\gamma_{1}(x_{0}) -\gamma_{1}(x_{0})] & < & 0. \end{array} $$

Note that this is possible if we proceed as follows:

  • α and Δ are chosen as stated in Lemma 4;

  • κ is chosen such that

    $$ \begin{array}{@{}rcl@{}} &&\frac{\alpha-{{\varDelta}} d}{2\gamma_{1}(x_{0})}<\kappa < \\ & & \min\left( 1-\alpha, \frac{2{{\varDelta}}(\eta_{A_{1}}\wedge \eta_{\gamma_{1}})-(\alpha-{{\varDelta}} d)}{2(1-\gamma_{1}(x_{0}))}, \frac{2(1-\alpha)\gamma_{1}(x_{0})\beta_{1}(x_{0}) - (\alpha-{{\varDelta}} d)}{2[1-\gamma_{1}(x_{0})+(\beta_{1}(x_{0})+\varepsilon)\gamma_{1}(x_{0})]}\right); \end{array} $$
  • ξ is chosen such that

    $$ \frac{\alpha-{{\varDelta}} d}{2{{\varDelta}}(1-\gamma_{1}(x_{0}))}<\xi<\eta_{A_{2}}\wedge \eta_{\gamma_{2}}\wedge \eta_{B_{2}}\wedge \eta_{\varepsilon_{2}}. $$

    Note that the choices of κ and ξ only depend on those of α and Δ.
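To indicate how these conditions arise, consider for instance the first one. This is a sketch, assuming (as the exponents in the display above suggest) that k is of the order \(n^{\alpha}\) and h of the order \(n^{-{{\varDelta}}}\), and ignoring logarithmic factors, which are absorbed by the strict inequalities: the dominant δn-contributions of Eqs. 31 and 32 are of the order

$$ \sqrt{kh^{d}}\, \delta_{n}^{-\gamma_{1}(x_{0})}\, h^{\xi\wedge \eta_{A_{2}}\wedge \eta_{B_{2}}\wedge \eta_{\gamma_{2}}\wedge \eta_{\varepsilon_{2}}} = n^{\frac{1}{2}\left\{\alpha-{{\varDelta}}\left[d-2\xi \gamma_{1}(x_{0})+2\left( \xi\wedge \eta_{A_{2}}\wedge \eta_{B_{2}}\wedge \eta_{\gamma_{2}}\wedge \eta_{\varepsilon_{2}}\right)\right]\right\}}, $$

which tends to zero exactly when the first condition above holds; the remaining conditions control the Tn-contributions of Eqs. 32 and 33 in the same way.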

The second part of Lemma 4 is similar, although simpler. Indeed, decomposing the quantity of interest into two parts this time yields

$$ \begin{array}{@{}rcl@{}} &&{}\sqrt{kh^{d}} \left|{\int}_{0}^{\infty} \left[R\left( s_{n}(u), y_{2}|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ & \leq & \sqrt{kh^{d}} \left|{\int}_{0}^{T_{n}} \left[R\left( s_{n}(u), y_{2}|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ &&+ \sqrt{kh^{d}} \left|{\int}_{T_{n}}^{\infty} \left[R\left( s_{n}(u), y_{2}|x_{0}\right) - R(u,y_{2}|x_{0})\right] du^{-\gamma_{1}(x_{0})}\right|\\ & \leq &- \sqrt{kh^{d}} {\int}_{0}^{T_{n}} \left|s_{n}(u)-u\right| du^{-\gamma_{1}(x_{0})}+2 \underset{u\geq 0, \frac{1}{2}\leq y_{2}\leq 2}{\sup} R(u,y_{2}|x_{0}) \sqrt{kh^{d}} T_{n}^{-\gamma_{1}(x_{0})}\\ & \leq &- \sqrt{kh^{d}} \frac{|\delta_{1}\left( U_{1}\left( \frac{n}{k}|x_{0}\right)|x_{0}\right)|}{|\gamma_{1}(x_{0})+ \delta_{1}\left( U_{1}\left( \frac{n}{k}|x_{0}\right)|x_{0}\right)|} {\int}_{0}^{T_{n}} u \left|\frac{\delta_{1}\left( u^{-\gamma_{1}(x_{0})}U_{1}\left( \frac{n}{k}|x_{0}\right)|x_{0}\right)}{\delta_{1}\left( U_{1}\left( \frac{n}{k}|x_{0}\right)|x_{0}\right)} - 1\right| du^{-\gamma_{1}(x_{0})} \\ & & +C \sqrt{kh^{d}} T_{n}^{-\gamma_{1}(x_{0})}\\ & \leq & C \sqrt{kh^{d}} \left|\delta_{1}\left( U_{1}\left( \frac{n}{k}\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\left|\vphantom{\frac{n}{k}}\right.x_{0}\right)\right| T_{n}^{1-\gamma_{1}(x_{0})+(\beta_{1}(x_{0})+\varepsilon)\gamma_{1}(x_{0})} +C \sqrt{kh^{d}} T_{n}^{-\gamma_{1}(x_{0})}. \end{array} $$

This completes the proof of Lemma 4. □

Proof of Lemma 5

In this proof, as mentioned above, we will use the Skorohod representation, keeping the same notation. First remark that

We have, with \(r_{n}:= \sqrt {nh^{d}\overline {F}_{2}(u_{n}|x_{0})}\),

$$ \begin{array}{@{}rcl@{}} &&{}\left | r_{n} \left [ \frac{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})} T_{n}\left (\infty, \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})\left|\vphantom{\frac{n}{k}}\right.x_{0} \right ) - f_{X}(x_{0})\right ] - W(\infty,1) \right | \\ & \le & \left | \sqrt{kh^{d}} \left [ T_{n}\left (\infty, \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})\left|\vphantom{\frac{n}{k}}\right.x_{0} \right ) - \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})f_{X}(x_{0}) \right ] - W\left (\infty, \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0}) \right ) \right|\\ & & + \sqrt{kh^{d}} \left| \sqrt{\frac{\overline{F}_{2}(u_{n}|x_{0})}{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})} } -1 \right| \left| T_{n}\left (\infty, \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})\left|\vphantom{\frac{n}{k}}\right.x_{0} \right ) - \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})f_{X}(x_{0}) \right | \\ & & + \left | W\left (\infty, \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0}) \right ) - W(\infty,1) \right| \\ & & + r_{n} \left | \frac{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})}-1 \right | \left | T_{n}\left (\infty, \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})\left|\vphantom{\frac{n}{k}}\right.x_{0} \right ) - \frac{n}{k} \overline{F}_{2}(u_{n}|x_{0})f_{X}(x_{0}) \right |. \end{array} $$
(34)

From Eq. 20 combined with the Skorohod construction, we have

Finally

Proof of Lemma 6

To prove the lemma we will use the idea of Wretman (1978), applied to our situation. We have, for \(z\in \mathbb R\), and un from Lemma 5 taken as \(U_{2}(n/k|x_{0})(1+z/\sqrt {kh^{d}})\), that

$$ \begin{array}{@{}rcl@{}} &&{}\mathbb{P} \left( \sqrt{kh^{d}} \left( \widehat u_{n} -1 \right) \le z \right) \\ && = \mathbb{P} \left( \sqrt{nh^{d}\overline{F}_{2}(u_{n}|x_{0})} \left( \frac{\widehat{\overline{F}}_{n,2}(u_{n}|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})}-1 \right)\right.\\ && \qquad ~~\left. \le \sqrt{nh^{d}\overline{F}_{2}(u_{n}|x_{0})} \left( \frac{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})}-1 \right)\right). \end{array} $$

In the present context we have

$$ \begin{array}{@{}rcl@{}} a_{n} :=\sqrt{nh^{d}\overline{F}_{2}(u_{n}|x_{0})} \left( \frac{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})}-1 \right) \to \frac{z}{\gamma_{2}(x_{0})}. \end{array} $$
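The stated limit of an can be seen as follows. This is a brief sketch, using the Hall-type form of condition \((\mathcal D)\) for \(\overline F_{2}(\cdot|x_{0})\) (as in the proof of Lemma 1) and the choice \(u_{n}=U_{2}(n/k|x_{0})(1+z/\sqrt {kh^{d}})\):

$$ \frac{\overline{F}_{2}(U_{2}(n/k|x_{0})|x_{0})}{\overline{F}_{2}(u_{n}|x_{0})} = \left( 1+\frac{z}{\sqrt{kh^{d}}}\right)^{1/\gamma_{2}(x_{0})} \frac{1+\frac{1}{\gamma_{2}(x_{0})}\delta_{2}(U_{2}(n/k|x_{0})|x_{0})}{1+\frac{1}{\gamma_{2}(x_{0})}\delta_{2}(u_{n}|x_{0})} = 1+\frac{z}{\gamma_{2}(x_{0})\sqrt{kh^{d}}}(1+o(1)), $$

since, by the representation of δ2 and \(u_{n}/U_{2}(n/k|x_{0})\to 1\), the last ratio is \(1+o(1/\sqrt{kh^{d}})\). In particular \(\frac{n}{k}\overline F_{2}(u_{n}|x_{0})\to 1\), so that \(\sqrt{nh^{d}\overline F_{2}(u_{n}|x_{0})}=\sqrt{kh^{d}}(1+o(1))\), and hence \(a_{n} = \sqrt{kh^{d}}(1+o(1))\cdot \frac{z}{\gamma_{2}(x_{0})\sqrt{kh^{d}}}(1+o(1)) \to z/\gamma_{2}(x_{0})\).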

Let Hn denote the distribution function of \(\sqrt {nh^{d}\overline {F}_{2}(u_{n}|x_{0})} (\widehat {\overline {F}}_{n,2}(u_{n}|x_{0})/\overline {F}_{2}(u_{n}|x_{0})-1 ) \), and H the distribution function of \(W(\infty ,1)/f_{X}(x_{0})\). Then, by Lemma 5 and the continuity of H, one has that Hn(an) → H(z/γ2(x0)) as \(n \to \infty \), which establishes the result of the lemma. □
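For completeness, the elementary fact behind this last step, which is the essence of the argument of Wretman (1978), is: if Hn → H weakly with H continuous and an → a, then Hn(an) → H(a). Indeed, for any ε > 0 and n large,

$$ H_{n}(a-\varepsilon) \le H_{n}(a_{n}) \le H_{n}(a+\varepsilon), $$

since an ∈ [a − ε,a + ε] for n large and Hn is non-decreasing; letting first \(n\to \infty \) and then ε ↓ 0, the continuity of H gives \(H(a)\le {\liminf }_{n} H_{n}(a_{n})\le {\limsup }_{n} H_{n}(a_{n})\le H(a)\).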
