
Directional bivariate quantiles: a robust approach based on the cumulative distribution function

  • Original Paper
  • Published in: AStA Advances in Statistical Analysis

Abstract

The definition of multivariate quantiles has gained considerable attention in recent years as a tool for understanding the structure of a multivariate data cloud. Due to the lack of a natural ordering for multivariate data, many approaches have either considered geometric generalisations of univariate quantiles or data depths that measure the centrality of data points. Both approaches provide a centre-outward ordering of data points but no longer possess a relation to the cumulative distribution function of the data-generating process and the corresponding tail probabilities. We propose a new notion of bivariate quantiles that is based on inverting the bivariate cumulative distribution function and therefore provides a directional measure of extremeness, as defined by the contour lines of the cumulative distribution function, which form the quantile curves of interest. To obtain unique solutions, we transform the bivariate data to the unit square, which allows us to introduce directions along which the quantiles are unique. Choosing a suitable transformation also ensures that the resulting quantiles are equivariant under monotonically increasing transformations. We study the resulting notion of bivariate quantiles in detail with respect to computation based on linear programming and theoretical properties, including asymptotic behaviour and robustness. Our approach turns out to be especially useful for data situations that deviate from the elliptical shape typical of ‘normal-like’ bivariate distributions. Moreover, the bivariate quantiles inherit the robustness of univariate quantiles even in the case of extreme outliers.


References

  • Abdous, B., Theodorescu, R.: Note on the spatial quantile of a random vector. Stat. Probab. Lett. 13, 333–336 (1992)


  • Andrews, D.W.K.: Empirical process methods in econometrics. In: Engle, R.F., McFadden, D.L. (eds.) Handbook of Econometrics, vol. 4, pp. 2247–2294. Elsevier Science B.V., North-Holland, New York (1994)


  • Belzunce, F., Castaño, A., Olvera-Cervantes, A., Suárez-Llorens, A.: Quantile curves and dependence structure for bivariate distributions. Comput. Stat. Data Anal. 51, 5112–5129 (2007)


  • Berkelaar, M., et al.: lpSolve: Interface to 'lp_solve' v. 5.5 to Solve Linear/Integer Programs. R package version 5.6.13 (2015)

  • Carlier, G., Chernozhukov, V., Galichon, A.: Vector quantile regression: an optimal transport approach. Ann. Stat. 44(3), 1165–1192 (2016)


  • Chakraborty, B.: On affine equivariant multivariate quantiles. Ann. Inst. Stat. Math. 53, 380–403 (2001)


  • Chaudhuri, P.: On a geometric notion of quantiles for multivariate data. J. Am. Stat. Assoc. 91, 862–872 (1996)


  • Chen, L.-A., Welsh, A.H.: Distribution-function-based bivariate quantiles. J. Multivar. Anal. 83, 208–231 (2002)


  • Chernozhukov, V., Galichon, A., Hallin, M., Henry, M.: Monge–Kantorovich depth, quantiles, ranks, and signs. Ann. Stat. 45(1), 223–256 (2017)


  • Einmahl, J.H.J., Mason, D.M.: Generalized quantile processes. Ann. Stat. 20, 1062–1078 (1992)


  • Ferguson, T.S.: Mathematical Statistics: A Decision Theoretic Approach. Academic Press, New York (1967)


  • Fernandez-Ponce, J.M., Suarez-Llorens, A.: Quantile curves and dependence structure for bivariate distributions. Comput. Stat. Data Anal. 17, 236–256 (2007)


  • Genest, C., Segers, J.: On the covariance of the asymptotic empirical copula process. J. Multivar. Anal. 101, 1837–1845 (2010)


  • Genest, C., Ghoudi, K., Rivest, L.-P.: A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82, 543–552 (1995)


  • Guggisberg, M.: A Bayesian Approach to Multiple-Output Quantile Regression, Technical report (2016)

  • Hallin, M.: On distribution and quantile functions, ranks and signs in \(\mathbb {R}^d\). ECARES working paper 2017-34 (2017)

  • Hallin, M., Paindaveine, D., Šiman, M.: Multivariate quantiles and multiple-output regression quantiles: from \({L}_1\) optimization to halfspace depth. Ann. Stat. 38, 635–669 (2010)


  • Joe, H.: Dependence Modeling with Copulas. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis, London (2014)


  • Klein, N.: bivquant: Estimation of Bivariate Quantiles. R package version 0.1 (2019)

  • Klein, N., Kneib, T.: Simultaneous inference in structured additive conditional copula regression models: a unifying Bayesian approach. Stat. Comput. 26, 841–860 (2016)


  • Koenker, R.: Quantile Regression. Economic Society Monographs. Cambridge University Press, New York (2005)


  • Koltchinskii, V.I.: M-estimation, convexity and quantiles. Ann. Stat. 25, 435–477 (1997)


  • Koshevoy, G., Mosler, K.: Zonoid trimming for multivariate distributions. Ann. Stat. 25, 1998–2017 (1997)


  • Liu, R.Y.: On a notion of data depth based on random simplices. Ann. Stat. 18, 405–414 (1990)


  • Liu, R.Y., Parelius, J.M., Singh, K.: Multivariate analysis by data depth: descriptive statistics, graphics and inference (with discussion and a rejoinder by Liu and Singh). Ann. Stat. 27, 783–858 (1999)


  • Mosler, K.: Multivariate Dispersion, Central Regions and Depth: The Lift Zonoid Approach. Springer, New York (2002)


  • Oja, H.: Descriptive statistics for multivariate distributions. Stat. Probab. Lett. 1, 327–332 (1983)


  • Pokotylo, O., Mozharovskyi, P., Dyckerhoff, R.: ddalpha: Depth-Based Classification and Calculation of Data Depth. R package version 1.3.1 (2015)

  • Serfling, R.: Approximation Theorems of Mathematical Statistics. Series in Probability and Mathematical Statistics. Wiley, New York (1980)


  • Serfling, R.: Quantile functions for multivariate analysis: approaches and applications. Stat. Neerl. 56, 214–232 (2002)


  • Small, C.G.: A survey of multidimensional medians. Int. Stat. Rev. 58, 263–277 (1990)


  • Tukey, J.: Mathematics and the picturing of data. In: Proceedings of the 1975 International Congress of Mathematicians, vol. 2, pp. 523–531 (1975)

  • Zuo, Y., Serfling, R.: General notions of statistical depth function. Ann. Stat. 28, 461–482 (2000)



Author information

Corresponding author

Correspondence to Nadja Klein.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Thomas Kneib received financial support from the German Research Foundation (DFG) within the research project KN 922/4-2. Both authors are grateful for the comments provided by two anonymous referees, which, in particular, prompted us to incorporate changes in the theoretical results when transforming with the empirical instead of the true marginal cumulative distribution functions.

A Further proofs


1.1 A.1 Proof of Theorem 2

Proof

For fixed \(b\in \mathbb {R}\), we define the expected loss as a function of \(v\in \mathbb {R}\) by

$$\begin{aligned} \mathbb {E}(\rho _{b,\tau }(\varvec{Y},\varvec{q}))=\mathbb {E}(\rho _{\tau }(\varvec{Y},(v,v+b)')). \end{aligned}$$
(12)

Clearly, the bivariate quantile curves from (1) can be obtained as

$$\begin{aligned} \mathcal {Q}_{\tau }=\bigcup _{b\in \mathbb {R}}\left\{ (v,v+b)'\,|\,v\in \mathbb {R},\,F(v,v+b)=\tau \right\} \end{aligned}$$

which intuitively means that we describe \(\mathbb {R}^2\) by straight lines with slope one and intercepts b. With the definition

$$\begin{aligned} u(\varvec{y},v)=\max \left( y_1-v,y_2-v-b\right) , \end{aligned}$$

the expected loss for fixed \(b\in \mathbb {R}\) is given by

$$\begin{aligned} \mathbb {E}\left( \rho _{b,\tau }\left( \varvec{Y},v\right) \right)&= (\tau -1)\displaystyle \int \limits _{-\infty }^{ v}\displaystyle \int \limits _{-\infty }^{v+b}u\left( \varvec{y},v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1 \\&\quad + \tau \displaystyle \int \limits _{ v}^{\infty }\displaystyle \int \limits _{v+b}^{\infty }u(\varvec{y},v) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad + \, \tau \displaystyle \int \limits _{ v}^{\infty }\displaystyle \int \limits _{-\infty }^{v+b}u(\varvec{y},v) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1 \\&\quad + \tau \displaystyle \int \limits _{-\infty }^{ v}\displaystyle \int \limits _{v+b}^{\infty }u(\varvec{y},v) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&=(\tau -1)\displaystyle \int \limits _{-\infty }^{ v}\displaystyle \int \limits _{-\infty }^{y_1+b}\left( y_1-v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\ {}&\quad +\,(\tau -1)\displaystyle \int \limits _{-\infty }^{ v}\displaystyle \int \limits _{y_1+b}^{v+b}\left( y_2-v-b\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1 \\ {}&\quad +\,\tau \displaystyle \int \limits _{ v}^{\infty }\displaystyle \int \limits _{v+b}^{y_1+b}\left( y_1-v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad +\,\tau \displaystyle \int \limits _{ v}^{\infty }\displaystyle \int \limits _{y_1+b}^{\infty }\left( y_2-v-b\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad + \, \tau \displaystyle \int \limits _{ v}^{\infty }\displaystyle \int \limits _{-\infty }^{v+b}\left( y_1-v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\ {}&\quad +\, \tau \displaystyle \int \limits _{-\infty }^{ v}\displaystyle \int \limits _{v+b}^{\infty }\left( y_2-v-b\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1. \end{aligned}$$

Our strategy is now to show that for all \(b\in \mathbb {R}\) the expected loss \(\mathbb {E}(\rho _{b,\tau }(\varvec{Y},v))\) is uniquely minimised at a value \(q\in \mathbb {R}\) that fulfils the condition \(\mathbb {P}(Y_1\le q,Y_2\le q+b)=\tau \). We therefore investigate the first derivative of \(\mathbb {E}(\rho _{b,\tau }(\varvec{Y},v))\) with respect to v, which is obtained by applying the Leibniz rule for parameter integrals twice.

$$\begin{aligned} \frac{\partial }{\partial v}\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right)&=(\tau -1)\displaystyle \int \limits _{-\infty }^{v}\frac{\partial }{\partial v}\displaystyle \int \limits _{-\infty }^{y_1+b}\left( y_1-v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad +\,(\tau -1)\displaystyle \int \limits _{-\infty }^{v}\frac{\partial }{\partial v}\displaystyle \int \limits _{y_1+b}^{v+b}\left( y_2-v-b\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad +\,\tau \displaystyle \int \limits _{v}^{\infty }\frac{\partial }{\partial v}\displaystyle \int \limits _{v+b}^{y_1+b}\left( y_1-v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad +\,\tau \displaystyle \int \limits _{v}^{\infty }\frac{\partial }{\partial v}\displaystyle \int \limits _{y_1+b}^{\infty }\left( y_2-v-b\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad -\tau \displaystyle \int \limits _{v+b}^{\infty }(y_2-v-b)f(v,y_2)\mathrm {d}y_2\\&\quad +\,\tau \displaystyle \int \limits _{v}^{\infty }\frac{\partial }{\partial v}\displaystyle \int \limits _{-\infty }^{v+b}\left( y_1-v\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad +\,\tau \displaystyle \int \limits _{-\infty }^{v}\frac{\partial }{\partial v}\displaystyle \int \limits _{v+b}^{\infty }\left( y_2-v-b\right) f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad +\,\tau \displaystyle \int \limits _{v+b}^{\infty }(y_2-v-b)f(v,y_2)\mathrm {d}y_2 \\&=-(\tau -1)\displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{-\infty }^{y_1+b} f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad -(\tau -1)\displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{y_1+b}^{v+b} f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad -\,\tau \displaystyle \int \limits _{v}^{\infty }\displaystyle \int \limits _{v+b}^{y_1+b} f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau \displaystyle \int \limits _{v}^{\infty }(y_1-v)f(y_1,v+b)\mathrm {d}y_1\\&\quad -\,\tau \displaystyle \int \limits _{v}^{\infty }\displaystyle \int \limits _{y_1+b}^{\infty } f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau \displaystyle \int \limits _{v+b}^{\infty }(y_2-v-b)f(v,y_2)\mathrm {d}y_2\\&\quad -\,\tau \displaystyle \int \limits _{v}^{\infty }\displaystyle \int \limits _{-\infty }^{v+b} f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1+\tau \displaystyle \int \limits _{v}^{\infty }(y_1-v)f(y_1,v+b)\mathrm {d}y_1\\&\quad -\,\tau \displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{v+b}^{\infty } f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1+\tau \displaystyle \int \limits _{v+b}^{\infty }(y_2-v-b)f(v,y_2)\mathrm {d}y_2 \\&=-(\tau -1)\displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{-\infty }^{v+b} f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau \displaystyle \int \limits _{v}^{\infty }\displaystyle \int \limits _{v+b}^{\infty } f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&\quad -\,\tau \displaystyle \int \limits _{v}^{\infty }\displaystyle \int \limits _{-\infty }^{v+b} f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau \displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{v+b}^{\infty } f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1\\&=\displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{-\infty }^{v+b}f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau . \end{aligned}$$

In summary, we have

$$\begin{aligned} \begin{aligned} \frac{\partial }{\partial v}\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) = \displaystyle \int \limits _{-\infty }^{v}\displaystyle \int \limits _{-\infty }^{v+b}f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau . \end{aligned} \end{aligned}$$
(13)

Let us first assume that \(\mathbb {P}(Y_1\le q,Y_2\le q+b)=\tau \) holds. It then follows from Eq. (13) that

$$\begin{aligned} \frac{\partial }{\partial v}\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \bigg |_{v=q}=\tau -\tau =0. \end{aligned}$$

In addition,

$$\begin{aligned} \frac{\partial ^2}{\partial v^2}\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \bigg |_{v=q}=\displaystyle \int \limits _{-\infty }^{q}f(y_1,q+b)\mathrm {d}y_1+\displaystyle \int \limits _{-\infty }^{q+b}f(q,y_2)\mathrm {d}y_2>0 \end{aligned}$$

holds since we assumed \(f(y_1,y_2)>0\). Consequently, q is a minimiser of \(\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \) and \((q,q+b)'\) in particular a minimiser of \(\mathbb {E}\left( \rho _{\tau }\left( \varvec{y},\varvec{q}\right) \right) \).

Conversely, if q is a minimiser of \(\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \), a zero first derivative

$$\begin{aligned} \frac{\partial }{\partial v}\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \bigg |_{v=q}=\displaystyle \int \limits _{-\infty }^{q}\displaystyle \int \limits _{-\infty }^{q+b}f(y_1,y_2)\mathrm {d}y_2\mathrm {d}y_1-\tau =0 \end{aligned}$$

is required which is equivalent to

$$\begin{aligned} \mathbb {P}(Y_1\le q,Y_2\le q+b)=\tau . \end{aligned}$$

\(\square \)
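The first-order condition above says that a point \((q,q+b)'\) on the \(\tau \)-quantile curve solves \(F(q,q+b)=\tau \) along a line of slope one. As a minimal numerical sketch of this characterisation (our own illustration, not the paper's implementation), assume independent standard normal margins so that \(F(v,v+b)=\Phi (v)\Phi (v+b)\) is strictly increasing in v; the root can then be found by bisection:

```python
import math

def Phi(x):
    # standard normal CDF, expressed via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bivariate_quantile_on_line(tau, b, lo=-10.0, hi=10.0, tol=1e-10):
    """Solve F(v, v+b) = tau for (assumed) independent N(0,1) margins,
    where F(v, v+b) = Phi(v) * Phi(v+b) is strictly increasing in v,
    so bisection applies."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) * Phi(mid + b) < tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

v = bivariate_quantile_on_line(tau=0.5, b=1.0)
q = (v, v + 1.0)  # a point on the tau = 0.5 quantile curve for this line
```

Varying b over \(\mathbb {R}\) and repeating the root search traces out the whole quantile curve \(\mathcal {Q}_{\tau }\), in line with the union representation above.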

1.2 A.2 Proof of Theorem 6

Proof

Recall first that \({\tilde{\varvec{Y}}}=({\tilde{Y}}_1,{\tilde{Y}}_2)'=(F_{1}(Y_1),F_{2}(Y_2))'\), and let \(\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)>0\) be the density of \({\tilde{\varvec{Y}}}\). From Sect. 3.1, we furthermore have that \(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r})\) can be decomposed into

$$\begin{aligned} \begin{aligned} {\left\{ \begin{array}{ll} (1-\tau )\left( \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )} -\tilde{r}\right) &{}\quad \text{ if } \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )} -\tilde{r}\le \frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}<0\\ (1-\tau )\left( \frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}\right) &{}\quad \text{ if } \frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}<\frac{1-{\tilde{y}}_{1}}{\cos (\alpha )} -\tilde{r}<0\\ -\,\tau \quad \;\;\,\left( \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )} -\tilde{r}\right) &{}\quad \text{ if } \min \left( \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )} -\tilde{r},\frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}\right) \ge 0 \text{ and } \frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}\ge \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )}-\tilde{r}\\ -\,\tau \quad \;\;\,\left( \frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}\right) &{}\quad \text{ if } \min \left( \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )}-\tilde{r},\frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}\right) \ge 0 \text{ and } \frac{1-{\tilde{y}}_{2}}{\sin (\alpha )} -\tilde{r}< \frac{1-{\tilde{y}}_{1}}{\cos (\alpha )}-\tilde{r}.\\ \end{array}\right. } \end{aligned} \end{aligned}$$

Accordingly, the expected loss is

$$\begin{aligned} \mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))&=(1-\tau )\displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{0}^{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}\frac{1-\tilde{r}\cos (\alpha )-{\tilde{y}}_1}{\cos (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,(1-\tau )\displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}^{1-\tilde{r}\sin (\alpha )}\frac{1-\tilde{r}\sin (\alpha )-{\tilde{y}}_2}{\sin (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{1-\tilde{r}\sin (\alpha )}^{1}\frac{{\tilde{y}}_2-1+\tilde{r}\sin (\alpha )}{\sin (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{1-\tilde{r}\cos (\alpha )}^{1}\displaystyle \int _{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}^{1}\frac{{\tilde{y}}_2-1+\tilde{r}\sin (\alpha )}{\sin (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{1-\tilde{r}\cos (\alpha )}^{1}\displaystyle \int _{1-\tilde{r}\sin (\alpha )}^{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}\frac{{\tilde{y}}_1-1+\tilde{r}\cos (\alpha )}{\cos (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{1-\tilde{r}\cos (\alpha )}^{1}\displaystyle \int _{0}^{1-\tilde{r}\sin (\alpha )}\frac{{\tilde{y}}_1-1+\tilde{r}\cos (\alpha )}{\cos (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1 \end{aligned}$$

Now, in analogy to the proof of Theorem 2, we apply the Leibniz rule for integrals twice, then add or subtract terms with identical limits of integration and, after some further basic calculations, obtain

$$\begin{aligned} \frac{\partial }{\partial \tilde{r}}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))&=(\tau -1)\displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{0}^{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,(\tau -1)\displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}^{1-\tilde{r}\sin (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{1-\tilde{r}\sin (\alpha )}^{1}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{1-\tilde{r}\cos (\alpha )}^{1}\displaystyle \int _{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}^{1}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{1-\tilde{r}\cos (\alpha )}^{1}\displaystyle \int _{1-\tilde{r}\sin (\alpha )}^{1-(1-{\tilde{y}}_1)\tfrac{\sin (\alpha )}{\cos (\alpha )}}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1\\&\quad +\,\tau \displaystyle \int _{1-\tilde{r}\cos (\alpha )}^{1}\displaystyle \int _{0}^{1-\tilde{r}\sin (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1 \end{aligned}$$

Adding together the different integrals yields

$$\begin{aligned} \frac{\partial }{\partial \tilde{r}}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))=\tau -\displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\displaystyle \int _{0}^{1-\tilde{r}\sin (\alpha )}\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2\mathrm {d}{\tilde{y}}_1 \end{aligned}$$
(14)

A necessary condition for \(\tilde{r}\) giving a minimum of the expected loss is that (14) is zero, i.e. that \(\tfrac{\partial }{\partial \tilde{r}}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))|_{\tilde{r}=\tilde{r}_{\alpha ,\tau }}=0\). This implies that the quantile condition \(\mathbb {P}({\tilde{Y}}_1\le {\tilde{q}}_1,{\tilde{Y}}_2\le {\tilde{q}}_2)=\tau \) is fulfilled if and only if the first derivative of the expected loss at \(\tilde{r}\) is zero. What remains to show is that \(\tfrac{\partial ^2}{\partial \tilde{r}^2}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))|_{\tilde{r}=\tilde{r}_{\alpha ,\tau }}>0\) holds.

  • For \(\alpha \in (0,\pi /2)\), this follows from

    $$\begin{aligned} \frac{\partial ^2}{\partial \tilde{r}^2}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},{\tilde{\varvec{q}}}))= & {} \displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\sin (\alpha )\tilde{f}({\tilde{y}}_1,1-\tilde{r}\sin (\alpha ))\mathrm {d}{\tilde{y}}_1\nonumber \\&+\,\displaystyle \int _{0}^{1-\tilde{r}\sin (\alpha )}\cos (\alpha )\tilde{f}(1-\tilde{r}\cos (\alpha ),{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2 \end{aligned}$$
    (15)

    since \(\tilde{f}(\cdot ,\cdot )>0\) and \(\cos (\alpha )>0\), \(\sin (\alpha )>0\).

  • In case of \(\alpha =0\), we have \(\sin (\alpha )=0\), \(\cos (\alpha )=1\) such that the second integral in (15) is zero while the first one is \(\int _0^1 \tilde{f}(1-\tilde{r},{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2=\tilde{f}_1(1-\tilde{r})>0\).

  • In case of \(\alpha =\pi /2\), we have \(\cos (\alpha )=0\), \(\sin (\alpha )=1\) such that the first integral in (15) is zero while the second one is \(\int _0^1 \tilde{f}({\tilde{y}}_1,1-\tilde{r})\mathrm {d}{\tilde{y}}_1=\tilde{f}_2(1-\tilde{r})>0\). \(\square \)
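To make the quantile condition \(\mathbb {P}({\tilde{Y}}_1\le 1-\tilde{r}\cos (\alpha ),{\tilde{Y}}_2\le 1-\tilde{r}\sin (\alpha ))=\tau \) concrete, consider the purely illustrative case of the independence copula on the unit square, where \(\tilde{S}_{\alpha }(\tilde{r})=(1-\tilde{r}\cos (\alpha ))(1-\tilde{r}\sin (\alpha ))\) is strictly decreasing in \(\tilde{r}\); the following sketch (function names are ours) solves for \(\tilde{r}_{\alpha ,\tau }\) by bisection:

```python
import math

def dir_quantile_indep(tau, alpha, tol=1e-12):
    """Directional quantile radius r on the unit square under the
    (assumed) independence copula: solve
    (1 - r*cos(alpha)) * (1 - r*sin(alpha)) = tau.
    The left-hand side is S_alpha(r), decreasing in r, so bisection works."""
    c, s = math.cos(alpha), math.sin(alpha)
    # r may range until one of the thresholds 1 - r*c, 1 - r*s hits zero
    lo = 0.0
    hi = min(1.0 / c if c > 0 else math.inf,
             1.0 / s if s > 0 else math.inf)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (1.0 - mid * c) * (1.0 - mid * s) > tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = dir_quantile_indep(0.25, math.pi / 4)
# corresponding quantile point on the unit square
q = (1.0 - r * math.cos(math.pi / 4), 1.0 - r * math.sin(math.pi / 4))
```

For \(\alpha =\pi /4\) and \(\tau =0.25\) this yields the symmetric point \((0.5,0.5)\), and for \(\alpha =0\) the condition reduces to the marginal quantile equation \(1-\tilde{r}=\tau \), matching the boundary cases discussed above.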

1.3 A.3 Proof of Lemma 10

In the following, and in order to prove the asymptotic results of Sect. 4, we treat the observed data as i.i.d. replications of \(\varvec{Y}\) defined on the probability space \((\Omega ,\mathcal F, \mathbb {P})=(\mathbb {R}^2,\mathcal B(\mathbb {R}^2),F)\). Consequently, the transformed data are i.i.d. replicates of \(\tilde{\varvec{Y}}=({\tilde{Y}}_1,{\tilde{Y}}_2)'=(F_1(Y_1), F_2(Y_2))'\) equipped with the probability space \((\tilde{\Omega }, \tilde{\mathcal F}, \tilde{\mathbb {P}})=([0,1]^2,\mathcal B([0,1]^2),\tilde{F})\) and CDF \(\tilde{F}({\tilde{y}}_1, {\tilde{y}}_2)=\mathbb {P}(Y_1\le F_1^{-1}({\tilde{y}}_1),Y_2\le F_2^{-1}({\tilde{y}}_2))\). In addition, we introduce

$$\begin{aligned} {\tilde{R}}=\min \left( \tfrac{1-{\tilde{Y}}_{1}}{\cos (\alpha )},\tfrac{1-{\tilde{Y}}_{2}}{\sin (\alpha )}\right) ,\;\alpha \in D(\alpha ) \end{aligned}$$

as a random variable on the probability space \((\tilde{\Omega }_{\alpha }, \tilde{\mathcal F}_{\alpha }, \tilde{\mathbb {P}}_{\alpha })=(D_{\tilde{r}}(\alpha ),\mathcal B(D_{\tilde{r}}(\alpha )),\tilde{F}_{\alpha })\).

Proof

  • On 1. The claim follows directly due to the i.i.d. property of \(\varvec{Y}_1,\ldots ,\varvec{Y}_n\).

  • On 2. We introduce the random variables \(Z_i=\mathbb {1}_{\lbrace ({\tilde{Y}}_{i1}\le 1-\tilde{r}\cos (\alpha ),{\tilde{Y}}_{i2}\le 1-\tilde{r}\sin (\alpha ))\rbrace }\), which are i.i.d. since \(\tilde{\varvec{Y}}_1,\tilde{\varvec{Y}}_2,\ldots \) are assumed to be i.i.d. We then have

    $$\begin{aligned} \mathbb {P}(Z_i=1)= & {} \mathbb {P}({\tilde{Y}}_{i1}\le 1-\tilde{r}\cos (\alpha ),{\tilde{Y}}_{i2}\le 1-\tilde{r}\sin (\alpha ))=\tilde{S}_{\alpha }(\tilde{r})\\ \mathbb {P}(Z_i=0)= & {} 1-\tilde{S}_{\alpha }(\tilde{r}) \end{aligned}$$

    and hence \(\mathbb {E}(Z_i)=\tilde{S}_{\alpha }(\tilde{r})\). With the strong law of large numbers, we immediately find

    $$\begin{aligned} \tilde{S}_{n,\alpha }(\tilde{r})=\frac{1}{n}\sum _{i=1}^n Z_i\xrightarrow {a.s.}\mathbb {E}(Z_i)=\mathbb {E}(Z_1)=\tilde{S}_{\alpha }(\tilde{r}). \end{aligned}$$
  • On 3. From 2., we have that \(\mathbb {E}(Z_i)=\tilde{S}_{\alpha }(\tilde{r})\) and \({{\,\mathrm{Var}\,}}(Z_i)=\tilde{S}_{\alpha }(\tilde{r})(1-\tilde{S}_{\alpha }(\tilde{r}))\). Applying the central limit theorem implies

    $$\begin{aligned} \sqrt{n}\frac{\tilde{S}_{n,\alpha }(\tilde{r})-\tilde{S}_{\alpha }(\tilde{r})}{\sqrt{\tilde{S}_{\alpha }(\tilde{r})(1-\tilde{S}_{\alpha }(\tilde{r}))}}=\sqrt{n}\,\frac{\tfrac{1}{n}\sum _{i=1}^n Z_i-\mathbb {E}(Z_1)}{\sqrt{{{\,\mathrm{Var}\,}}(Z_1)}}=\frac{\sum _{i=1}^n Z_i-n\mathbb {E}(Z_1)}{\sqrt{n{{\,\mathrm{Var}\,}}(Z_1)}}\xrightarrow {d}{{\,\mathrm{N}\,}}(0,1). \end{aligned}$$
  • On 4. Define

    $$\begin{aligned} \tilde{D}_n:=\sup _{\tilde{r}\in D_{\tilde{r}}(\alpha )} |\tilde{S}_{n,\alpha }(\tilde{r})-\tilde{S}_{\alpha }(\tilde{r})|. \end{aligned}$$
    1. (i)

      \(\tilde{S}_{\alpha }\) is continuous and monotonically decreasing in \(\tilde{r}\). Hence, we can find a decomposition \(\tilde{r}_{\min }=z_0<z_1<z_2<\cdots<z_{m-1}<z_m=\tilde{r}_{\max }\) such that \(\tilde{S}_{\alpha }(z_0)=1,\tilde{S}_{\alpha }(z_1)=\tfrac{m-1}{m},\tilde{S}_{\alpha }(z_2)=\tfrac{m-2}{m},\ldots ,\tilde{S}_{\alpha }(z_{m-1})=\tfrac{1}{m},\tilde{S}_{\alpha }(z_m)=0\), where \(\tilde{r}_{\min }\) is the smallest \(\tilde{r}\in D_{\tilde{r}}(\alpha )\) and similarly \(\tilde{r}_{\max }\) the largest \(\tilde{r}\in D_{\tilde{r}}(\alpha )\).

    2. (ii)

      We use this decomposition to obtain approximations of \(\tilde{S}_{n,\alpha }(z)-\tilde{S}_{\alpha }( z)\) for arbitrary \(z\in D_{\tilde{r}}(\alpha )\). Let k be such that \(z\in [z_{k},z_{k+1})\). Then,

      $$\begin{aligned} \tilde{S}_{n,\alpha }( z)-\tilde{S}_{\alpha }( z)&\le \tilde{S}_{n,\alpha }( z_{k})-\tilde{S}_{\alpha }( z_{k+1})=\tilde{S}_{n,\alpha }( z_{k})-\left( \tilde{S}_{\alpha }( z_{k})-\frac{1}{m}\right) \\ \tilde{S}_{n,\alpha }( z)-\tilde{S}_{\alpha }( z)&\ge \tilde{S}_{n,\alpha }( z_{k+1})-\tilde{S}_{\alpha }( z_{k})=\tilde{S}_{n,\alpha }( z_{k+1})-\left( \tilde{S}_{\alpha }( z_{k+1})+\frac{1}{m}\right) \end{aligned}$$

      due to the monotonicity of \(\tilde{S}_{\alpha }\).

    3. (iii)

      For \(m\in \mathbb {N}\), \(k=0,1,\ldots ,m\), define

      $$\begin{aligned} A_{m,k}:=\left\{ \tilde{\omega }_{\alpha }\in \tilde{\Omega }_{\alpha }{:}\,\lim _{n\rightarrow \infty }\tilde{S}_{n,\alpha }( z_{k};\tilde{\omega }_{\alpha })=\tilde{S}_{\alpha }( z_{k})\right\} . \end{aligned}$$

      Due to the almost sure convergence of \(\tilde{S}_{n,\alpha }\) from 2., we have

      $$\begin{aligned} \mathbb {P}[A_{m,k}]=1\quad \forall m\in \mathbb {N},\quad k=0,1,\ldots ,m. \end{aligned}$$
    4. (iv)

      Define \(A_{m}=\cap _{k=0}^m A_{m,k}.\) This is a finite intersection of sets such that \( \mathbb {P}[A_{m}]=1\) for all \(m\in \mathbb {N}.\) Define \(A=\cap _{m\in \mathbb {N}} A_{m}.\) This is a countable intersection of sets such that \( \mathbb {P}[A]=1.\)

    5. (v)

      Consider now \(\tilde{\omega }_{\alpha }\in A_{m}\). By definition of \(A_{m,k}\), there exists an \(n(\tilde{\omega }_{\alpha },m)\in \mathbb {N}\) such that

      $$\begin{aligned} |\tilde{S}_{n,\alpha }( z_{k};\tilde{\omega }_{\alpha })-\tilde{S}_{\alpha }( z_{k})|<\frac{1}{m}\quad \forall n>n(\tilde{\omega }_{\alpha },m),\, k=0,1,\ldots ,m.\text { Hence,} \end{aligned}$$
      $$\begin{aligned} |\tilde{S}_{n,\alpha }(z)-\tilde{S}_{\alpha }(z)|<\frac{2}{m}\quad \forall \tilde{\omega }_{\alpha }\in A_{m},\, n>n(\tilde{\omega }_{\alpha },m),\, z\in D_{\tilde{r}}(\alpha ). \end{aligned}$$

      From (ii), it follows

      $$\begin{aligned} \tilde{D}_n(\tilde{\omega }_{\alpha }):=\sup _{\tilde{r}\in D_{\tilde{r}}(\alpha )} |\tilde{S}_{n,\alpha }(\tilde{r};\tilde{\omega }_{\alpha })-\tilde{S}_{\alpha }(\tilde{r})|<\frac{2}{m}. \end{aligned}$$

      Furthermore, due to the definition of A, \(\tilde{\omega }_{\alpha }\in A\) is an element of all \(A_m\), \(m\in \mathbb {N}\). Hence, \(\forall m\in \mathbb {N}\) there exists an \(n(\tilde{\omega }_{\alpha },m)\in \mathbb {N}\) such that \(\forall n>n(\tilde{\omega }_{\alpha },m)\)

      $$\begin{aligned} 0\le \tilde{D}_n(\tilde{\omega }_{\alpha })<\frac{2}{m} \text{ and } \text{ in } \text{ consequence } \lim _{n\rightarrow \infty }\tilde{D}_n(\tilde{\omega }_{\alpha })=0\quad \forall \tilde{\omega }_{\alpha }\in A. \end{aligned}$$

      Finally, we have \(\lbrace \tilde{\omega }_{\alpha }\in \tilde{\Omega }_{\alpha }{:}\,\lim _{n\rightarrow \infty }\tilde{D}_n(\tilde{\omega }_{\alpha })=0\rbrace \supseteq A\) and from (iv) that \(\mathbb {P}[A]=1\) holds, such that

      $$\begin{aligned} \mathbb {P}[\left\{ \tilde{\omega }_{\alpha }\in \tilde{\Omega }_{\alpha }{:}\,\lim _{n\rightarrow \infty }\tilde{D}_n(\tilde{\omega }_{\alpha })=0\right\} ]\ge \mathbb {P}[A]=1. \end{aligned}$$

      \(\square \)
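The uniform almost-sure convergence established in part 4 can be illustrated by simulation. Under the purely illustrative independence copula, \(\tilde{S}_{\alpha }(\tilde{r})=(1-\tilde{r}\cos (\alpha ))(1-\tilde{r}\sin (\alpha ))\) is available in closed form, so the supremum deviation of the empirical survivor function over a grid of radii can be computed directly (our own sketch, not the authors' code):

```python
import math
import random

def S_n_alpha(data, r, alpha):
    """Empirical survivor function: fraction of transformed points with
    y1 <= 1 - r*cos(alpha) and y2 <= 1 - r*sin(alpha)."""
    c, s = math.cos(alpha), math.sin(alpha)
    return sum(y1 <= 1 - r * c and y2 <= 1 - r * s
               for y1, y2 in data) / len(data)

def S_alpha(r, alpha):
    # population counterpart under the assumed independence copula
    return (1 - r * math.cos(alpha)) * (1 - r * math.sin(alpha))

random.seed(1)
alpha = math.pi / 4
n = 20000
data = [(random.random(), random.random()) for _ in range(n)]
# grid of radii covering D_r(alpha) = [0, sqrt(2)] for alpha = pi/4
grid = [i / 100 * math.sqrt(2) for i in range(101)]
sup_dev = max(abs(S_n_alpha(data, r, alpha) - S_alpha(r, alpha))
              for r in grid)
```

By the Dvoretzky–Kiefer–Wolfowitz-type behaviour reflected in the proof, `sup_dev` shrinks at rate \(O(n^{-1/2})\) as n grows.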

1.4 A.4 Proof of Lemma 11

Proof

The uniqueness of \(\tilde{r}_{\alpha ,\tau }\) yields \(\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )<\tau <\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )\) for any \(\varepsilon >0\). The strong consistency of \(\tilde{S}_{n,\alpha }(\tilde{r})\) furthermore ensures

$$\begin{aligned} \begin{aligned} \tilde{S}_{n,\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )&\xrightarrow {a.s.}\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )\\ \tilde{S}_{n,\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )&\xrightarrow {a.s.}\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )\end{aligned} \end{aligned}$$

which is equivalent to

$$\begin{aligned} \begin{aligned}&\mathbb {P}\left( \lim _{n\rightarrow \infty }\tilde{S}_{n,\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )=\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )>\tau \right) =1\\&\mathbb {P}\left( \lim _{n\rightarrow \infty }\tilde{S}_{n,\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )=\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )<\tau \right) =1. \end{aligned} \end{aligned}$$

Using that almost sure convergence \(\mathbb {P}(\lim _{n\rightarrow \infty }X_n=X)=1\) is equivalent to \(\lim _{n\rightarrow \infty }\mathbb {P}(|X_m-X|<\varepsilon \;\;\forall m\ge n)=1\) for all \(\varepsilon >0\), in combination with \(\mathbb {P}(A\cap B)=1-\mathbb {P}(A^{\mathsf {c}}\cup B^{\mathsf {c}})\ge 1-\mathbb {P}(A^{\mathsf {c}})-\mathbb {P}(B^{\mathsf {c}})\), implies

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}\left( \tilde{S}_{m,\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )<\tau <\tilde{S}_{m,\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )\;\;\forall m\ge n\right) =1. \end{aligned}$$

Due to the monotonicity of the (decreasing) survivor function \(\tilde{S}_{\alpha }\), we have \(\tilde{S}_{\alpha }(\tilde{r})\le \tau \Leftrightarrow \tilde{r}\ge \tilde{S}_{\alpha }^{-1}(\tau )\) and therefore

$$\begin{aligned} \begin{aligned}&\lim _{n\rightarrow \infty }\mathbb {P}\left( \tilde{r}_{\alpha ,\tau }-\varepsilon<\tilde{S}_{m,\alpha }^{-1}(\tau )=\tilde{r}_{m,\alpha ,\tau }<\tilde{r}_{\alpha ,\tau }+\varepsilon \;\;\forall m\ge n\right) =1\\&\quad \Leftrightarrow \lim _{n\rightarrow \infty }\mathbb {P}\left( |\tilde{r}_{m,\alpha ,\tau }-\tilde{r}_{\alpha ,\tau }|<\varepsilon \;\;\forall m\ge n\right) =1. \end{aligned} \end{aligned}$$

Finally, \({\tilde{q}}_{j,n,\alpha ,\tau }\xrightarrow {a.s.}{\tilde{q}}_{j,\alpha ,\tau }\), \(j=1,2\), is a direct consequence of the continuous mapping theorem which in turn implies \({\tilde{\varvec{q}}}_{n,\alpha ,\tau }=({\tilde{q}}_{1,n,\alpha ,\tau },{\tilde{q}}_{2,n,\alpha ,\tau })'\xrightarrow {a.s.}{\tilde{\varvec{q}}}_{\alpha ,\tau }=({\tilde{q}}_{1,\alpha ,\tau },{\tilde{q}}_{2,\alpha ,\tau })'\), compare  Serfling (1980, 1.P, 2.b on page 52). \(\square \)
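The strong consistency established above can be illustrated by a small numerical sanity check (not part of the formal argument). The exponential distribution below is an arbitrary, hypothetical stand-in for the distance distribution, chosen only because its survivor function \(S(r)=e^{-r}\) has an explicit inverse; the empirical inverse mimics \(\tilde{S}_{n,\alpha }^{-1}(\tau )\):

```python
import numpy as np

rng = np.random.default_rng(3)
tau = 0.4
# hypothetical setup: exponential distances with S(r) = exp(-r),
# so the true generalised inverse is S^{-1}(tau) = -log(tau)
r_true = -np.log(tau)

def emp_inv(r, tau):
    """Empirical inverse survivor function: smallest order statistic
    at which the empirical survivor function drops to tau or below."""
    r_sorted = np.sort(r)
    surv = 1 - np.arange(1, len(r) + 1) / len(r)  # S_n at each order statistic
    return r_sorted[np.argmax(surv <= tau)]

# absolute errors for growing sample sizes
errs = [abs(emp_inv(rng.exponential(size=n), tau) - r_true)
        for n in (10**2, 10**4, 10**6)]
# the error shrinks as n grows, in line with strong consistency
assert errs[-1] < errs[0] and errs[-1] < 0.05
```

The shrinking error as \(n\) grows is exactly the almost sure convergence \(\tilde{r}_{n,\alpha ,\tau }\xrightarrow {a.s.}\tilde{r}_{\alpha ,\tau }\) in a single simulated trajectory.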

1.5 A.5 Proof of Theorem 13

Proof

For the proof of Theorem 13, we will use Lemma 10 together with the following Lemma 14.

Lemma 14

(Jump heights of \(\tilde{S}_{n,\alpha })\) Given the general assumptions from Sect. 2.1, the ordered sample \({\tilde{R}}_{(1)}<{\tilde{R}}_{(2)}<\cdots<{\tilde{R}}_{(n-1)}<{\tilde{R}}_{(n)}\) of distances \({\tilde{R}}_i=\min (\tfrac{1-{\tilde{Y}}_{i1}}{\cos (\alpha )},\tfrac{1-{\tilde{Y}}_{i2}}{\sin (\alpha )})\) will almost surely have no ties and therefore

$$\begin{aligned} |\tilde{S}_{n,\alpha }(\tilde{S}_{n,\alpha }^{-1}(\tau ))-\tau |\le \frac{1}{n}\; a.s. \end{aligned}$$
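The bound of Lemma 14 reflects that, without ties, the empirical survivor function jumps in steps of exactly \(1/n\). A hedged numerical illustration (the exponential draw below is a hypothetical continuous distance distribution, not the one from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 200, 0.3
# continuous distances, so ties occur with probability zero
r = rng.exponential(size=n)

def S_n(x):
    """Empirical survivor function: fraction of distances exceeding x."""
    return np.mean(r > x)

# empirical inverse: smallest observed distance at which S_n drops to tau or below
r_sorted = np.sort(r)
vals = np.array([S_n(x) for x in r_sorted])
r_inv = r_sorted[np.argmax(vals <= tau)]

# since S_n jumps by 1/n at each observation, the overshoot is at most 1/n
assert abs(S_n(r_inv) - tau) <= 1 / n
```

The assertion mirrors \(|\tilde{S}_{n,\alpha }(\tilde{S}_{n,\alpha }^{-1}(\tau ))-\tau |\le 1/n\) for this simulated sample.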

From Lemma 10.3, we have that for any \(\tilde{r}\in D_{\alpha }\) with survivor function \(\tilde{S}_{\alpha }\)

$$\begin{aligned} \sqrt{n}(\tilde{S}_{n,\alpha }(\tilde{r})- \tilde{S}_{\alpha }(\tilde{r}))\xrightarrow {d}{{\,\mathrm{N}\,}}(0,\tilde{S}_{\alpha }(\tilde{r})(1-\tilde{S}_{\alpha }(\tilde{r}))) \end{aligned}$$

holds. Let \(\tilde{r}=\tilde{r}_{\alpha ,\tau }=\tilde{S}_{\alpha }^{-1}(\tau )\). Then, we know that

$$\begin{aligned} \sqrt{n}(\tilde{S}_{n,\alpha }(\tilde{r}_{\alpha ,\tau })- \tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }))\xrightarrow {d}{{\,\mathrm{N}\,}}(0, \tau (1-\tau )). \end{aligned}$$
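The limiting variance \(\tau (1-\tau )\) comes from the binomial structure of the empirical survivor function: each indicator \(1\lbrace {\tilde{R}}_i>\tilde{r}_{\alpha ,\tau }\rbrace \) is Bernoulli with success probability \(\tau \). A Monte Carlo sketch under an assumed exponential distance distribution (purely illustrative, with \(S(r)=e^{-r}\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, tau, reps = 500, 0.25, 4000
# hypothetical distance distribution: exponential, so S^{-1}(tau) = -log(tau)
r_tau = -np.log(tau)

# each 1{R_i > r_tau} is Bernoulli(tau); S_n(r_tau) is a binomial mean,
# hence sqrt(n)(S_n - tau) should have variance close to tau(1 - tau)
z = np.sqrt(n) * (np.mean(rng.exponential(size=(reps, n)) > r_tau, axis=1) - tau)
assert abs(z.var() - tau * (1 - tau)) < 0.02
```

With \(\tau =0.25\), the target variance is \(0.1875\), and the simulated variance of the \(4000\) replicates falls within the stated tolerance.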

Using the property of stochastic equicontinuity for \(\tilde{S}_{n,\alpha }\) interpreted as an empirical process (for an introduction and definition of stochastic equicontinuity, see Andrews 1994), we can replace \(\tilde{r}_{\alpha ,\tau }\) by a consistent estimator \(\tilde{r}_{n,\alpha ,\tau }\) such that

$$\begin{aligned} \sqrt{n}(\tilde{S}_{n,\alpha }(\tilde{r}_{n,\alpha ,\tau })- \tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau }))\xrightarrow {d}{{\,\mathrm{N}\,}}(0, \tau (1-\tau )) \end{aligned}$$

holds. From Lemma (ii) in Serfling (1980, Sec. 1.1.4, p. 3) it now follows that

$$\begin{aligned} \sqrt{n}(\tilde{S}_{n,\alpha }(\tilde{r}_{n,\alpha ,\tau })- \tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau }))\ge \sqrt{n} (\tau - \tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau })). \end{aligned}$$

Since \(\tilde{f}_{\alpha }\) is continuous, the probability of observing duplicates of \({\tilde{R}}_i\) is zero. Hence, using Lemma 14

$$\begin{aligned} \sqrt{n}(\tilde{S}_{n,\alpha }(\tilde{r}_{n,\alpha ,\tau })- \tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau }))= \sqrt{n}(\tau -\tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau }))+\mathcal {O}_p(1/\sqrt{n}) \end{aligned}$$

holds with probability one which (using Lemma 10.2) implies

$$\begin{aligned} \sqrt{n}(\tau -\tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau }))\xrightarrow {d}{{\,\mathrm{N}\,}}(0,\tau (1-\tau )). \end{aligned}$$

Applying the Delta-method, i.e. a Taylor expansion of \(\tilde{S}_{\alpha }\) around \(\tilde{r}_{\alpha ,\tau }\), yields

$$\begin{aligned} \tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau })\approx \tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau })-\tilde{f}_{\alpha }(\bar{r}_{\alpha ,\tau })(\tilde{r}_{n,\alpha ,\tau }-\tilde{r}_{\alpha ,\tau }), \end{aligned}$$

for \(\bar{r}_{\alpha ,\tau }\) on the line segment between \(\tilde{r}_{n,\alpha ,\tau }\) and \(\tilde{r}_{\alpha ,\tau }\). The last step is to apply Slutsky’s theorem and the fact that \(\bar{r}_{\alpha ,\tau }\rightarrow \tilde{r}_{\alpha ,\tau }\) since \(\tilde{r}_{n,\alpha ,\tau }\rightarrow \tilde{r}_{\alpha ,\tau }\), such that we obtain

$$\begin{aligned} \sqrt{n}( \tilde{r}_{n,\alpha ,\tau }- \tilde{r}_{\alpha ,\tau })=\sqrt{n}\frac{\tau -\tilde{S}_{\alpha }(\tilde{r}_{n,\alpha ,\tau })}{\tilde{f}_{\alpha }(\tilde{r}_{\alpha ,\tau })}\xrightarrow {d}{{\,\mathrm{N}\,}}\left( 0,\frac{\tau (1-\tau )}{(\tilde{f}_{\alpha }(\tilde{r}_{\alpha ,\tau }))^2}\right) . \end{aligned}$$

\(\square \)
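The full statement of Theorem 13 can also be checked numerically (a hedged illustration, not part of the proof). Under an assumed exponential distance distribution, \(\tilde{S}_{\alpha }(r)=e^{-r}\) and the density at the true inverse equals \(\tau \), so the limiting variance \(\tau (1-\tau )/\tilde{f}_{\alpha }^2\) simplifies to \((1-\tau )/\tau \):

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau, reps = 2000, 0.25, 2000
r_true = -np.log(tau)   # S^{-1}(tau) for the assumed S(r) = exp(-r)
f_true = tau            # density at r_true: exp(-r_true) = tau

def emp_inv(r, tau):
    """Empirical inverse survivor function via order statistics."""
    r_sorted = np.sort(r)
    surv = 1 - np.arange(1, len(r) + 1) / len(r)
    return r_sorted[np.argmax(surv <= tau)]

# replicate sqrt(n)(r_n - r_true) and compare its variance with the
# asymptotic variance tau(1 - tau)/f^2 = (1 - tau)/tau = 3 here
z = np.array([np.sqrt(n) * (emp_inv(rng.exponential(size=n), tau) - r_true)
              for _ in range(reps)])
assert abs(z.var() - tau * (1 - tau) / f_true**2) < 0.5
```

The simulated variance of the rescaled estimator stays close to the theoretical value \(3\), consistent with the asymptotic normality just derived.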


Cite this article

Klein, N., Kneib, T. Directional bivariate quantiles: a robust approach based on the cumulative distribution function. AStA Adv Stat Anal 104, 225–260 (2020). https://doi.org/10.1007/s10182-019-00355-3
