Abstract
The definition of multivariate quantiles has gained considerable attention in recent years as a tool for understanding the structure of a multivariate data cloud. Due to the lack of a natural ordering for multivariate data, many approaches consider either geometric generalisations of univariate quantiles or data depths that measure the centrality of data points. Both approaches provide a centre-outward ordering of data points but no longer possess a relation to the cumulative distribution function of the data-generating process and the corresponding tail probabilities. We propose a new notion of bivariate quantiles that is based on inverting the bivariate cumulative distribution function and therefore provides a directional measure of extremeness, as defined by the contour lines of the cumulative distribution function, which form the quantile curves of interest. To obtain unique solutions, we transform the bivariate data to the unit square. This allows us to introduce directions along which the quantiles are unique. Choosing a suitable transformation also ensures that the resulting quantiles are equivariant under monotonically increasing transformations. We study the resulting notion of bivariate quantiles in detail, both with respect to computation based on linear programming and with respect to theoretical properties including asymptotic behaviour and robustness. It turns out that our approach is especially useful for data situations that deviate from the elliptical shape typical of ‘normal-like’ bivariate distributions. Moreover, the bivariate quantiles inherit the robustness of univariate quantiles even in the presence of extreme outliers.
References
Abdous, B., Theodorescu, R.: Note on the spatial quantile of a random vector. Stat. Probab. Lett. 13, 333–336 (1992)
Andrews, D.W.K.: Empirical process methods in econometrics. In: Engle, R.F., McFadden, D.L. (eds.) Handbook of Econometrics, vol. 4, pp. 2247–2294. Elsevier Science B.V., North-Holland, New York (1994)
Belzunce, F., Castaño, A., Olvera-Cervantes, A., Suárez-Llorens, A.: Quantile curves and dependence structure for bivariate distributions. Comput. Stat. Data Anal. 51, 5112–5129 (2007)
Berkelaar, M., et al.: lpSolve: Interface to ‘lp_solve’ v. 5.5 to Solve Linear/Integer Programs. R package version 5.6.13 (2015)
Carlier, G., Chernozhukov, V., Galichon, A.: Vector quantile regression: an optimal transport approach. Ann. Stat. 44(3), 1165–1192 (2016)
Chakraborty, B.: On affine equivariant multivariate quantiles. Ann. Inst. Stat. Math. 53, 380–403 (2001)
Chaudhuri, P.: On a geometric notion of quantiles for multivariate data. J. Am. Stat. Assoc. 91, 862–872 (1996)
Chen, L.-A., Welsh, A.H.: Distribution-function-based bivariate quantiles. J. Multivar. Anal. 83, 208–231 (2002)
Chernozhukov, V., Galichon, A., Hallin, M., Henry, M.: Monge–Kantorovich depth, quantiles, ranks, and signs. Ann. Stat. 45(1), 223–256 (2017)
Einmahl, J.H.J., Mason, D.M.: Generalized quantile processes. Ann. Stat. 20, 1062–1078 (1992)
Ferguson, T.S.: Mathematical Statistics: A Decision Theoretic Approach. Academic Press, New York (1967)
Fernandez-Ponce, J.M., Suarez-Llorens, A.: Quantile curves and dependence structure for bivariate distributions. Comput. Stat. Data Anal. 17, 236–256 (2007)
Genest, C., Segers, J.: On the covariance of the asymptotic empirical copula process. J. Multivar. Anal. 101, 1837–1845 (2010)
Genest, C., Ghoudi, K., Rivest, L.-P.: A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika 82, 543–552 (1995)
Guggisberg, M.: A Bayesian Approach to Multiple-Output Quantile Regression, Technical report (2016)
Hallin, M.: On distribution and quantile functions, ranks and signs in \(r^d\), ECARES working paper 2017-34 (2017)
Hallin, M., Paindaveine, D., S̆iman, M.: Multivariate quantiles and multiple-output regression quantiles: from \({L}_1\) optimization to halfspace depth. Ann. Stat. 38, 635–669 (2010)
Joe, H.: Dependence Modeling with Copulas. Chapman & Hall/CRC Monographs on Statistics & Applied Probability. Taylor & Francis, London (2014)
Klein, N.: bivquant: Estimation of Bivariate Quantiles. R package version 0.1 (2019)
Klein, N., Kneib, T.: Simultaneous inference in structured additive conditional copula regression models: a unifying Bayesian approach. Stat. Comput. 26, 841–860 (2016)
Koenker, R.: Quantile Regression. Economic Society Monographs. Cambridge University Press, New York (2005)
Koltchinskii, V.I.: M-estimation, convexity and quantiles. Ann. Stat. 25, 435–477 (1997)
Koshevoy, G., Mosler, K.: Zonoid trimming for multivariate distributions. Ann. Stat. 25, 1998–2017 (1997)
Liu, R.Y.: On a notion of data depth based on random simplices. Ann. Stat. 18, 405–414 (1990)
Liu, R.Y., Parelius, J.M., Singh, K.: Multivariate analysis by data depth: descriptive statistics, graphics and inference (with discussion and a rejoinder by Liu and Singh). Ann. Stat. 27, 783–858 (1999)
Mosler, K.: Multivariate Dispersion, Central Regions and Depth: The Lift Zonoid Approach. Springer, New York (2002)
Oja, H.: Descriptive statistics for multivariate distributions. Stat. Probab. Lett. 1, 327–332 (1983)
Pokotylo, O., Mozharovskyi, P., Dyckerhoff, R.: ddalpha: Depth-Based Classification and Calculation of Data Depth. R package version 1.3.1 (2015)
Serfling, R.: Approximation Theorems of Mathematical Statistics. Series in Probability and Mathematical Statistics. Wiley, New York (1980)
Serfling, R.: Quantile functions for multivariate analysis: approaches and applications. Stat. Neerl. 56, 214–232 (2002)
Small, C.G.: A survey of multidimensional medians. Int. Stat. Rev. 58, 263–277 (1990)
Tukey, J.: Mathematics and the picturing of data. In: Proceedings of the 1975 International Congress of Mathematicians, vol. 2, pp. 523–531 (1975)
Zuo, Y., Serfling, R.: General notions of statistical depth function. Ann. Stat. 28, 461–482 (2000)
Thomas Kneib received financial support from the German Research Foundation (DFG) within research project KN 922/4-2. Both authors are grateful for the comments provided by two anonymous referees, which, in particular, prompted us to incorporate changes in the theoretical results when transforming with the empirical instead of the true marginal cumulative distribution functions.
A Further proofs
A.1 Proof of Theorem 2
Proof
For fixed \(b\in \mathbb {R}\), we define the expected loss as a function of \(v\in \mathbb {R}\) by
Clearly, the bivariate quantile curves from (1) can be obtained as
which intuitively means that we describe \(\mathbb {R}^2\) by straight lines with slope one and intercepts b. With the definition
the expected loss for fixed \(b\in \mathbb {R}\) is given by
Our strategy is now to show that for all \(b\in \mathbb {R}\) the expected loss \(\mathbb {E}(\rho _{b,\tau }(\varvec{Y},v))\) is uniquely minimised at \(q\in \mathbb {R}\) and fulfils the condition \(\mathbb {P}(Y_1\le q,Y_2\le q+b)=\tau \). We therefore investigate the first derivative of \(\mathbb {E}(\rho _{b,\tau }(\varvec{Y},v))\) with respect to v. The derivative is obtained by applying the Leibniz rule for parameter integrals twice.
In summary, we have
Let us first assume that \(\mathbb {P}(Y_1\le q,Y_2\le q+b)=\tau \) holds. It then follows from Eq. (13) that
In addition,
holds since we assumed \(f(y_1,y_2)>0\). Consequently, \((q,q+b)'\) is a minimiser of \(\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \) and in particular of \(\mathbb {E}\left( \rho _{\tau }\left( \varvec{y},\varvec{q}\right) \right) \).
Conversely, if \((q,q+b)'\) is a minimiser of \(\mathbb {E}\left( \rho _{b,\tau }\left( \varvec{y},v\right) \right) \), a vanishing first derivative
is required, which is equivalent to
\(\square \)
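The quantile condition \(\mathbb {P}(Y_1\le q,Y_2\le q+b)=\tau \) characterising the minimiser can be illustrated numerically. The following sketch is not part of the paper's implementation; it assumes a standard bivariate normal with correlation 0.5 (a hypothetical example) and solves the condition for \(q\) by one-dimensional root finding with SciPy:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

# hypothetical example: standard bivariate normal with correlation 0.5
rho, tau, b = 0.5, 0.25, 0.3
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def quantile_condition(v):
    # P(Y1 <= v, Y2 <= v + b) - tau, increasing in v
    return mvn.cdf([v, v + b]) - tau

# root of the quantile condition along the line with slope one and intercept b
q = brentq(quantile_condition, -10.0, 10.0)
```

By construction, \((q, q+b)'\) then satisfies the quantile condition up to the numerical integration tolerance of the bivariate normal CDF.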
A.2 Proof of Theorem 6
Proof
Recall first that \({\tilde{\varvec{Y}}}=({\tilde{Y}}_1,{\tilde{Y}}_2)'=(F_{1}(Y_1),F_{2}(Y_2))'\), and let \(\tilde{f}({\tilde{y}}_1,{\tilde{y}}_2)>0\) be the density of \({\tilde{\varvec{Y}}}\). From Sect. 3.1, we furthermore have that \(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r})\) can be decomposed into
Accordingly, the expected loss is
Now, in analogy to the strategy of the proof of Theorem 2, we apply the Leibniz rule for integrals twice, then add and subtract terms with identical limits of integration, and after some further basic calculations obtain
Adding together the different integrals yields
A necessary condition for \(\tilde{r}\) to yield a minimum of the expected loss is that (14) is zero, i.e. that \(\tfrac{\partial }{\partial \tilde{r}}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))|_{\tilde{r}=\tilde{r}_{\alpha ,\tau }}=0\). This implies that the quantile condition \(\mathbb {P}({\tilde{Y}}_1\le {\tilde{q}}_1,{\tilde{Y}}_2\le {\tilde{q}}_2)=\tau \) is fulfilled if and only if the first derivative of the expected loss at \(\tilde{r}\) is zero. It remains to show that \(\tfrac{\partial ^2}{\partial \tilde{r}^2}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},\tilde{r}))|_{\tilde{r}=\tilde{r}_{\alpha ,\tau }}>0\) holds.
For \(\alpha \in (0,\pi /2)\), this follows from
$$\begin{aligned} \frac{\partial ^2}{\partial \tilde{r}^2}\mathbb {E}(\rho _{\alpha ,\tau }({\tilde{\varvec{y}}},{\tilde{\varvec{q}}}))= & {} \displaystyle \int _{0}^{1-\tilde{r}\cos (\alpha )}\sin (\alpha )\tilde{f}({\tilde{y}}_1,1-\tilde{r}\sin (\alpha ))\mathrm {d}{\tilde{y}}_1\nonumber \\&+\,\displaystyle \int _{0}^{1-\tilde{r}\sin (\alpha )}\cos (\alpha )\tilde{f}(1-\tilde{r}\cos (\alpha ),{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2 \end{aligned}$$(15)since \(\tilde{f}(\cdot ,\cdot )>0\) and \(\cos (\alpha )>0\), \(\sin (\alpha )>0\).
In case of \(\alpha =0\), we have \(\sin (\alpha )=0\), \(\cos (\alpha )=1\) such that the second integral in (15) is zero while the first one is \(\int _0^1 \tilde{f}(1-\tilde{r},{\tilde{y}}_2)\mathrm {d}{\tilde{y}}_2=\tilde{f}_1(1-\tilde{r})>0\).
In case of \(\alpha =\pi /2\), we have \(\cos (\alpha )=0\), \(\sin (\alpha )=1\) such that the first integral in (15) is zero while the second one is \(\int _0^1 \tilde{f}({\tilde{y}}_1,1-\tilde{r})\mathrm {d}{\tilde{y}}_1=\tilde{f}_2(1-\tilde{r})>0\). \(\square \)
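On the unit square, the quantile condition of Theorem 6 reads \(\tilde{S}_{\alpha }(\tilde{r})=\mathbb {P}({\tilde{Y}}_1\le 1-\tilde{r}\cos (\alpha ),{\tilde{Y}}_2\le 1-\tilde{r}\sin (\alpha ))=\tau \). As an illustration under an assumption that is not the paper's general setting, namely the independence copula, \(\tilde{S}_{\alpha }\) has the closed form \((1-\tilde{r}\cos \alpha )(1-\tilde{r}\sin \alpha )\), and the directional quantile can be obtained by one-dimensional root finding:

```python
import numpy as np
from scipy.optimize import brentq

# assumed setup: independence copula, direction alpha, level tau
alpha, tau = np.pi / 3, 0.4
c, s = np.cos(alpha), np.sin(alpha)

def S(r):
    # closed-form survivor-type function under independence
    return (1 - r * c) * (1 - r * s)

r_max = min(1 / c, 1 / s)  # largest r keeping the quantile point in [0,1]^2
# S is continuous and decreasing on [0, r_max] with S(0)=1, S(r_max)=0,
# so the root is unique, mirroring the uniqueness argument of the proof
r_tau = brentq(lambda r: S(r) - tau, 0.0, r_max)
q1, q2 = 1 - r_tau * c, 1 - r_tau * s  # directional quantile point
```

Since the independence copula satisfies \(C(q_1,q_2)=q_1q_2\), the computed point fulfils \(q_1q_2=\tau \) up to solver tolerance.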
A.3 Proof of Lemma 10
In the following, and in order to prove the asymptotic results of Sect. 4, we treat the observed data as i.i.d. replications of \(\varvec{Y}\) defined on the probability space \((\Omega ,\mathcal F, \mathbb {P})=(\mathbb {R}^2,\mathcal B(\mathbb {R}^2),F)\). Consequently, the transformed data are i.i.d. replicates of \(\tilde{\varvec{Y}}=({\tilde{Y}}_1,{\tilde{Y}}_2)'=(F_1(Y_1), F_2(Y_2))'\) equipped with the probability space \((\tilde{\Omega }, \tilde{\mathcal F}, \tilde{\mathbb {P}})=([0,1]^2,\mathcal B([0,1]^2),\tilde{F})\) and CDF \(\tilde{F}({\tilde{y}}_1, {\tilde{y}}_2)=\mathbb {P}(Y_1\le F_1^{-1}({\tilde{y}}_1),Y_2\le F_2^{-1}({\tilde{y}}_2))\). In addition, we introduce
as a random variable on the probability space \((\tilde{\Omega }_{\alpha }, \tilde{\mathcal F}_{\alpha }, \tilde{\mathbb {P}}_{\alpha })=(D_{\tilde{r}}(\alpha ),\mathcal B(D_{\tilde{r}}(\alpha )),\tilde{F}_{\alpha })\).
Proof
On 1. The claim follows directly from the i.i.d. property of \(\varvec{Y}_1,\ldots ,\varvec{Y}_n\).
On 2. We introduce the random variables \(Z_i=\mathbb {1}_{\lbrace ({\tilde{Y}}_{i1}\le 1-\tilde{r}\cos (\alpha ),{\tilde{Y}}_{i2}\le 1-\tilde{r}\sin (\alpha ))\rbrace }\), which are i.i.d. since \(\tilde{\varvec{Y}}_1,\tilde{\varvec{Y}}_2,\ldots \) are assumed to be i.i.d. We then have
$$\begin{aligned} \mathbb {P}(Z_i=1)= & {} \mathbb {P}({\tilde{Y}}_{i1}\le 1-\tilde{r}\cos (\alpha ),{\tilde{Y}}_{i2}\le 1-\tilde{r}\sin (\alpha ))=\tilde{S}_{\alpha }(\tilde{r})\\ \mathbb {P}(Z_i=0)= & {} 1-\tilde{S}_{\alpha }(\tilde{r}) \end{aligned}$$and hence \(\mathbb {E}(Z_i)=\tilde{S}_{\alpha }(\tilde{r})\). With the strong law of large numbers, we immediately find
$$\begin{aligned} \tilde{S}_{n,\alpha }(\tilde{r})=\frac{1}{n}\sum _{i=1}^n Z_i\xrightarrow {a.s.}\mathbb {E}(Z_i)=\mathbb {E}(Z_1)=\tilde{S}_{\alpha }(\tilde{r}). \end{aligned}$$
On 3. From 2., we have that \(\mathbb {E}(Z_i)=\tilde{S}_{\alpha }(\tilde{r})\) and \({{\,\mathrm{Var}\,}}(Z_i)=\tilde{S}_{\alpha }(\tilde{r})(1-\tilde{S}_{\alpha }(\tilde{r}))\). Applying the central limit theorem implies
$$\begin{aligned} \sqrt{n}\frac{\tilde{S}_{n,\alpha }(\tilde{r})-\tilde{S}_{\alpha }(\tilde{r})}{\sqrt{\tilde{S}_{\alpha }(\tilde{r})(1-\tilde{S}_{\alpha }(\tilde{r}))}}{=}\frac{\tfrac{1}{n}\sum _{i=1}^n Z_i-\mathbb {E}(Z_1)}{\sqrt{{{\,\mathrm{Var}\,}}(Z_1)}}{=}\frac{\sum _{i=1}^n Z_i-n\mathbb {E}(Z_1)}{\sqrt{n{{\,\mathrm{Var}\,}}(Z_1)}}\xrightarrow {d}{{\,\mathrm{N}\,}}(0,1). \end{aligned}$$
On 4. Define
$$\begin{aligned} \tilde{D}_n:=\sup _{\tilde{r}\in D_{\tilde{r}}(\alpha )} |\tilde{S}_{n,\alpha }(\tilde{r})-\tilde{S}_{\alpha }(\tilde{r})|. \end{aligned}$$(i)
\(\tilde{S}_{\alpha }\) is continuous and monotonically decreasing in \(\tilde{r}\). Hence, we can find a decomposition \(\tilde{r}_{\min }=z_0<z_1<z_2<\cdots<z_{m-1}<z_m=\tilde{r}_{\max }\) such that \(\tilde{S}_{\alpha }(z_0)=1,\tilde{S}_{\alpha }(z_1)=\tfrac{m-1}{m},\tilde{S}_{\alpha }(z_2)=\tfrac{m-2}{m},\ldots ,\tilde{S}_{\alpha }(z_{m-1})=\tfrac{1}{m},\tilde{S}_{\alpha }(z_m)=0\), where \(\tilde{r}_{\min }\) is the smallest and \(\tilde{r}_{\max }\) the largest element of \(D_{\tilde{r}}(\alpha )\).
(ii)
We use this decomposition to obtain approximations of \(\tilde{S}_{n,\alpha }(z)-\tilde{S}_{\alpha }( z)\) for arbitrary \(z\in D_{\tilde{r}}(\alpha )\). Let k be such that \(z\in [z_{k},z_{k+1})\). Then,
$$\begin{aligned} \tilde{S}_{n,\alpha }( z)-\tilde{S}_{\alpha }( z)&\le \tilde{S}_{n,\alpha }( z_{k})-\tilde{S}_{\alpha }( z_{k+1})=\tilde{S}_{n,\alpha }( z_{k})-\left( \tilde{S}_{\alpha }( z_{k})-\frac{1}{m}\right) \\ \tilde{S}_{n,\alpha }( z)-\tilde{S}_{\alpha }( z)&\ge \tilde{S}_{n,\alpha }( z_{k+1})-\tilde{S}_{\alpha }( z_{k})=\tilde{S}_{n,\alpha }( z_{k+1})-\left( \tilde{S}_{\alpha }( z_{k+1})+\frac{1}{m}\right) \end{aligned}$$due to the monotonicity of \(\tilde{S}_{\alpha }\) and \(\tilde{S}_{n,\alpha }\).
(iii)
For \(m\in \mathbb {N}\), \(k=0,1,\ldots ,m\), define
$$\begin{aligned} A_{m,k}:=\left\{ \tilde{\omega }_{\alpha }\in \tilde{\Omega }_{\alpha }{:}\,\lim _{n\rightarrow \infty }\tilde{S}_{n,\alpha }( z_{k};\tilde{\omega }_{\alpha })=\tilde{S}_{\alpha }( z_{k})\right\} . \end{aligned}$$Due to the almost sure convergence of \(\tilde{S}_{n,\alpha }\) from 2., we have
$$\begin{aligned} \mathbb {P}[A_{m,k}]=1\quad \forall m\in \mathbb {N},\quad k=0,1,\ldots ,m. \end{aligned}$$(iv)
Define \(A_{m}=\cap _{k=0}^m A_{m,k}\). As a finite intersection of almost sure events, it satisfies \(\mathbb {P}[A_{m}]=1\) for all \(m\in \mathbb {N}\). Define \(A=\cap _{m\in \mathbb {N}} A_{m}\). As a countable intersection of almost sure events, it satisfies \(\mathbb {P}[A]=1\).
(v)
Consider now \(\tilde{\omega }_{\alpha }\in A_{m}\). By definition of \(A_{m,k}\), there exists an \(n(\tilde{\omega }_{\alpha },m)\in \mathbb {N}\) such that
$$\begin{aligned} |\tilde{S}_{n,\alpha }( z_{k};\tilde{\omega }_{\alpha })-\tilde{S}_{\alpha }( z_{k})|<\frac{1}{m}\quad \forall n>n(\tilde{\omega }_{\alpha },m),\, k=0,1,\ldots ,m. \end{aligned}$$Combining this with the bounds from (ii) yields$$\begin{aligned} |\tilde{S}_{n,\alpha }(z)-\tilde{S}_{\alpha }(z)|<\frac{2}{m}\quad \forall \tilde{\omega }_{\alpha }\in A_{m},\, n>n(\tilde{\omega }_{\alpha },m),\, z\in D_{\tilde{r}}(\alpha ). \end{aligned}$$Taking the supremum over \(z\), it follows that
$$\begin{aligned} \tilde{D}_n(\tilde{\omega }_{\alpha }):=\sup _{\tilde{r}\in D_{\tilde{r}}(\alpha )} |\tilde{S}_{n,\alpha }(\tilde{r};\tilde{\omega }_{\alpha })-\tilde{S}_{\alpha }(\tilde{r})|<\frac{2}{m}. \end{aligned}$$Furthermore, by the definition of A, any \(\tilde{\omega }_{\alpha }\in A\) is an element of all \(A_m\), \(m\in \mathbb {N}\). Hence, for all \(m\in \mathbb {N}\) there exists an \(n(\tilde{\omega }_{\alpha },m)\in \mathbb {N}\) such that for all \(n>n(\tilde{\omega }_{\alpha },m)\)
$$\begin{aligned} 0\le \tilde{D}_n(\tilde{\omega }_{\alpha })<\frac{2}{m}, \end{aligned}$$and consequently \(\lim _{n\rightarrow \infty }\tilde{D}_n(\tilde{\omega }_{\alpha })=0\) for all \(\tilde{\omega }_{\alpha }\in A\). Finally, we have \(\lbrace \tilde{\omega }_{\alpha }\in \tilde{\Omega }_{\alpha }{:}\,\lim _{n\rightarrow \infty }\tilde{D}_n(\tilde{\omega }_{\alpha })=0\rbrace \supseteq A\) and from (iv) that \(\mathbb {P}[A]=1\) holds, such that
$$\begin{aligned} \mathbb {P}[\left\{ \tilde{\omega }_{\alpha }\in \tilde{\Omega }_{\alpha }{:}\,\lim _{n\rightarrow \infty }\tilde{D}_n(\tilde{\omega }_{\alpha })=0\right\} ]\ge \mathbb {P}[A]=1. \end{aligned}$$\(\square \)
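The almost sure convergence in part 2 can be checked empirically. Below is a minimal Monte Carlo sketch under an assumption made purely for illustration: independent uniform marginals (the independence copula), for which \(\tilde{S}_{\alpha }(\tilde{r})=(1-\tilde{r}\cos \alpha )(1-\tilde{r}\sin \alpha )\) is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, r, n = np.pi / 4, 0.3, 200_000
# assumed data: independent uniforms on the unit square (independence copula)
y1, y2 = rng.uniform(size=n), rng.uniform(size=n)

def S_n(r, alpha, y1, y2):
    # empirical counterpart of S_alpha(r): fraction of points with
    # y1 <= 1 - r*cos(alpha) and y2 <= 1 - r*sin(alpha)
    return np.mean((y1 <= 1 - r * np.cos(alpha)) & (y2 <= 1 - r * np.sin(alpha)))

s_emp = S_n(r, alpha, y1, y2)
s_true = (1 - r * np.cos(alpha)) * (1 - r * np.sin(alpha))  # closed form
```

For this sample size, the SLLN makes the deviation between `s_emp` and `s_true` of the order of \(n^{-1/2}\approx 0.002\).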
A.4 Proof of Lemma 11
Proof
The uniqueness of \(\tilde{r}_{\alpha ,\tau }\) yields \(\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }+\varepsilon )<\tau <\tilde{S}_{\alpha }(\tilde{r}_{\alpha ,\tau }-\varepsilon )\) for any \(\varepsilon >0\). The strong consistency of \(\tilde{S}_{n,\alpha }(\tilde{r})\) furthermore ensures
which is equivalent to
Using that almost sure convergence \(\mathbb {P}(\lim _{n\rightarrow \infty }X_n=X)=1\) is equivalent to \(\lim _{n\rightarrow \infty }\mathbb {P}(|X_m-X|<\varepsilon \;\;\forall m\ge n)=1\) for every \(\varepsilon >0\), in combination with \(\mathbb {P}(A\cap B)=1-\mathbb {P}(A^{\mathsf {c}}\cup B^{\mathsf {c}})\ge 1-\mathbb {P}(A^{\mathsf {c}})-\mathbb {P}(B^{\mathsf {c}})\), implies
Due to the monotonic decrease of \(\tilde{S}_{\alpha }\), we have \(\tilde{S}_{\alpha }(\tilde{r})\le \tau \Leftrightarrow \tilde{r}\ge \tilde{S}_{\alpha }^{-1}(\tau )\) and therefore
Finally, \({\tilde{q}}_{j,n,\alpha ,\tau }\xrightarrow {a.s.}{\tilde{q}}_{j,\alpha ,\tau }\), \(j=1,2\), is a direct consequence of the continuous mapping theorem which in turn implies \({\tilde{\varvec{q}}}_{n,\alpha ,\tau }=({\tilde{q}}_{1,n,\alpha ,\tau },{\tilde{q}}_{2,n,\alpha ,\tau })'\xrightarrow {a.s.}{\tilde{\varvec{q}}}_{\alpha ,\tau }=({\tilde{q}}_{1,\alpha ,\tau },{\tilde{q}}_{2,\alpha ,\tau })'\), compare Serfling (1980, 1.P, 2.b on page 52). \(\square \)
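A plug-in estimator of \(\tilde{r}_{\alpha ,\tau }\) inverts the empirical function \(\tilde{S}_{n,\alpha }\): since \(\tilde{S}_{n,\alpha }(\tilde{r})\) is the fraction of observations with \({\tilde{R}}_i=\min ((1-{\tilde{Y}}_{i1})/\cos (\alpha ),(1-{\tilde{Y}}_{i2})/\sin (\alpha ))\ge \tilde{r}\) (the distances appearing in Lemma 14), the inversion amounts to an empirical \((1-\tau )\)-quantile of the \({\tilde{R}}_i\). A hedged sketch, again with hypothetical independence-copula data rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, tau, n = np.pi / 4, 0.5, 100_000
# assumed data: independent uniforms on the unit square
y1, y2 = rng.uniform(size=n), rng.uniform(size=n)

# directional distances R_i = min((1-y1)/cos(a), (1-y2)/sin(a))
R = np.minimum((1 - y1) / np.cos(alpha), (1 - y2) / np.sin(alpha))

# S_n,alpha(r) equals the fraction of R_i >= r, so inverting it at tau
# is the empirical (1-tau)-quantile of the R_i
r_hat = np.quantile(R, 1 - tau)
q1_hat, q2_hat = 1 - r_hat * np.cos(alpha), 1 - r_hat * np.sin(alpha)

# the estimated quantile point should cover about a fraction tau of the sample
coverage = np.mean((y1 <= q1_hat) & (y2 <= q2_hat))
```

Under independence with \(\alpha =\pi /4\), the true quantile point is \((\sqrt{\tau },\sqrt{\tau })'\), so both the coverage and the estimated coordinates can be checked directly.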
A.5 Proof of Theorem 13
Proof
For the proof of Theorem 13, we will use Lemma 10 together with the following Lemma 14.
Lemma 14
(Jump heights of \(\tilde{S}_{n,\alpha }\)) Given the general assumptions from Sect. 2.1, the ordered sample \({\tilde{R}}_{(1)}<{\tilde{R}}_{(2)}<\cdots<{\tilde{R}}_{(n-1)}<{\tilde{R}}_{(n)}\) of the distances \({\tilde{R}}_i=\min (\tfrac{1-{\tilde{Y}}_{i1}}{\cos (\alpha )},\tfrac{1-{\tilde{Y}}_{i2}}{\sin (\alpha )})\) will almost surely have no ties and therefore
From Lemma 10.3, we have that for any \(\tilde{r}\in D_{\tilde{r}}(\alpha )\) with survivor function \(\tilde{S}_{\alpha }\)
holds. Let \(\tilde{r}=\tilde{r}_{\alpha ,\tau }=\tilde{S}_{\alpha }^{-1}(\tau )\). Then, we know that
Using the property of stochastic equicontinuity for \(\tilde{S}_{n,\alpha }\) interpreted as an empirical process (for an introduction and definition of stochastic equicontinuity, see Andrews 1994), we can replace \(\tilde{r}_{\alpha ,\tau }\) by a consistent estimator \(\tilde{r}_{n,\alpha ,\tau }\) such that
holds. From Lemma (ii) in Serfling (1980, Sec. 1.1.4, p. 3) it now follows that
Since \(\tilde{f}_{\alpha }\) is continuous, the probability of observing duplicates of \({\tilde{R}}_i\) is zero. Hence, using Lemma 14
holds with probability one which (using Lemma 10.2) implies
Applying the delta method, i.e. a Taylor expansion of \(\tilde{S}_{\alpha }\) around \(\tilde{r}_{\alpha ,\tau }\), yields
for \(\bar{r}_{\alpha ,\tau }\) on the line segment between \(\tilde{r}_{n,\alpha ,\tau }\) and \(\tilde{r}_{\alpha ,\tau }\). The last step is to apply Slutsky’s theorem and the fact that \(\bar{r}_{\alpha ,\tau }\rightarrow \tilde{r}_{\alpha ,\tau }\) since \(\tilde{r}_{n,\alpha ,\tau }\rightarrow \tilde{r}_{\alpha ,\tau }\), such that we obtain
\(\square \)
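The limiting distribution can be checked by simulation. The following sketch again assumes the independence copula (an illustrative choice, not the paper's general setting) with \(\alpha =\pi /4\) and \(\tau =1/2\), where \(\tilde{r}_{\alpha ,\tau }=(1-\sqrt{\tau })/\cos (\alpha )\) and \(\tilde{S}_{\alpha }'(\tilde{r}_{\alpha ,\tau })=-2\cos (\alpha )\sqrt{\tau }\) are available in closed form; it compares the Monte Carlo standard deviation of \(\sqrt{n}(\tilde{r}_{n,\alpha ,\tau }-\tilde{r}_{\alpha ,\tau })\) with the theoretical value \(\sqrt{\tau (1-\tau )}/|\tilde{S}_{\alpha }'(\tilde{r}_{\alpha ,\tau })|\):

```python
import numpy as np

rng = np.random.default_rng(123)
alpha, tau = np.pi / 4, 0.5
c, s = np.cos(alpha), np.sin(alpha)
n, M = 2000, 500  # sample size per replication, number of replications

# closed-form quantities under the independence copula with alpha = pi/4
r_true = (1 - np.sqrt(tau)) / c
sd_true = np.sqrt(tau * (1 - tau)) / (2 * c * np.sqrt(tau))  # sqrt(tau(1-tau))/|S'|

r_hats = np.empty(M)
for m in range(M):
    y1, y2 = rng.uniform(size=n), rng.uniform(size=n)
    R = np.minimum((1 - y1) / c, (1 - y2) / s)
    r_hats[m] = np.quantile(R, 1 - tau)  # plug-in estimator of r_true

# Monte Carlo standard deviation of sqrt(n) * (r_hat - r_true)
sd_emp = np.sqrt(n) * r_hats.std()
```

With these parameters, \(sd\_true=0.5\), and the simulated standard deviation agrees up to Monte Carlo error of order \(1/\sqrt{2M}\).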
Klein, N., Kneib, T. Directional bivariate quantiles: a robust approach based on the cumulative distribution function. AStA Adv Stat Anal 104, 225–260 (2020). https://doi.org/10.1007/s10182-019-00355-3