Statistical inference for heavy tailed series with extremal independence

Abstract

We consider stationary time series \(\{X_{j}, j\in \mathbb{Z}\}\) whose finite-dimensional distributions are regularly varying with extremal independence. We assume that for each h ≥ 1, conditionally on \(X_{0}\) exceeding a threshold tending to infinity, the distribution of \(X_{h}\), suitably normalized, converges weakly to a non-degenerate distribution. In this paper we consider the estimation of the normalization and of the limiting distribution.
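
For concreteness, the convergence assumed in the abstract can be written schematically as follows; the scaling functions \(b_{h}\) and limit distributions \(\Psi_{h}\) are placeholder notation introduced here for illustration, not taken from the paper:

$$ \lim_{u\to\infty} \mathbb P\left( X_{h}/b_{h}(u) \le x \mid X_{0} > u \right) = \Psi_{h}(x) , \qquad h \ge 1 , $$

for every continuity point \(x\) of the non-degenerate distribution function \(\Psi_{h}\); in this notation, the paper studies the estimation of \(b_{h}\) and \(\Psi_{h}\).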

References

  • Bradley, R.C.: Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv. 2, 107–144 (2005)

  • Das, B., Resnick, S.I.: Conditioning on an extreme component: model consistency with regular variation on cones. Bernoulli 17(1), 226–252 (2011)

  • Davis, R.A., Mikosch, T.: Point process convergence of stochastic volatility processes with application to sample autocorrelation. J. Appl. Probab. 38A, 93–104 (2001). Probability, statistics and seismology

  • Drees, H.: Weighted approximations of tail processes for β-mixing random variables. Ann. Appl. Probab. 10(4), 1274–1301 (2000)

  • Drees, H.: Tail empirical processes under mixing conditions. In: Empirical Process Techniques for Dependent Data, pp 325–342. Birkhäuser, Boston (2002)

  • Drees, H.: Bootstrapping empirical processes of cluster functionals with application to extremograms. arXiv:1511.00420

  • Drees, H., Janßen, A.: Conditional extreme value models: Fallacies and pitfalls. Extremes 20(4), 777–805 (2017)

  • Drees, H., Rootzén, H.: Limit theorems for empirical processes of cluster functionals. Ann. Statist. 38(4), 2145–2186 (2010)

  • Drees, H., Segers, J., Warchoł, M.: Statistics for tail processes of Markov chains. Extremes 18(3), 369–402 (2015)

  • Giné, E., Nickl, R.: Mathematical Foundations of Infinite-Dimensional Statistical Models. Cambridge University Press, New York (2016)

  • Heffernan, J.E., Resnick, S.I.: Limit laws for random vectors with an extreme component. Ann. Appl. Probab. 17(2), 537–571 (2007)

  • Hult, H., Lindskog, F.: Regular variation for measures on metric spaces. Publ. Inst. Math. (Beograd) (N.S.) 80(94), 121–140 (2006)

  • Janßen, A., Drees, H.: A stochastic volatility model with flexible extremal dependence structure. Bernoulli 22(3), 1448–1490 (2016)

  • Kallenberg, O.: Random Measures. Theory and Applications, Volume 77 of Probability Theory and Stochastic Modelling. Springer, New York (2017)

  • Kosorok, M.: Introduction to Empirical Processes and Semiparametric Inference. Springer Series in Statistics. Springer, New York (2008)

  • Kulik, R., Soulier, P.: The tail empirical process for long memory stochastic volatility sequences. Stoch. Process. Appl. 121(1), 109–134 (2011)

  • Kulik, R., Soulier, P.: Heavy tailed time series with extremal independence. Extremes 18, 273–299 (2015)

  • Kulik, R., Soulier, P., Wintenberger, O.: The tail empirical process of regularly varying functions of geometrically ergodic Markov chains. Stoch. Process. Appl. 129, 4209–4238 (2019)

  • Lindskog, F., Resnick, S.I., Roy, J.: Regularly varying measures on metric spaces: hidden regular variation and hidden jumps. Probab. Surv. 11, 270–314 (2014)

  • Meyn, S., Tweedie, R.L.: Markov Chains and Stochastic Stability. Cambridge University Press (2009)

  • Mikosch, T., Rezapour, M.: Stochastic volatility models with possible extremal clustering. Bernoulli 19(5A), 1688–1713 (2013)

  • Rootzén, H.: Weak convergence of the tail empirical process for dependent sequences. Stoch. Process. Appl. 119(2), 468–490 (2009)

  • van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer, New York (1996)

  • Vervaat, W.: Functional central limit theorems for processes with positive drift and their inverses. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 23, 245–253 (1972)

Acknowledgements

The research of Clemonell Bilayi-Biakana and Rafal Kulik was supported by the NSERC grant 210532-170699-2001. The research of Philippe Soulier was partially supported by the LABEX MME-DII.

Author information

Correspondence to Rafał Kulik.

Appendix: Convergence in \(\ell ^{\infty }\)

Theorem A.1

Giné and Nickl (2016, Theorem 3.7.23) Let \(\{\mathbb{Z}_{n}, n \in \mathbb{N}\}\) be a sequence of processes with values in \(\ell^{\infty}(\mathcal{F})\), indexed by a semi-metric space \(\mathcal{F}\). Then the following statements are equivalent.

  (i) The finite dimensional distributions of the processes \(\mathbb{Z}_{n}\) converge in law and there exists a pseudometric \(\rho\) on \(\mathcal{F}\) such that \((\mathcal{F}, \rho)\) is totally bounded and, for all \(\epsilon > 0\),

$$ \lim_{\delta\to0} \limsup_{n\to\infty} \mathbb P^{*}\left( \sup_{\rho(f,g)<\delta} |\mathbb{Z}_{n}(f)-\mathbb{Z}_{n}(g)| > \epsilon\right) = 0 . $$
(A.1)

  (ii) There exists a process \(\mathbb{Z}\) whose law is a tight Borel probability measure on \(\ell^{\infty}(\mathcal{F})\) and such that \(\mathbb{Z}_{n} \overset{w}{\Longrightarrow} \mathbb{Z}\) in \(\ell^{\infty}(\mathcal{F})\).

Moreover, if (i) holds, then the process \(\mathbb{Z}\) in (ii) has a version with bounded, uniformly continuous paths with respect to \(\rho\).

The following result provides a sufficient condition for (A.1) above. Let \(\{Z_{n,i}, 1 \leq i \leq m_{n}\}\), \(n \geq 1\), be a triangular array of rowwise i.i.d. processes indexed by a class \(\mathcal{F}\). Define the random pseudometric \(d_{n}\) on \(\mathcal{F}\) by

$$ d_{n}^{2}(f,g) = \sum_{i=1}^{m_{n}} \{Z_{n,i}(f)-Z_{n,i}(g)\}^{2} , \qquad f,g\in \mathcal{F} . $$

Let \(N(\epsilon,\mathcal{F},d_{n})\) be the minimum number of balls of radius \(\epsilon\) in the pseudometric \(d_{n}\) needed to cover \(\mathcal{F}\). Let \(\mathbb{Z}_{n}\) be the empirical process defined by

$$ \mathbb{Z}_{n}(f) = \sum_{i=1}^{m_{n}} \{Z_{n,i}(f)-\mathbb E[Z_{n,i}(f)]\} , \qquad f \in \mathcal{F} . $$

Define \(\|H\|_{\mathcal {F}} = \sup _{f\in \mathcal {F}} |H(f)|\) for a functional H on \(\mathcal {F}\).
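
As a side remark not in the source: since the summands within each row are independent, the random pseudometric controls the second moment of the increments of the centered process \(\mathbb{Z}_{n}\),

$$ \mathbb E\left[\{\mathbb{Z}_{n}(f)-\mathbb{Z}_{n}(g)\}^{2}\right] = \sum_{i=1}^{m_{n}} \operatorname{Var}\left(Z_{n,i}(f)-Z_{n,i}(g)\right) \leq \mathbb E[d_{n}^{2}(f,g)] , $$

which is the role played by condition (A.3) below.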

Theorem A.2 (Adapted from van der Vaart and Wellner 1996, Theorem 2.11.1)

Assume that the stochastic processes \(\{Z_{n,i}(f), f\in \mathcal{F}\}\), \(i=1,\dots,m_{n}\), \(n \geq 1\), are separable and that the pseudometric space \(\mathcal{F}\) is totally bounded. Assume moreover that for all \(\zeta > 0\),

$$ \lim_{n\to\infty} m_{n} \mathbb E[\|Z_{n,1}\|^{2}_{\mathcal{F}}\mathbbm{1}{\left\{\|Z_{n,1}\|_{\mathcal{F}}>\zeta\right\}}] = 0 . $$
(A.2)

Assume that for every sequence \(\{\delta_{n}\}\) which decreases to zero,

$$ \lim_{n\to\infty} \sup_{f,g\in\mathcal{F},\, \rho(f,g)\leq\delta_{n}} \mathbb E[d_{n}^{2}(f,g)] = 0 . $$
(A.3)

Assume finally that there exists a measurable majorant \(N^{*}(\epsilon,\mathcal{F},d_{n})\) of \(N(\epsilon,\mathcal{F},d_{n})\) such that for every sequence \(\{\delta_{n}\}\) which decreases to zero,

$$ \int_{0}^{\delta_{n}} \sqrt{\log N^{*}(\epsilon,\mathcal{F},d_{n})}\, \mathrm{d} \epsilon \stackrel{\mathbb P}{\longrightarrow} 0 . $$
(A.4)

Then \(\mathbb{Z}_{n}\) is asymptotically \(\rho\)-equicontinuous, i.e. Eq. A.1 holds.
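
As an aside not taken from the source, a Lyapunov-type moment condition suffices for (A.2): for any \(\zeta > 0\) and \(\delta > 0\), on the event \(\{\|Z_{n,1}\|_{\mathcal{F}} > \zeta\}\) one has \(\|Z_{n,1}\|^{2}_{\mathcal{F}} \leq \zeta^{-\delta} \|Z_{n,1}\|^{2+\delta}_{\mathcal{F}}\), hence

$$ m_{n} \mathbb E\left[\|Z_{n,1}\|^{2}_{\mathcal{F}}\mathbbm{1}{\left\{\|Z_{n,1}\|_{\mathcal{F}}>\zeta\right\}}\right] \leq \zeta^{-\delta}\, m_{n} \mathbb E\left[\|Z_{n,1}\|^{2+\delta}_{\mathcal{F}}\right] , $$

so (A.2) holds whenever \(m_{n} \mathbb E[\|Z_{n,1}\|^{2+\delta}_{\mathcal{F}}] \to 0\) for some \(\delta > 0\).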

Condition (A.4) holds if \(\mathcal{F}\) is linearly ordered. We now provide a sufficient condition for (A.4) when the class \(\mathcal{F}\) is approximable by subclasses of finite VC-dimension, possibly increasing at a certain rate. We consider a triangular array of independent random elements \(\mathbb{X}_{n,i}\), \(1 \leq i \leq m_{n}\), in a measurable space \((\mathsf{E},\mathcal{E})\), a class \(\widehat{\mathcal{G}}\) of functions on \(\mathsf{E}\), and assume that \(Z_{n,i}(f) = v_{n}^{-1/2} f(\mathbb{X}_{n,i})\) for \(f \in \widehat{\mathcal{G}}\). The random semi-metric \(d_{n}\) on \(\widehat{\mathcal{G}}\) is then given by

$$ d_{n}^{2}(f,g) = v_{n}^{-1}\sum_{i=1}^{m_{n}} \{f(\mathbb{X}_{n,i}) - g(\mathbb{X}_{n,i})\}^{2} . $$
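
With this choice of \(Z_{n,i}\), this is consistent with the pseudometric defined earlier, since

$$ \sum_{i=1}^{m_{n}} \{Z_{n,i}(f)-Z_{n,i}(g)\}^{2} = v_{n}^{-1}\sum_{i=1}^{m_{n}} \{f(\mathbb{X}_{n,i}) - g(\mathbb{X}_{n,i})\}^{2} = d_{n}^{2}(f,g) . $$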

The following result formalizes ideas that can be found in the proof of Drees and Rootzén (2010, Example 4.4).

Lemma A.3

Let \(\{\widehat{\mathcal{G}}_{k}, k\geq 1\}\) be a non-decreasing sequence of subclasses of \(\widehat{\mathcal{G}}\). Assume that:

  (i) The envelope function \(G\) of \(\widehat{\mathcal{G}}\) is measurable.

  (ii) There exists a constant \(\text{cst}_{\mathcal{G}}\) such that for every \(k\in \mathbb{N}^{*}\), \(\widehat{\mathcal{G}}_{k}\) is a VC-subgraph class with index \(\text{VC}(\widehat{\mathcal{G}}_{k})\) not greater than \(\text{cst}_{\mathcal{G}}\, k\).

  (iii) For every \(k \geq 1\), there exists a measurable function \(G_{k}\) such that for all \(f\in \widehat{\mathcal{G}}\), there exists \(f_{k}\in \widehat{\mathcal{G}}_{k}\) such that \(|f-f_{k}|\leq G_{k}\).

  (iv) There exists \(\theta \in (0,\infty)\) such that

$$ \frac{4}{v_{n}}\sum_{i=1}^{m_{n}} G^{2}(\mathbb{X}_{n,i}) \overset{\mathbb{P}}{\longrightarrow} \theta . $$
(A.5)

  (v) There exists \(\varsigma \in (0, 1)\) such that

$$ \frac{1}{v_{n}}\sum_{i=1}^{m_{n}} G_{k}^{2}(\mathbb{X}_{n,i}) = O_{P}(k^{-1/\varsigma}) . $$
(A.6)

Then Eq. A.4 holds.

Proof

Define the (random) probability measure \(Q_{n}\) on \(\mathsf{E}\) by

$$ Q_{n} = \frac{1}{m_{n}} \sum_{i=1}^{m_{n}} \delta_{\mathbb{X}_{n,i}} . $$

Define the \(L^{2}(Q_{n})\) distance on \(\widehat{\mathcal{G}}\) by

$$ d_{L^{2}(Q_{n})}^{2} (f,g) = m_{n}^{-1}\sum_{i=1}^{m_{n}} \{f(\mathbb{X}_{n,i})-g(\mathbb{X}_{n,i})\}^{2} . $$

For \(\epsilon > 0\), define

$$ K_{n}(\epsilon) = \min\left\{k\in\mathbb{N}: \frac{4}{v_{n}}\sum_{i=1}^{m_{n}} G_{k}^{2}(\mathbb{X}_{n,i}) < \frac{\epsilon^{2}}{2} \right\} . $$

Then, for \(f,g\in \widehat{\mathcal{G}}\) and \(k > K_{n}(\epsilon)\), we have by Assumption (iii) of our lemma,

$$ d_{n}^{2}(f,g) \leq 2 d_{n}^{2}(f_{k},g_{k}) + \frac{4}{v_{n}}\sum_{i=1}^{m_{n}} G_{k}^{2}(\mathbb{X}_{n,i}) \leq \frac{2m_{n}}{v_{n}} d_{L^{2}(Q_{n})}^{2}(f_{k},g_{k}) + \frac{\epsilon^{2}}{2} . $$

This bound implies that

$$ N(\widehat{\mathcal{G}},d_{n},\epsilon) \leq N\left( \widehat{\mathcal{G}}_{K_{n}(\epsilon)},d_{L^{2}(Q_{n})},\epsilon \left( \frac{v_{n}}{4m_{n}}\right)^{1/2}\right) + 1 . $$
(A.7)

Set

$$ \zeta_{n}^{2} = \frac{4m_{n}}{v_{n}} Q_{n}(G^{2}) = \frac{4}{v_{n}}\sum_{i=1}^{m_{n}} G^{2}(\mathbb{X}_{n,i}) $$

and

$$ J_{k}(\epsilon) = \text{VC}(\widehat{\mathcal{G}}_{k})\, (16\mathrm{e})^{\text{VC}(\widehat{\mathcal{G}}_{k})}\, \epsilon^{-2(\text{VC}(\widehat{\mathcal{G}}_{k})-1)} . $$

Since \(\widehat {\mathcal {G}}_{k}\subset \widehat {\mathcal {G}}\), the envelope function of \(\widehat {\mathcal {G}}_{k}\) is smaller than G. Thus, by Theorem 2.6.7 in van der Vaart and Wellner (1996) we obtain for each k

$$ N\left( \widehat{\mathcal{G}}_{k},d_{L^{2}(Q_{n})},\epsilon \left( \frac{v_{n}}{4m_{n}}\right)^{1/2}\right) \leq \text{cst}\, J_{k}\left( \epsilon\sqrt{\frac{v_{n}}{4m_{n} Q_{n}(G^{2})}}\right) \leq \text{cst}\, J_{k}(\epsilon \zeta_{n}^{-1/2}) \leq \text{cst}\, J_{k}(\epsilon) (\zeta_{n}\vee1)^{\text{VC}(\widehat{\mathcal{G}}_{k})} . $$
(A.8)

Combining (A.7) and (A.8) yields

$$ \log N(\widehat{\mathcal{G}},d_{n},\epsilon) \leq \text{cst} + \log J_{K_{n}(\epsilon)}(\epsilon)+\text{VC}(\widehat{\mathcal{G}}_{K_{n}(\epsilon)}) \log(\zeta_{n}\vee1) . $$

By Eq. A.5, \(\zeta_{n}^{2}\) converges in probability to \(\theta\), so \(\log(\zeta_{n}\vee 1)= O_{P}(1)\); thus it suffices to prove that for all \(\zeta > 0\),

$$ \lim_{\delta\to0} \limsup_{n\to\infty} \mathbb P\left( \int_{0}^{\delta}\sqrt{\log J_{K_{n}(\epsilon)}(\epsilon)}\, \mathrm{d} \epsilon > \zeta\right) = 0 , $$
(A.9a)
$$ \lim_{\delta\to0} \limsup_{n\to\infty} \mathbb P\left( \int_{0}^{\delta}\sqrt{\text{VC}(\widehat{\mathcal{G}}_{K_{n}(\epsilon)})}\, \mathrm{d} \epsilon > \zeta\right) = 0 . $$
(A.9b)

By assumption (A.6), \(K_{n}(\epsilon) = O_{P}(\epsilon^{-2\varsigma})\). Thus, for \(\xi \in (0, 1)\), \(A_{0}\) can be chosen such that \(K_{n}(\epsilon) \leq A_{0}\epsilon^{-2\varsigma}\) with probability greater than \(1 - \xi\). Since \(\varsigma \in (0, 1)\) and \(\text{VC}(\widehat{\mathcal{G}}_{k})=O(k)\) by Assumption (ii), this yields, with probability tending to 1,

$$ \int_{0}^{\delta}\sqrt{\text{VC}(\widehat{\mathcal{G}}_{K_{n}(\epsilon)})}\, \mathrm{d} \epsilon \leq \text{cst} \int_{0}^{\delta} \epsilon^{-\varsigma}\, \mathrm{d} \epsilon = O(\delta^{1-\varsigma}) . $$
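
The last equality is the elementary integral computation, finite precisely because \(\varsigma < 1\):

$$ \int_{0}^{\delta} \epsilon^{-\varsigma}\, \mathrm{d} \epsilon = \frac{\delta^{1-\varsigma}}{1-\varsigma} . $$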

Similarly, with probability tending to 1, we have

$$ \int_{0}^{\delta} \sqrt{\log J_{K_{n}(\epsilon)}(\epsilon)}\, \mathrm{d}\epsilon = O(\delta^{1-\varsigma}) . $$

This proves (A.9a,b). □

Cite this article

Bilayi-Biakana, C., Kulik, R. & Soulier, P. Statistical inference for heavy tailed series with extremal independence. Extremes 23, 1–33 (2020). https://doi.org/10.1007/s10687-019-00365-z
