Abstract
It is well known that the distribution of extreme values of a strictly stationary sequence differs from that of an independent and identically distributed sequence in that extremal clustering may occur. Here we consider non-stationary but identically distributed sequences of random variables subject to suitable long range dependence restrictions. We find that the limiting distribution of appropriately normalized sample maxima depends on a parameter that measures the average extremal clustering of the sequence. Based on this new representation we derive the asymptotic distribution of the time between consecutive extreme observations, and we construct moment and likelihood based estimators for measures of extremal clustering. We specialize our results to random sequences with periodic dependence structure.
1 Introduction
Extreme value theory for strictly stationary sequences has been extensively studied, initiated in the works of Watson (1954), Berman (1964), and Loynes (1965), and continued by Leadbetter (1974, 1983) and O’Brien (1987) amongst others. One of the key findings in this line of research is that unlike in independent and identically distributed sequences where extreme values tend to occur in isolation, stationary sequences possess an intrinsic potential for clustering of extremes, i.e., several successive or close extreme values may be observed. Understanding the extremal clustering characteristics of a stochastic process is critical in many applications where a cluster of extreme values may have serious consequences. For example, if a sequence consists of daily temperatures at some fixed location then a cluster of extremes may correspond to a heatwave.
The extent to which extremal clustering may occur is naturally measured, for strictly stationary sequences, by a parameter known as the extremal index. Let \(\{X_{n}\}_{n=1}^{\infty }\) be a sequence of random variables with common marginal distribution function F, and let \(\bar {F}=1-F\) and \(M_{n} = \max \limits \{X_{1},\ldots ,X_{n}\}\). Also, let \(\{x_{n}\}_{n=1}^{\infty }\) be a sequence of real numbers that we may informally think of as thresholds or levels. In the special case that Xi and Xj are independent, i≠j, then a necessary and sufficient condition for \(\mathbb {P}(M_{n} \leq x_{n})\) to converge to a limit in (0,1) as \(n\to \infty \) is that \(n\bar {F}(x_{n}) \to \tau > 0,\) in which case \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau }\) (Leadbetter et al. 1983, Theorem 1.5.1). More generally, if \(\{X_{n}\}_{n=1}^{\infty }\) is a strictly stationary sequence, then \(n\bar {F}(x_{n}) \to \tau \) is not sufficient to ensure the convergence of \(\mathbb {P}(M_{n} \leq x_{n})\). However, in most cases of practical interest, provided that a suitable long range dependence restriction is satisfied, such as condition D of Leadbetter (1974), one has \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\theta \tau }\) where 𝜃 ∈ [0,1] is the extremal index. Leadbetter (1983) showed that exceedances of the level xn occur in clusters with the limiting mean cluster size being equal to 𝜃− 1, and Hsing (1987) showed that distinct clusters may be considered independent in the limit.
Another characterization of 𝜃 that links it to the extremal clustering properties of a strictly stationary sequence can be found in O’Brien (1987). Defining \(M_{j,k} = \max \limits \{X_{i} : j+1\leq i \leq k \}\), it was shown that the distribution function of Mn satisfies

\( \mathbb {P}(M_{n} \leq x_{n}) = F^{n\theta _{n}}(x_{n}) + o(1), \)   (1)

where

\( \theta _{n} = \mathbb {P}(M_{1,p_{n}} \leq x_{n} \mid X_{1} > x_{n}) \)   (2)

for some pn = o(n), and provided the limit exists, 𝜃n → 𝜃 as \(n \to \infty \). This result illustrates that smaller values of 𝜃 indicate a greater degree of extremal clustering, since the conditional probability in Eq. 2 is small when an exceedance of a large threshold is likely to be soon followed by another exceedance.
Early attempts at estimating 𝜃 were based on associating 𝜃− 1 with the limiting mean cluster size. Different methods for identifying clusters gave rise to different estimators, well known examples being the runs and blocks estimators (Smith and Weissman 1994). For the runs estimator, a cluster is identified as being initialized when a large threshold is exceeded and ends when a fixed number, known as the run length, of non-exceedances occur. The extremal index is then estimated by the ratio of the number of identified clusters to the total number of exceedances. A difficulty that arises when using this estimator is its sensitivity to the choice of run length (Hsing 1991).
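As a concrete sketch of the runs estimator described above (the function name and interface are ours, and the cluster convention, a new cluster starting when more than `run_length` time points separate consecutive exceedances, i.e., at least `run_length` intervening non-exceedances, is one common choice):

```python
import numpy as np

def runs_estimator(x, u, run_length):
    """Runs estimator of the extremal index.

    A new cluster starts at an exceedance of u preceded by at least
    `run_length` consecutive non-exceedances.  The estimate is the ratio
    of the number of identified clusters to the number of exceedances.
    """
    x = np.asarray(x)
    exceed = np.flatnonzero(x > u)          # indices of exceedances of u
    if exceed.size == 0:
        return np.nan                       # no exceedances: estimator undefined
    # a gap of g between exceedance indices means g - 1 non-exceedances,
    # so a new cluster starts whenever the gap exceeds run_length
    gaps = np.diff(exceed)
    n_clusters = 1 + np.sum(gaps > run_length)
    return n_clusters / exceed.size
```

Its sensitivity to `run_length` is exactly the difficulty noted above: raising the run length merges clusters and lowers the estimate.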
The problem of cluster identification was studied by Ferro and Segers (2003) who considered the distribution of the time between two exceedances of a large threshold. They found that the limiting distribution of appropriately normalized interexceedance times converges to a distribution that is indexed by 𝜃. In particular, for a given threshold \(u \in \mathbb {R}\), they define the random variable \(T(u) = \min \limits \{n \geq 1 : X_{n+1} > u \mid X_{1} > u \}\), and found that as \(n\to \infty ,\) \(\bar {F}(x_{n})T(x_{n})\) converges in distribution to a mixture of a point mass at zero and an exponential distribution with mean 𝜃− 1. Thus, by computing theoretical moments of this limiting distribution and comparing them with their empirical counterparts, they construct their so-called intervals estimator.
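The interexceedance times underlying the intervals estimator can be computed directly; the sketch below is ours, with the empirical proportion of threshold exceedances standing in for \(\bar{F}(u)\):

```python
import numpy as np

def interexceedance_times(x, u):
    """Times between consecutive exceedances of the threshold u."""
    exceed = np.flatnonzero(np.asarray(x) > u)
    return np.diff(exceed)      # one time per exceedance except the last

def normalized_times(x, u):
    """Interexceedance times scaled by the empirical exceedance rate,
    the sample analogue of F_bar(u) * T(u)."""
    x = np.asarray(x)
    fbar = np.mean(x > u)       # empirical estimate of F_bar(u)
    return fbar * interexceedance_times(x, u)
```

In the stationary limit these normalized times behave like the mixture described above: a fraction 1 − 𝜃 degenerate at zero and the rest approximately exponential.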
Motivated by the fact that many real world processes are non-stationary, in this paper we investigate the effect of non-stationarity on extremal clustering. Previous statistical works that consider extremal clustering in non-stationary sequences include Süveges (2007), who used the likelihood function introduced by Ferro and Segers (2003) for the extremal index together with smoothing methods to capture non-stationarity in a time series of temperature measurements. In a similar application, Coles et al. (1994) used a Markov model together with simulation techniques to estimate the extremal index within different months.
An early work that developed extreme value theory for non-stationary sequences with a common marginal distribution is Hüsler (1983), which focused on the asymptotic distribution of the sample maxima but did not consider extremal clustering. Hüsler (1986) considered the more general case where the margins may differ and also discussed the difficulty of defining the extremal index for general non-stationary sequences.
Here, we consider a sequence of random variables \(\{X_{n}\}_{n=1}^{\infty }\) with common marginal distribution function F, but do not assume stationarity in either the weak or strict sense. As we assume common margins, non-stationarity may arise through changes in the dependence structure. We show, under assumptions similar to O’Brien (1987), that

\( \mathbb {P}(M_{n} \leq x_{n}) = F^{n\gamma _{n}}(x_{n}) + o(1), \)   (3)

where

\( \gamma _{n} = \frac {1}{n}{\sum }_{i=1}^{n} \mathbb {P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i} > x_{n}). \)   (4)
Thus, we find that the limiting distribution of the sample maximum at large thresholds is characterized by a parameter \(\gamma = \lim _{n \to \infty } \gamma _{n}\), provided the limit exists, which by analogy with Eq. 2, may be regarded as the average of local extremal indices. In this paper we develop methods for estimating these local extremal indices by adapting the methods of Ferro and Segers (2003) for the extremal index to our non-stationary setting. In the special case that the sequence is stationary, so that all terms in the summation (4) are equal, the formula for γn reduces to 𝜃n in Eq. 2.
The structure of the paper is as follows. Section 2 defines the notation and assumed mixing condition used throughout the paper and states the main theoretical results regarding the asymptotic distribution of the sample maxima and normalized interexceedance times. Section 3 discusses approaches to parameter estimation using the result from Section 2 on the distribution of the interexceedance times. Section 4 considers the estimation problem for two simple non-stationary Markov sequences with periodic dependence structures and Section 5 gives the proofs of the main theoretical results.
2 Theoretical results
2.1 Notation, definitions and preliminary results
Throughout the paper, when not explicitly stated otherwise, all limits should be interpreted as “as \(n\to \infty \)”. We assume that all random variables in the sequence \(\{X_{n}\}_{n=1}^{\infty }\) have common marginal distribution F with upper endpoint \(x_{F} = \sup \{x\in \mathbb {R} : F(x) < 1 \}\), though we do not assume stationarity. In addition to the definitions for Mn and Mj,k given in Section 1, we define \(M(A) = \max \limits \{ X_{i} : i\in A \}\) where A is an arbitrary set of positive integers, and write |A| for the number of elements in A. We also refer to a set of consecutive integers as an interval. If I1 and I2 are two intervals, we say that I1 and I2 are separated by q if min(I2) - max(I1) = q + 1 or min(I1) - max(I2) = q + 1, i.e., there are q intermediate values between I1 and I2. The set {1,2,3,…} is denoted by \(\mathbb {N}\). Equality in distribution of two random variables X and Y is denoted by \( X \overset {D}{=} Y.\)
We assume that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) satisfies the asymptotic independence of maxima (AIM) mixing condition of O’Brien (1987) which restricts long range dependence.
Definition 1
The sequence \(\{X_{n}\}_{n=1}^{\infty }\) is said to satisfy the asymptotic independence of maxima condition relative to the sequence xn of real numbers, abbreviated to “\(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn)”, if there exists a sequence qn of positive integers with qn = o(n) such that for any two intervals I1 = {i1,…,ij} and I2 = {ij + qn + 1,…,ij + qn + k} separated by qn, we have

\( \alpha _{n} = \max \left | \mathbb {P}\{M(I_{1} \cup I_{2}) \leq x_{n}\} - \mathbb {P}\{M(I_{1}) \leq x_{n}\}\mathbb {P}\{M(I_{2}) \leq x_{n}\} \right | \to 0, \)   (5)

where the maximum is taken over all positive integers i1, ij and k such that |I1|≥ qn, |I2|≥ qn and ij + qn + k ≤ n.
Definition 1 states a slightly weaker condition than the widely used D(xn) condition (Leadbetter 1983) in that only certain intervals I1 and I2 need to be considered in Eq. 5 rather than arbitrary sets of integers, so that all examples in the literature of sequences satisfying D(xn) also satisfy AIM(xn). For example, stationary Gaussian sequences with autocorrelation function ρn satisfying Berman’s condition, \(\rho _{n}\log n\to 0\) (Berman 1964), satisfy AIM(xn) for any sequence xn such that \(n\bar {F}(x_{n})\) is bounded and any qn = o(n) (Leadbetter et al. 1983, Lemma 4.4.1). The analogous result for non-stationary Gaussian sequences is given in Hüsler (1983), where Berman’s condition is replaced by \(r_{n}\log n \to 0\) with \(r_{n} = \sup \{|\rho (i,j)| : |i-j| \geq n\}\) and ρ(i,j) the correlation between Xi and Xj.
O’Brien (1987) showed that if \(\{X_{n}\}_{n=1}^{\infty }\) is a stationary positive Harris Markov sequence with separable state space S and \(f:S\to \mathbb {R}\) is a measurable function then the sequence Yn = f(Xn) satisfies AIM(xn) for any xn and qn = o(n) with \(q_{n} \to \infty \).
We note that Definition 1 states a property of the dependence structure of the sequence \(\{X_{n}\}_{n=1}^{\infty }\), with the specific marginal distributions playing essentially no role. In particular, if \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) and \(g:\mathbb {R}\to \mathbb {R}\) is a monotone increasing function then Yn = g(Xn) satisfies AIM(g(xn)) with the same qn.
The assumption that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) ensures the approximate independence of the block maxima of two sufficiently separated blocks. Lemma 1 below provides an upper bound for the degree of dependence of k block maxima for suitably separated blocks and will be useful in Section 2.2 when the limiting behaviour of \(\mathbb {P}(M_{n} \leq x_{n})\) is considered.
Lemma 1
Let \(\{X_{n}\}_{n=1}^{\infty }\) satisfy AIM(xn) and let I1,I2,…,Ik be distinct subintervals of {1,2,…,n} where k ≥ 2 and |Ii|≥ qn, 1 ≤ i ≤ k. Suppose that Ii and Ii+ 1 are separated by qn for 1 ≤ i ≤ k − 1. Then

\( \left | \mathbb {P}\left ({\bigcap }_{i=1}^{k} \{M(I_{i}) \leq x_{n}\} \right ) - {\prod }_{i=1}^{k} \mathbb {P}\{M(I_{i}) \leq x_{n}\} \right | \leq (k-1)\alpha _{n}. \)   (6)
2.2 Asymptotic distribution of M n
In this section we investigate the limiting behaviour of \(\mathbb {P}(M_{n} \leq x_{n})\), with the main result being Theorem 1. In addition to assuming that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn), we will assume that the rate of growth of the sequence xn is controlled via

\( n\bar {F}(x_{n}) \to \tau , \qquad \tau \in (0,\infty ). \)   (7)
In the case of continuous marginal distributions, Eq. 7 is immediately satisfied by xn = F− 1(1 − τ/n). More generally, Theorem 1.7.13 of Leadbetter et al. (1983) guarantees the existence of a sequence xn satisfying (7) when F is in the domain of attraction of any of the three classical extreme value distributions (Haan and Ferreira 2006, Section 1.2).
We use the standard technique of block-clipping, see for example Section 10.2.1 in Beirlant et al. (2004), to split the interval {1,2,…,n} into subintervals, or blocks, of alternating large and small lengths. Specifically, for sequences pn and qn such that qn = o(pn) and pn = o(n) we define

\( A_{i} = \{(i-1)(p_{n}+q_{n})+1, \ldots , (i-1)(p_{n}+q_{n})+p_{n}\}, \qquad A_{i}^{*} = \{(i-1)(p_{n}+q_{n})+p_{n}+1, \ldots , i(p_{n}+q_{n})\}, \)   (8)

for i = 1,2,…,rn, where rn = ⌊n/(pn + qn)⌋.
If we take the sequence qn appearing in the construction of the blocks Ai and \(A_{i}^{*}\) to be the same as that in Definition 1, then Lemma 1 bounds the degree of dependence of the collection of random variables \(\{M(A_{i})\}_{i=1}^{r_{n}}\), and this allows us to prove Lemma 2 below which modifies Lemma 3.1 from O’Brien (1987) to allow for non-stationarity.
Lemma 2
Let \(\{X_{n}\}_{n=1}^{\infty }\) satisfy AIM(xn) and let the sequence pn be such that

\( \frac {q_{n}}{p_{n}} + \frac {p_{n}}{n} + \frac {n\alpha _{n}}{p_{n}} \to 0. \)   (9)

Then if Eq. 7 holds, we have

\( \mathbb {P}(M_{n} \leq x_{n}) - {\prod }_{i=1}^{r_{n}} \mathbb {P}\{M(A_{i}) \leq x_{n}\} \to 0, \)   (10)

where the intervals \(\{A_{i}\}_{i=1}^{r_{n}}\) are as in Eq. 8.
Remarks 1
Equation 10 follows easily from Eq. 6 by making the identification k = rn and using Eqs. 7 and 9. Additionally, if \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) then we can always find a sequence pn such that Eq. 9 holds, for example, by taking \(p_{n} = \lfloor { \{ n \max \limits (q_{n}, n\alpha _{n})\}^{1/2} }\rfloor \). Thus the only assumption in Lemma 2 beyond common margins is that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) for a sequence xn satisfying (7).
We can now state our main theorem.
Theorem 1
Under the same assumptions as in Lemma 2, we have

\( {\prod }_{i=1}^{r_{n}} \mathbb {P}\{M(A_{i}) \leq x_{n}\} - F^{n\gamma _{n}}(x_{n}) \to 0, \)   (11)

and consequently

\( \mathbb {P}(M_{n} \leq x_{n}) - F^{n\gamma _{n}}(x_{n}) \to 0, \)   (12)

where

\( \gamma _{n} = \frac {1}{n}{\sum }_{i=1}^{n} \mathbb {P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i} > x_{n}). \)   (13)
As noted in Section 1, for independent sequences (7) implies that \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau }\). For a random sequence satisfying the conditions of Lemma 2, the following result gives a necessary and sufficient condition for the convergence of \(\mathbb {P}(M_{n} \leq x_{n}).\)
Corollary 1
Under the same assumptions as in Lemma 2, \(\mathbb {P}(M_{n} \leq x_{n})\) converges if and only if \( \lim _{n\to \infty } \gamma _{n}\) exists, where γn is as in Eq. 13, in which case \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau \gamma }\) with \(\gamma = \lim _{n\to \infty } \gamma _{n}\in [0,1].\)
Corollary 1 follows from Eq. 12 since \(n\bar {F}(x_{n}) \to \tau \) if and only if \(F^{n}(x_{n}) \to e^{-\tau }\), which is easily seen by taking logs in the latter expression and using log(1 − t) = −t + o(t) as t → 0.
A basic question regarding the constant γ appearing in Corollary 1 is whether it is independent of the particular value of τ in Eq. 7, i.e., do we obtain the same limiting value of γn regardless of the specific sequence xn and τ used in Eq. 7? We will see in Section 2.3 that for sequences with periodic dependence this is indeed the case, and Theorem 2 gives sufficient conditions for this to hold more generally.
We now turn our attention to the conditional probabilities appearing in the summation (13), which contain local information regarding the strength of extremal clustering in the sequence \(\{X_{n}\}_{n=1}^{\infty }\).
Definition 2
Under the same assumptions as in Lemma 2, let \(\{f_{n}\}_{n=1}^{\infty }\) be the sequence of functions defined on \(\mathbb {N}\) by

\( f_{n}(i) = \mathbb {P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i} > x_{n}), \qquad i \in \mathbb {N}. \)   (14)
We define the extremal clustering function of \(\{X_{n}\}_{n=1}^{\infty }\) to be the function \(\theta : \mathbb {N} \rightarrow [0, 1]\) given by

\( \theta _{i} = \theta (i) = \lim _{n\to \infty } f_{n}(i), \)   (15)
provided the limit exists.
In the special case that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) is stationary, the extremal clustering function is simply a constant function equal to the extremal index of the sequence. In the general case, if we think of the index i in Xi as denoting time, then we may regard 𝜃i as the extremal index at time i. The definition of 𝜃i entails pointwise convergence of the sequence of approximations \(\{f_{n}\}_{n=1}^{\infty }\) in Eq. 14. When there is a uniformity in this convergence and the extremal clustering function is Cesàro summable we obtain the following result.
Theorem 2
Suppose \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) with \(n\bar {F}(x_{n}) \to \tau > 0\). Assume that \(\{\theta _{i}\}_{i=1}^{\infty }\) is Cesàro summable and

\( \max _{1\leq i\leq n} | \theta _{i} - \theta _{i,n} | \to 0, \)   (16)

where 𝜃i,n = fn(i) is as in Eq. 14. Then \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau \gamma }\) where

\( \gamma = \lim _{n\to \infty } \frac {1}{n}{\sum }_{i=1}^{n} \theta _{i}. \)   (17)
Moreover, if \(\{y_{n}\}_{n=1}^{\infty }\) is a sequence of real numbers such that \(n\bar {F}(y_{n}) \to \tau ^{\prime }\) with \(\tau ^{\prime } \leq \tau \) then \(\mathbb {P}(M_{n} \leq y_{n}) \to e^{-\tau ^{\prime }\gamma }\) with γ as in Eq. 17.
As with the constant γ in Corollary 1, we may inquire as to whether the extremal clustering function is independent of the value of τ and sequence xn used in Eq. 7. Although we do not attempt to answer this in full generality, we note that, as with the conditional probability formulation of the extremal index, for most sequences that are of practical interest, the formula defining 𝜃i may be reduced to a form that makes no explicit reference to the sequences xn and pn. For example, under the additional assumption due to Smith (1992) which requires that for any xn in Theorem 1 we have
for each i, then Eq. 15 reduces to
Another common assumption for statistical applications is the D(k)(xn) condition of Chernick et al. (1991) which we define below in a slightly modified form for our non-stationary setting.
Definition 3
A sequence \(\{X_{n}\}_{n=1}^{\infty }\) as in Theorem 1 is said to satisfy the D(k)(xn) condition, where \(k\in \mathbb {N}\), if

\( n\,\mathbb {P}(X_{i} > x_{n},\, M_{i,i+k-1} \leq x_{n},\, M_{i+k-1,i+p_{n}} > x_{n}) \to 0 \)   (20)
for each \(i\in \mathbb {N}\). For the case k = 1, we define \(M_{i,i} = -\infty \).
Note that it is assumed in Definition 3 that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) in conjunction with Eq. 20. Whereas Eq. 5 limits the degree of long range dependence in the sequence, Eq. 20 is a local mixing condition ensuring that the probability of again exceeding the threshold xn in a block of pn observations, after dropping below it for k − 1 consecutive observations, falls to zero sufficiently rapidly as \(n\to \infty \). The case k = 1 implies that, in the limit, any exceedance of a high threshold occurs in isolation, and is implied in the stationary case by the D′(xn) condition of Leadbetter et al. (1983), Chapter 3. One might expect that a more natural condition in our non-stationary setting would be to replace the constant k in Eq. 20 by ki to reflect possible variations in the strength of local dependence. However, when Eq. 20 holds for some particular k, it also holds for any \(k^{\prime }\) with \(k^{\prime } > k,\) and so provided that the sequence \(\{k_{i}\}_{i=1}^{\infty }\) is bounded we may set \(k = \max \limits \{k_{i} : i\in \mathbb {N}\}\) and obtain (20) for each i. Thus the assumption of a single value of k in Definition 3 allows for variations in the strength of local dependence while restricting it from persisting too strongly to an arbitrary number of lags. If xn is a sequence as in Theorem 1 and the D(k)(xn) condition holds, then Eq. 15 reduces to

\( \theta _{i} = \lim _{n\to \infty } \mathbb {P}(M_{i,i+k-1} \leq x_{n} \mid X_{i} > x_{n}). \)   (21)
We will assume without further comment for the rest of the paper that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) has a well-defined extremal clustering function as may arise from assumptions (18) or (20).
2.3 Periodic dependence
In this section we assume that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) has a more refined structure than in the previous sections, namely that of periodic dependence, under which the results of Section 2.2 may be simplified considerably.
Definition 4
A sequence \(\{X_{n}\}_{n=1}^{\infty }\) with common marginal distributions is said to have periodic dependence if there exists \(d\in \mathbb {N}\) such that \((X_{t_{1}},\ldots ,X_{t_{k}}) \overset {D}{=} (X_{t_{1} + d},\ldots ,X_{t_{k} + d})\) for all \(t_{1},\ldots ,t_{k}\in \mathbb {N}.\) The smallest d with this property is called the fundamental period.
Whereas for a strictly stationary sequence an arbitrary shift in time leaves the finite-dimensional distributions unchanged, for a sequence with periodic dependence only time shifts that are a multiple of the fundamental period leave finite-dimensional distributions unchanged. In particular, \(M_{a,a+b} \overset {D}{=} M_{c,c+b}\) when a ≡ c (mod d). Such sequences often mimic the dependence structure of certain environmental time series where we might expect a fundamental period of one year.
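One simple way to generate a sequence with common margins and periodic dependence (this construction and the function name are ours, not one of the models from Section 4) is a Gaussian Markov chain whose lag-one correlation cycles with period d; the recursion preserves the N(0,1) margin at every time point:

```python
import numpy as np

def periodic_gaussian_ar1(n, rho, seed=0):
    """Markov sequence with standard normal margins and periodic dependence.

    rho is a list of lag-one correlations of length d (the period).
    X_1 ~ N(0,1) and X_t = rho_t X_{t-1} + sqrt(1 - rho_t^2) eps_t, so
    Var(X_t) = rho_t**2 + (1 - rho_t**2) = 1 at every t, i.e. the N(0,1)
    marginal is preserved while the dependence repeats with period d.
    """
    rng = np.random.default_rng(seed)
    d = len(rho)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        r = rho[t % d]      # coefficient cycles with period d
        x[t] = r * x[t - 1] + np.sqrt(1 - r ** 2) * rng.standard_normal()
    return x
```

With, say, rho = [0.9, 0.2] the dependence alternates between strong and weak. Note that, as remarked above for Gaussian sequences satisfying Berman-type conditions, such examples have no asymptotic clustering (𝜃i = 1), so this mainly illustrates the periodic dependence structure itself.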
The following result concerning the convergence of \(\mathbb {P}(M_{n} \leq x_{n})\) shows that Theorem 3.7.1 of Leadbetter et al. (1983) for stationary sequences also holds for non-stationary sequences with periodic dependence.
Theorem 3
Let \(\{X_{n}\}_{n=1}^{\infty }\) have periodic dependence and satisfy the conditions of Lemma 2, with xn satisfying (7) for some τ > 0. Suppose that \(y_{n} = y_{n}(\tau ^{\prime })\) is a sequence of real numbers defined for each \(\tau ^{\prime }\) with \(0 < \tau ^{\prime } \leq \tau \) so that \(n\bar {F}(y_{n}) \to \tau ^{\prime }\). Then there exist constants γ and \(\gamma ^{\prime }\) with \(0 \leq \gamma \leq \gamma ^{\prime } \leq 1\) such that

\( e^{-\tau ^{\prime }\gamma ^{\prime }} \leq \liminf _{n\to \infty } \mathbb {P}\{M_{n} \leq y_{n}(\tau ^{\prime })\} \leq \limsup _{n\to \infty } \mathbb {P}\{M_{n} \leq y_{n}(\tau ^{\prime })\} \leq e^{-\tau ^{\prime }\gamma } \)
for all \(0 < \tau ^{\prime } \leq \tau \). Hence if \(\mathbb {P}\{M_{n} \leq y_{n}(\tau ^{\prime })\}\) converges for some \(\tau ^{\prime }\) with \(0 < \tau ^{\prime } \leq \tau \), then \(\gamma = \gamma ^{\prime }\) and \(\mathbb {P}\{M_{n} \leq y_{n}(\tau ^{\prime })\} \to e^{-\tau ^{\prime }\gamma }\) for all such \(\tau ^{\prime }\).
Although Theorem 3 makes no reference to the extremal clustering function, when \(\mathbb {P}(M_{n} \leq x_{n})\) converges, the constant γ in Theorem 3 is identified by Corollary 1 as \(\gamma = \lim _{n\to \infty }\gamma _{n}\) with γn as in Eq. 13. Due to periodicity we obtain the simplified formula \(\gamma = d^{-1}{\sum }_{i=1}^{d}\theta _{i},\) and the extremal clustering function is determined by the d values \(\{\theta _{i}\}_{i=1}^{d}\) which repeat cyclically. Moreover, for sequences with periodic dependence, the convergence statement (16) can be strengthened to uniform convergence since \(\sup _{i\in \mathbb {N}}| \theta _{i} - \theta _{i,n} | = \max \limits _{1\leq i \leq d}| \theta _{i} - \theta _{i,n} |. \)
The following result is an immediate consequence of Theorem 3.
Corollary 2
Let \(\{X_{n}\}_{n=1}^{\infty }\) have periodic dependence with common marginal distribution function F. For each τ > 0, let xn(τ) be a sequence such that \(n\bar {F}(x_{n}(\tau )) \to \tau \) and suppose that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn(τ)) for each such τ. If \(\mathbb {P}\{M_{n} \leq x_{n}(\tau )\}\) converges for a single τ > 0 then it converges for all τ > 0, and in particular \(\mathbb {P}\{M_{n} \leq x_{n}(\tau )\} \to e^{-\tau \gamma }\) for some γ ∈ [0,1].
2.4 Interexceedance times
Ferro and Segers (2003) provided a method for estimating the extremal index of a stationary sequence without the need for identifying independent clusters of extremes. This was achieved by considering the distribution of the time between two exceedances of a threshold u, i.e.,

\( T(u) = \min \limits \{n \geq 1 : X_{n+1} > u \mid X_{1} > u \}, \)   (22)
as u approaches xF. In particular, it was shown that the normalized interexceedance time \(\bar {F}(x_{n})T(x_{n})\) converges in distribution as \(n\to \infty \) to a mixture of a point mass at zero, with probability 1 − 𝜃, and an exponential random variable with mean 𝜃− 1, with probability 𝜃. The mixture arises from the fact that the interexceedance times can be classified into two categories: within cluster and between cluster times. The mass at zero stems from the fact that the within cluster times, which tend to be small relative to the between cluster times, are dominated by the factor \(\bar {F}(x_{n})\).
In the stationary case, conditioning on the event X1 > u in Eq. 22 may be replaced with Xi > u, and Xn+ 1 replaced by Xn+i, for any \(i\in \mathbb {N},\) without affecting the distribution of T(u). In the non-stationary case we consider, for each \(i \in \mathbb {N}\) and threshold u, the random variable Ti(u) defined by

\( T_{i}(u) = \min \limits \{n \geq 1 : X_{i+n} > u \mid X_{i} > u \}, \)   (23)
whose distribution in general depends on i. We find that the distribution of \(\bar {F}(x_{n})T_{i}(x_{n})\) converges as \(n\to \infty \) to a mixture of a mass at zero, with probability 1 − 𝜃i, and an exponential random variable with mean γ− 1, with probability 𝜃i. As in Ferro and Segers (2003), a slightly stronger mixing condition is required to derive this result than was needed for Theorem 1. We denote by \(\mathcal {F}_{j_{1},j_{2}}(u)\) the σ-algebra generated by the events {Xi > u : j1 ≤ i ≤ j2}, \(j_{1},j_{2}\in \mathbb {N}\), and we define the mixing coefficients

\( \alpha ^{*}_{n,q}(u) = \max _{1\leq l\leq n-q} \sup \left | \mathbb {P}(E_{2} \mid E_{1}) - \mathbb {P}(E_{2}) \right |, \)   (24)
where the supremum is over all \(E_{1} \in \mathcal {F}_{1,l}(u)\) with \(\mathbb {P}(E_{1})>0\) and \(E_{2}\in \mathcal {F}_{l+q,n}(u).\) We will assume the existence of a sequence qn = o(n) such that \(\alpha ^{*}_{cn,q_{n}}(x_{n}) \to 0\) for all c > 0. This implies that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) with the same choice of qn, and so we may find a sequence pn so that (9) is satisfied. We define 𝜃i,n as in Eq. 14 and assume a form of convergence slightly stronger than Eq. 16 but weaker than the uniform convergence \(\sup _{i\in \mathbb {N}}| \theta _{i} -\theta _{i,n}| \to 0,\) namely \(\max _{1\leq i \leq cn}| \theta _{i} -\theta _{i,n}| \to 0\) for all c > 0.
The limiting distribution of the normalized interexceedance times is given in Theorem 4.
Theorem 4
Let \(\{X_{n}\}_{n=1}^{\infty }\) be a sequence of random variables with common marginal distribution F and \(\{x_{n}\}_{n=1}^{\infty }\) a sequence of real numbers such that \(n\bar {F}(x_{n}) \to \tau > 0\). Suppose that there is a sequence of positive integers qn = o(n) such that \(\alpha ^{*}_{cn,q_{n}}(x_{n}) \to 0\) and \(\max \limits _{1\leq i \leq cn}| \theta _{i} -\theta _{i,n}| \to 0\) for all c > 0. Then, if \(\{\theta _{i}\}_{i=1}^{\infty }\) is Cesàro summable we have, for each fixed \(i\in \mathbb {N}\) and t > 0,

\( \mathbb {P}\{\bar {F}(x_{n})T_{i}(x_{n}) > t\} \to \theta _{i}e^{-\gamma t}. \)   (25)
3 Estimation with a focus on periodic sequences
In this section we consider moment and maximum likelihood estimators for 𝜃i and γ based on the limiting distribution of normalized interexceedance times given in Theorem 4. We first show that the intervals estimator of Ferro and Segers (2003) may be used to estimate 𝜃i and then consider likelihood based estimation along the lines of Süveges (2007). For simplicity, we focus our discussion on the case of periodic dependence as in Definition 4. Such an assumption reduces estimation of the extremal clustering function to estimating the vector 𝜃 = (𝜃1,…,𝜃d) with \(\gamma = d^{-1}{\sum }_{i=1}^{d}\theta _{i}\) where d is the fundamental period which, for simplicity, we assume to be known a priori. This structural assumption is important for the moment based estimators of Section 3.1, where one needs replications of interexceedance times in order to use the estimators, but it can easily be relaxed for likelihood based inference.
3.1 Moment based estimators
Theorem 4 implies that the first two moments of \(\bar {F}(u)T_{i}(u)\) satisfy \(\mathbb {E}\{\bar {F}(u)T_{i}(u) \} = \theta _{i} / \gamma + o(1) \) and \(\mathbb {E}[\{ \bar {F}(u)T_{i}(u) \}^{2} ] = 2\theta _{i} / \gamma ^{2} + o(1)\) as u → xF. Assuming the threshold is chosen to be suitably large so that the o(1) terms can be neglected, these two equations can be solved with respect to the unknown parameters to give

\( \theta _{i} = \frac {2[\mathbb {E}\{\bar {F}(u)T_{i}(u)\}]^{2}}{\mathbb {E}[\{\bar {F}(u)T_{i}(u)\}^{2}]}, \qquad \gamma = \frac {2\,\mathbb {E}\{\bar {F}(u)T_{i}(u)\}}{\mathbb {E}[\{\bar {F}(u)T_{i}(u)\}^{2}]}. \)   (26)
A complication that arises in the non-stationary setting is that, since 𝜃i is defined via a conditional probability given the event Xi > u, if Xi does not exceed the threshold u then there are no interexceedance times with which to estimate 𝜃i. This problem does not arise in the stationary case, where every interexceedance time may be used to estimate the extremal index 𝜃.
In order to estimate 𝜃i then, it is natural to assume that the extremal clustering function is structured in some way, e.g., periodic or piecewise constant. Making such an assumption allows us to use multiple interexceedance times to estimate 𝜃i. Focusing on the case where \(\{X_{n}\}_{n=1}^{\infty }\) has periodic dependence with fundamental period d, all exceedances of the threshold u occurring at points that are separated by a multiple of d give rise to interexceedance times that may be used to estimate the same value of the extremal clustering function. More precisely, suppose that X1,…,Xn is a sample of size n of the process with exceedance times E = {1 ≤ i ≤ n : Xi > u}, and corresponding interexceedance times \(I = \{T_{i}(u): i\in E \backslash \{\max \limits (E)\} \},\) with Ti(u) as in Eq. 23. The set of interexceedance times that may be used for estimating 𝜃i is the subset \(I_{i} \subseteq I\) defined by Ii = {Tj(u) ∈ I : j ≡ i (mod d)}. If |Ii| = Ni, then we may relabel the elements of Ii as \(I_{i} = \{T_{i}^{(j)} \}_{j=1}^{N_{i}}\) where now the subscript remains fixed. Making further, more refined assumptions regarding the nature of the periodicity of the process under consideration may give rise to different sets Ii. For example, in an environmental time series setting it may be reasonable to assume that the extremal clustering function is piecewise constant within months or seasons, so that all interexceedance times that correspond to exceedances within the same calendar month or season belong to the same Ii.
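The phase-wise grouping just described can be sketched as follows (a minimal illustration with 0-based time indexing; the function name and interface are ours):

```python
import numpy as np

def phase_interexceedance_times(x, u, d):
    """Split interexceedance times by the phase (mod d) of the initial
    exceedance.  Returns a dict mapping phase i in {0, ..., d-1} to the
    set I_i of interexceedance times whose initial exceedance occurs at
    a time congruent to i modulo d (times indexed from 0 here)."""
    exceed = np.flatnonzero(np.asarray(x) > u)
    times = np.diff(exceed)          # one time per exceedance except the last
    groups = {i: [] for i in range(d)}
    for start, t in zip(exceed[:-1], times):
        groups[start % d].append(int(t))
    return groups
```

For month- or season-wise grouping one would replace `start % d` by a map from time index to month or season, as suggested above.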
Equation 26 suggests the estimator

\( \hat {\theta }_{i} = \frac {2\left \{{\sum }_{j=1}^{N_{i}} T_{i}^{(j)}\right \}^{2}}{N_{i}{\sum }_{j=1}^{N_{i}} \left (T_{i}^{(j)}\right )^{2}}, \)   (27)
whose bias we now investigate. From Eq. 25 we have that for \(n \in \mathbb {N}\)

\( \mathbb {P}\{T_{i}(x_{n}) > n\} \approx \theta _{i}F^{\gamma n}(x_{n}), \)

which motivates consideration of the positive integer valued random variable T defined by

\( \mathbb {P}(T > n) = \theta _{i}p^{\gamma n}, \qquad n \in \mathbb {N}, \)
where p ∈ (0,1) and 𝜃i,γ ∈ (0,1] and we may identify p with F(xn). In a similar manner to Ferro and Segers (2003), we find that \(\mathbb {E}(T) = 1 + \theta _{i} p^{\gamma } (1 - p^{\gamma } )^{-1} \) and \(\mathbb {E}(T^{2}) = 1 + \theta _{i} p^{\gamma } (1 - p^{\gamma } )^{-1} + 2 \theta _{i} p^{\gamma } (1 - p^{\gamma })^{-2}\), so that upon simplification we find that

\( \frac {2\{\mathbb {E}(T)\}^{2}}{\mathbb {E}(T^{2})} = \frac {2\{1 + \theta _{i}p^{\gamma }(1-p^{\gamma })^{-1}\}^{2}}{1 + \theta _{i}p^{\gamma }(1-p^{\gamma })^{-1} + 2\theta _{i}p^{\gamma }(1-p^{\gamma })^{-2}}. \)   (28)

A Taylor expansion of the right hand side of Eq. 28 around p = 1 gives

\( \theta _{i} + \gamma (2 - 3\theta _{i}/2)(1-p) + O\{(1-p)^{2}\}, \)
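The two moments of T stated above can be checked numerically. The sketch below assumes, consistent with those formulas, that T has survival function \(\mathbb{P}(T > n) = \theta_{i}p^{\gamma n}\) for n ≥ 1 with \(\mathbb{P}(T > 0) = 1\), and uses the identities \(\mathbb{E}(T) = \sum_{n\geq 0}\mathbb{P}(T > n)\) and \(\mathbb{E}(T^{2}) = \sum_{n\geq 0}(2n+1)\mathbb{P}(T > n)\) for positive integer valued T:

```python
# Numerical check of the stated moments of T, assuming the survival
# function P(T > n) = theta * p**(gamma * n) for n >= 1 (our reading
# of the discrete model above), with P(T > 0) = 1.
theta, gamma, p = 0.6, 0.8, 0.95

N = 20000                      # truncation point; p**(gamma*N) is negligible
surv = [1.0] + [theta * p ** (gamma * n) for n in range(1, N)]
ET = sum(surv)                                     # E(T)  = sum P(T > n)
ET2 = sum((2 * n + 1) * s for n, s in enumerate(surv))  # E(T^2)

a = p ** gamma / (1 - p ** gamma)
ET_closed = 1 + theta * a
ET2_closed = 1 + theta * a + 2 * theta * p ** gamma / (1 - p ** gamma) ** 2
```

The truncated sums agree with the closed forms to numerical precision, which also confirms that the bias-corrected ratio in terms of T − 1 and T − 2 recovers 𝜃i exactly for this idealized T.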
so that the first order bias of \(\hat {\theta }_{i}\) is \(\gamma (2 - 3\theta _{i}/2)\bar {F}(x_{n})\). On the other hand, since

\( \frac {2\{\mathbb {E}(T-1)\}^{2}}{\mathbb {E}\{(T-1)(T-2)\}} = \theta _{i}, \)

this motivates the estimator

\( \tilde {\theta }_{i} = \frac {2\left \{{\sum }_{j=1}^{N_{i}} (T_{i}^{(j)} - 1)\right \}^{2}}{N_{i}{\sum }_{j=1}^{N_{i}} (T_{i}^{(j)} - 1)(T_{i}^{(j)} - 2)}, \)   (29)
whose first order bias is zero. This estimator forms the key component of the intervals estimator of Ferro and Segers (2003), which we can use to estimate 𝜃i. We note that \(\tilde {\theta }_{i} \) may take values greater than 1 and is not defined if max(Ii) ≤ 2 as then the denominator in Eq. 29 is zero. In order to deal with these cases, the intervals estimator \(\theta _{i}^{*}\) of 𝜃i is defined as

\( \theta _{i}^{*} = \begin {cases} \min (1, \hat {\theta }_{i}), & \text {if } \max (I_{i}) \leq 2, \\ \min (1, \tilde {\theta }_{i}), & \text {if } \max (I_{i}) > 2. \end {cases} \)
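In code, the case distinction above reads as follows (a sketch; the function name is ours, and the input is one set Ii of interexceedance times):

```python
import numpy as np

def intervals_estimator(times):
    """Intervals estimator applied to one set I_i of interexceedance
    times: the biased moment ratio when no time exceeds 2 (where the
    bias-corrected denominator would vanish), and the bias-corrected
    ratio otherwise, both capped at 1."""
    T = np.asarray(times, dtype=float)
    N = T.size
    if N == 0:
        return np.nan                   # no data for this phase
    if T.max() <= 2:
        est = 2 * T.sum() ** 2 / (N * np.sum(T ** 2))
    else:
        est = 2 * np.sum(T - 1) ** 2 / (N * np.sum((T - 1) * (T - 2)))
    return min(1.0, est)
```

Applying this to each Ii and averaging the d resulting estimates gives the estimate of γ used in Section 4.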
While Eq. 26 also suggests an estimator for γ, this is based only on the interexceedances relevant to estimating 𝜃i and also requires an estimate of \(\bar {F}(u)\). One possibility is to obtain d such estimates and take the mean of these as the estimate of γ. However, this estimator need not respect the relation \(\gamma = d^{-1}{\sum }_{i=1}^{d} \theta _{i}\), a consequence of the fact that we dropped the o(1) terms when solving the first two moment equations. In the examples that we consider in Section 4, we estimate γ using the mean of the estimates for the 𝜃i values.
3.2 Maximum likelihood estimation
Theorem 4 also allows for the construction of the likelihood function for the vector of unknown parameters. This is an attractive approach due to the modelling possibilities that become available; however, as discussed in Ferro and Segers (2003) in the stationary case, problems arise with maximum likelihood estimation due to uncertainty in how to assign interexceedance times to the components of the limiting mixture distribution. Since the asymptotically valid likelihood is used as an approximation at some subasymptotic threshold u, all observed normalized interexceedance times are strictly positive. Assigning all interexceedance times to the exponential part of the limiting mixture means that they are all being classified as between cluster times. This is tantamount to exceedances of a large threshold occurring in isolation, and so the maximum likelihood estimator based on this, typically misspecified, likelihood converges in probability to 1 regardless of the true underlying value of 𝜃.
This problem was addressed in Süveges (2007) for sequences satisfying the D(2)(xn) condition, i.e., the case k = 2 in Eq. 20. For such sequences, in the limit as \(n\to \infty \), exceedances above xn cluster into independent groups of consecutive exceedances, so that all observed interexceedance times equal to one are assigned to the zero component of the mixture likelihood. On the other hand, all interexceedance times greater than one are assigned to the exponential component of the likelihood. It was found that, when the D(2)(xn) condition is satisfied, maximum likelihood estimation outperforms the intervals estimator in terms of lower root mean squared error. The consecutive exceedances model of clusters implied by D(2)(xn) is in contrast to the general situation where within clusters, exceedances may be separated by observations that fall below the threshold.
If we were to make the D(2)(xn) assumption in our non-stationary setting, so that the consecutive exceedances model for clusters is accurate, then with \(I_{i} = \{T_{i}^{(j)} \}_{j=1}^{N_{i}}\) the interexceedance times relevant for estimating 𝜃i as in Section 3.1, we obtain the likelihood function as
where \(I = \cup _{i=1}^{d}I_{i}\) is the set of all interexceedance times and
The full log-likelihood is then
where \(\gamma = d^{-1} {\sum }_{i=1}^{d}\theta _{i},\) \(n_{i} = {\sum }_{j=1}^{N_{i}}\mathbbm {1}[T^{(j)}_{i} > 1],\) and in practice \(\bar {F}(x_{n})\) must be replaced with an estimate. Unlike in the stationary case, the likelihood equations do not have a closed-form solution, essentially due to the dependence of γ on all the 𝜃i. Equation 30, however, is easily optimized numerically provided d is not too large. If d is large, it is more natural to parameterize 𝜃i in terms of a small number of parameters which we may estimate by maximum likelihood, or to consider non-parametric estimation along the lines of Einmahl et al. (2016).
We may generalise this idea and assign all interexceedance times less than or equal to some value k to the zero component of the likelihood, so that the corresponding expression for Li becomes
This may be justified by the assumption that the sequence satisfies the D(k+ 1)(xn) condition. Selection of an appropriate value of k is equivalent to the selection of the run length for the runs estimator, a problem considered in the stationary case by Süveges and Davison (2010) and Juan Cai (2019). However, in a non-stationary setting, where the clustering characteristics of the sequence may change in time, the appropriate value of k may also be time varying, so that k may be replaced with ki in Eq. 31. Although, as discussed in Section 3.1, we may take a constant value of k in the definition of D(k)(xn), for the purposes of estimating 𝜃i one wants to select, for each i, the smallest k = ki such that Eq. 20 is satisfied (Hsing 1993). If too small a value is selected for ki then some of the interexceedance times may be wrongly assigned to the exponential component of the likelihood, leading to an overestimate of 𝜃i, whereas if ki is selected to be too large then we tend to underestimate 𝜃i.
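Since Eqs. 30–31 are not reproduced above, the following sketch implements what we take to be the natural form of this likelihood: an interexceedance time at position i that is at most k contributes mass 1 − 𝜃i, while a longer time contributes an exponential term with rate \(\gamma \bar {F}(u)\), with \(\gamma = d^{-1}{\sum }_i \theta _i\) coupling the positions. This mixture form is an assumption modelled on the stationary likelihood of Süveges (2007), not a transcription of the paper's equations, and a crude coordinate-wise grid search stands in for a proper optimizer.

```python
import numpy as np

def neg_log_lik(theta, times_by_pos, fbar_u, k=1):
    """Negative log-likelihood under a consecutive-exceedances cluster
    model with run length k.  Times <= k are assigned to the point mass
    at zero (weight 1 - theta_i); times > k contribute an exponential
    term with rate gamma * fbar_u.  This form is our assumed analogue of
    the stationary Suveges (2007) likelihood, not the paper's Eq. 31."""
    theta = np.asarray(theta, dtype=float)
    gamma = theta.mean()                          # gamma = d^{-1} sum_i theta_i
    ll = 0.0
    for th, T in zip(theta, times_by_pos):
        T = np.asarray(T, dtype=float)
        small, big = T[T <= k], T[T > k]
        ll += len(small) * np.log1p(-th)          # within-cluster times
        ll += len(big) * (np.log(th) + np.log(gamma * fbar_u))
        ll -= gamma * fbar_u * np.sum(big - k)    # exponential part
    return -ll

def fit(times_by_pos, fbar_u, k=1, grid=None, sweeps=20):
    """Coordinate-wise grid search over each theta_i in (0, 1); crude,
    but avoids external optimizers and respects the coupling via gamma."""
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    d = len(times_by_pos)
    theta = np.full(d, 0.5)
    for _ in range(sweeps):
        for i in range(d):
            vals = []
            for g in grid:
                cand = theta.copy()
                cand[i] = g
                vals.append(neg_log_lik(cand, times_by_pos, fbar_u, k))
            theta[i] = grid[int(np.argmin(vals))]
    return theta
```

Because γ depends on every 𝜃i, the coordinate updates interact, which is exactly the feature that prevents a closed-form solution in the non-stationary case.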
4 Examples
In this section we consider two simple examples of non-stationary Markov sequences with a periodic dependence structure and common marginal distributions. The first example we consider is the Gaussian autoregressive model
where \(\epsilon _{n} \sim N(0, 1-{\rho _{n}^{2}})\), |ρn| < 1 and \(X_{1} \sim N(0,1)\). In our second example, we consider a model where (Xn,Xn+ 1) follow a bivariate logistic distribution with joint distribution function
αn ∈ (0,1] and \(X_{1} \sim \text {Fr\'{e}chet}(1)\) so that \(\mathbb {P}(X_{1} \leq x) = e^{-1/x}, x \geq 0.\) For the Gaussian model, no limiting extremal clustering occurs at any point in the sequence, so that 𝜃i = 1 for each i, in contrast to the logistic model where 𝜃i < 1 for each i.
For sufficiently well behaved stationary Markov sequences, mixing conditions much stronger than those considered in Section 2 hold. For example, for the stationary Gaussian autoregressive sequence, with ρn = ρ in Eq. 32 for all n ≥ 1, Theorems 1 and 2 from Athreya and Pantula (1986) give that the mixing conditions of Theorem 1 and Theorem 4 hold for any sequence qn such that \(q_{n} \to \infty \), qn = o(n), for any xn. Analogous results also hold for the non-stationary models that we consider in this section, see for example Bradley (2005) Theorem 3.3 and Davydov (1973) Theorem 4.
4.1 Gaussian autoregressive model
Stationary sequences \(\{X_{n}\}_{n=1}^{\infty }\), where each Xi is a standard Gaussian random variable, are extensively studied in Chapter 4 of Leadbetter et al. (1983). It is shown there that if the lag n autocorrelation ρ(n) satisfies ρ(n)log n → 0, then the extremal index 𝜃 of the sequence equals one and so no limiting extremal clustering occurs. Thus, the stationary autoregressive sequence with ρn = ρ in Eq. 32 for all n ≥ 1 has extremal index one, provided |ρ| < 1. This is a special case of the more general result that a stationary asymptotically independent Markov sequence has an extremal index of one (Smith 1992). We say that the stationary sequence \(\{X_{n}\}_{n=1}^{\infty }\) is asymptotically independent at lag k if χ(k) = 0 where
and asymptotically independent if χ(k) = 0 for all k (Ledford and Tawn 2003).
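At a finite threshold, χ(k) can be estimated empirically by the proportion of exceedances that are followed, k steps later, by another exceedance. The sketch below assumes χ(k) is the limit of \(\mathbb {P}(X_{n+k} > u \mid X_{n} > u)\) as the threshold u increases, consistent with the conditional-probability form used for the logistic model in Section 4.2; the function name is ours.

```python
import numpy as np

def chi_hat(x, k, u):
    """Finite-threshold empirical estimate of chi(k): the proportion of
    exceedances of u followed k steps later by another exceedance.  A
    proxy for the limit chi(k) = lim_u P(X_{n+k} > u | X_n > u) (our
    reading of the definition, which assumes common margins)."""
    x = np.asarray(x, dtype=float)
    lead = x[:-k] > u
    if not lead.any():
        return np.nan
    return float(np.mean(x[k:][lead] > u))

# for an i.i.d. sequence the estimate should be close to P(X > u),
# reflecting asymptotic independence at every lag
rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)
print(chi_hat(x, 1, 0.95))   # close to P(X > u) = 0.05 for independent data
```

For an asymptotically dependent sequence, by contrast, `chi_hat` stabilizes at a positive value as u grows.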
Here, we consider the non-stationary autoregressive model (32) and specify a periodic lag one correlation function \( \rho _{n+1} = 0.5 + 0.25 \sin \limits (2\pi n/7)\) for n ≥ 0. Applying Theorem 6.3.4 of Leadbetter et al. (1983), and comparing the non-stationary sequence to an independent standard Gaussian sequence, we deduce that \(\mathbb {P}(M_{n} \leq x_{n}) - {\Phi }(x_{n})^{n} \to 0\) as \(n \to \infty \) where Φ is the standard Gaussian distribution function, and thus conclude that γ = 1 and 𝜃i = 1 for i = 1,…,7. The same conclusion may also be drawn by applying Theorem 4.1 of Hüsler (1983), which shows that if xn satisfies (7) then \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau }.\)
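The non-stationary autoregressive sequence with this periodic correlation function is straightforward to simulate, as in the experiments below. The sketch relies only on the model equation (32) and the fact that each Xj is marginally N(0,1); the function name and indexing convention are our own.

```python
import numpy as np

def simulate_ar(n, rho, rng):
    """Simulate X_1, ..., X_n from Eq. 32: X_j = rho_j X_{j-1} + eps_j
    with eps_j ~ N(0, 1 - rho_j^2), so each X_j is marginally N(0, 1).
    `rho(j)` returns rho_j for j >= 2 (1-based, as in the paper)."""
    x = np.empty(n)
    x[0] = rng.standard_normal()               # X_1 ~ N(0, 1)
    for j in range(2, n + 1):
        r = rho(j)
        x[j - 1] = r * x[j - 2] + np.sqrt(1.0 - r * r) * rng.standard_normal()
    return x

# periodic lag-one correlation: rho_{n+1} = 0.5 + 0.25 sin(2 pi n / 7), n >= 0,
# so rho_j = 0.5 + 0.25 sin(2 pi (j - 1) / 7)
rho = lambda j: 0.5 + 0.25 * np.sin(2.0 * np.pi * (j - 1) / 7.0)

x = simulate_ar(10_000, rho, np.random.default_rng(7))
```

Since |ρj| ≤ 0.75 < 1 for every j, the innovation variance 1 − ρj² stays positive and the standard Gaussian margins are preserved exactly.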
We simulated 1000 realizations of this sequence of length 104 and, for each realization, estimated 𝜃1,…,𝜃7 and γ for a range of high thresholds, using both the intervals estimator and maximum likelihood with k in Eq. 31 equal to zero and one. We then repeated this procedure for sequences of length 105 and 106. We found that the maximum likelihood estimator with k = 0 gave by far the best performance as measured by root mean squared error in γ. In fact, in this case the 0.025 and 0.975 empirical quantiles of the estimated values of γ were both 1 to two decimal places in all simulations. This is not surprising since selecting k = 0 ensures that all interexceedance times have the correct asymptotic classification as between cluster times. However, in a real data example such a level of prior knowledge regarding asymptotic independence is not realistic and would render estimation redundant. Although maximum likelihood estimation with k = 1 performed slightly worse than the intervals estimator, both methods produced broadly similar results.
Table 1 shows the 0.025 and 0.975 empirical quantiles of the parameter estimates obtained using the intervals estimator. In the table, u = qp corresponds to the threshold that has probability p of being exceeded at each time point, i.e., \(\mathbb {P}(X_{i} > q_{p}) = p\). Although the true value of each 𝜃i is 1, so that no extremal clustering occurs in the limit as \(u\to \infty \), clustering may occur at subasymptotic levels. Moreover, there will tend to be more subasymptotic clustering in the sequence at points with a larger lag one autocorrelation, i.e., larger ρi. This point has been thoroughly discussed in the context of stationary sequences and estimation of the extremal index (Ancona-Navarrete and Tawn 2000; Eastoe and Tawn 2012) and leads to the notion of a subasymptotic or threshold based extremal index.
4.2 Bivariate logistic dependence
The stationary logistic model, that is, Eq. 33 with αn = α for all n ≥ 1, has been thoroughly studied (Smith et al. 1997; Ledford and Tawn 2003; Süveges 2007). The parameter α controls the strength of dependence between adjacent terms in the sequence, with α = 1 corresponding to independence and α → 0 giving complete dependence. Such a sequence exhibits asymptotic dependence provided α < 1, in particular, \(\lim _{u\to \infty } \mathbb {P}(X_{n+1} > u \mid X_{n} > u) = 2 - 2^{\alpha }.\) By exploiting the Markov structure of the sequence, precise calculation of 𝜃 can be achieved using the numerical methods described in Smith (1992), where it is found for example that the sequence with α = 1/2 has 𝜃 = 0.328, and moreover, Eq. 18 is shown to hold for all α ∈ (0,1]. The case of α = 1/2 is also considered in Süveges (2007) where, based on diagnostic plots, it is concluded that the D(2)(xn) condition is not satisfied for this sequence, and moreover, the maximum likelihood estimator for 𝜃 based on a run length of k = 1 has a bias of around 20%. Süveges and Davison (2010) find that a more suitable run length is k = 5, and in this case the maximum likelihood estimator for 𝜃 has lower root mean squared error than the intervals estimator. Smaller values of α will tend to be associated with larger values of the run length k, though the precise nature of this relation is unclear.
We consider the non-stationary logistic model (33) with \(\alpha _{n+1} = 0.5 + 0.25 \sin \limits (2\pi n /7) \) for n ≥ 0. Note that although we have specified the same parametric form for the dependence parameters α as in the previous example for ρ, the two parameters are not directly comparable. We simulated 1000 realizations of this process, of lengths 104 and 105, and estimated 𝜃1,…,𝜃7 using maximum likelihood with k = 5, at a range of different thresholds. Table 2 shows, for the different sample sizes and thresholds considered, the 0.025 and 0.975 empirical quantiles of the parameter estimates obtained from this simulation. Although the exact values of the parameters are unknown, making evaluation of any estimator’s performance impossible, an upper bound for 𝜃i is easily obtained as \( \lim _{u\to \infty } \mathbb {P}(X_{i+1} \leq u \mid X_{i} > u) = 2^{\alpha _{i}} - 1\). In our case this gives the bounds (𝜃1,…,𝜃7) ≤ (0.41,0.62,0.67,0.52,0.31,0.19,0.24) and γ ≤ 0.42 where the relation ≤ is interpreted componentwise. It is conceivable that the methods in Smith (1992) could be adapted to the non-stationary case to allow exact computation of 𝜃i though we do not pursue this direction here.
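The bounds quoted above can be reproduced directly from the stated form of αn; the snippet below computes 2^{αi} − 1 for i = 1,…,7 together with the implied bound on γ.

```python
import numpy as np

# alpha_{n+1} = 0.5 + 0.25 sin(2 pi n / 7) for n >= 0, as in Eq. 33
alpha = 0.5 + 0.25 * np.sin(2.0 * np.pi * np.arange(7) / 7.0)

# theta_i <= lim_u P(X_{i+1} <= u | X_i > u) = 2^{alpha_i} - 1
theta_bound = 2.0 ** alpha - 1.0

# componentwise bounds imply gamma = mean(theta_i) <= mean of the bounds
gamma_bound = theta_bound.mean()

print(np.round(theta_bound, 2))   # [0.41 0.62 0.67 0.52 0.31 0.19 0.24]
print(round(float(gamma_bound), 2))   # 0.42
```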
We also considered estimation of 𝜃i using the intervals estimator and obtained results similar to the maximum likelihood estimates. The median values of the 1000 estimates of each parameter under the different methods of estimation are shown in Fig. 1 for the sample size of 105 and threshold q0.05. The estimators clearly recover the periodicity in the dependence structure of the sequence and, on average, respect the upper bound for 𝜃i of \(2^{\alpha _{i}} - 1\).
5 Proofs
5.1 Auxiliary results
In this section we state and prove some lemmas that are required in the proof of Theorem 1.
Lemma 3
Let \(\{t_{n}\}_{n=1}^{\infty }\) be a sequence of positive integers and ai,n, \(i, n \in \mathbb {N},\) an array of non-negative real numbers such that \(t_{n}\to \infty \) and \(A_{n} = \max \limits _{1\leq i \leq t_{n}} a_{i,n} \to 0\) as \(n\to \infty \). Then, for \(\lambda \in [0,\infty )\), \({\prod }_{i=1}^{t_{n}}(1-a_{i,n}) \to e^{-\lambda }\)
if and only if \({\sum }_{i=1}^{t_{n}}a_{i,n} \to \lambda .\)
Proof
Using the fact that \(\log (1-x) = -x + R(x)\) where |R(x)| < Cx2, for sufficiently small x > 0, for some C > 0, we have \({\sum }_{i=1}^{t_{n}}\log (1-a_{i,n}) = -{\sum }_{i=1}^{t_{n}}a_{i,n} + {\sum }_{i=1}^{t_{n}}R(a_{i,n}) \) and
so that \({\sum }_{i=1}^{t_{n}} \log (1-a_{i,n}) = -\big ({\sum }_{i=1}^{t_{n}}a_{i,n} \big )\big (1 + o(1)\big )\) or equivalently
from which the result follows. □
Lemma 4
Let \(\{t_{n}\}_{n=1}^{\infty }\) be a sequence of positive integers and ai,n, \(i, n \in \mathbb {N},\) an array of non-negative real numbers such that \(t_{n}\to \infty \), \(A_{n} = \max \limits _{1\leq i \leq t_{n}} a_{i,n} \to 0\) and \({\sum }_{i=1}^{t_{n}}a_{i,n}\) is bounded above as \(n\to \infty \). Then, \({\prod }_{i=1}^{t_{n}}(1-a_{i,n}) - \exp \big (-{\sum }_{i=1}^{t_{n}}a_{i,n}\big ) \to 0.\)
Proof
This follows from Lemma 3 by considering subsequences along which \({\sum }_{i=1}^{t_{n}} a_{i,n}\) converges. □
Lemma 5
Let \(g:\mathbb {R} \to \mathbb {R}\) be a bounded function. If f(x) = A(x)g(x) and \(\lim _{x\to \infty }A(x) = 1,\) then f(x) = g(x) + o(1) as \(x\to \infty \).
Proof
As g is bounded, ∃M > 0 such that |g(x)| < M, \(\forall x \in \mathbb {R}\). Now let 𝜖 > 0. As \(\lim _{x\to \infty }A(x) = 1,\) ∃x0 such that \(|A(x) - 1| < \epsilon /M\) for all x > x0. Then for x > x0, \(|f(x) - g(x)| = |A(x) - 1||g(x)| < (\epsilon /M)M = \epsilon .\)
□
Lemma 6
Let \(\{X_{n}\}_{n=1}^{\infty }\), xn, pn and qn be as in Lemma 2 such that Eq. 7 holds and assume \(\mathbb {P}(M_{n} \leq x_{n}) \to L \in (0,1)\). Let sn be such that pn = o(sn),sn = o(n) and tn = ⌊n/(sn + qn)⌋. Then \({\sum }_{i=1}^{t_{n}}\mathbb {P}(M^{i}_{0,s_{n}-p_{n}} > x_{n}) \to -\log (L)\)
where \(M^{i}_{j,k} = {\max \limits } \{X^{i}_{j+1},\ldots , {X^{i}_{k}} \}\) and \({X^{i}_{j}} = X_{(i-1) (s_{n}+q_{n})+j}\).
Proof
We first note that Lemma 2 also holds with blocks of length sn, i.e., with sn in place of pn in the definition of Ai in Eq. 8 and tn in place of rn. Thus from Eq. 10, with blocks of length sn, we have that \( \mathbb {P}(M_{n} \leq x_{n}) - {\prod }_{i=1}^{t_{n}}\mathbb {P}(M(A_{i}) \leq x_{n}) \to 0\) so that \({\prod }_{i=1}^{t_{n}}\mathbb {P}(M(A_{i}) \leq x_{n}) \to L \in (0,1),\) or equivalently
Now we note that \(\max \limits _{1\leq i \leq t_{n}}\mathbb {P}(M(A_{i}) > x_{n}) \to 0\) since \(\mathbb {P}(M(A_{i}) > x_{n}) \leq s_{n}\bar {F}(x_{n})\) and sn = o(n) and \(\bar {F}(x_{n}) = \tau n^{-1}+o(n^{-1})\). Thus, using \(\log (1-t) = -t + o(t)\) as t → 0, Eq. 40 may be written
Now it is easily seen that the second sum in Eq. 41 converges to zero since
and so Eq. 41 implies \({\sum }_{i=1}^{t_{n}}\mathbb {P}(M(A_{i}) > x_{n}) \to -\log (L).\) Now, decomposing the event {M(Ai) > xn} as a disjoint union we get
Again, the second sum in Eq. 42 goes to zero since it is bounded above by \(t_{n} p_{n}\bar {F}(x_{n}) \leq \{p_{n}/(s_{n}+q_{n})\}n\bar {F}(x_{n}) \to 0,\) from which Eq. 39 follows. □
5.2 Proof of Lemma 1
We use induction on the number of subintervals, k. The case k = 2 is just the fact that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn). Assuming that the result holds for any k such subintervals, we verify that it also holds for the k + 1 intervals I1,I2,…,Ik,Ik+ 1. Let \(I_{1}^{*}\) be the interval separating I1 and I2 and let \(J= I_{1} \cup I_{1}^{*} \cup I_{2}\). Then J is an interval with |J| > qn, and since \(\{M(J \cup \cup _{i=3}^{k+1}I_{i}) \leq x_{n} \} \subseteq \{M(\cup _{i=1}^{k+1}I_{i})\leq x_{n}\}\) we have,
so we may write \(\mathbb {P}(M(\cup _{i=1}^{k+1}I_{i}) \leq x_{n}) = \mathbb {P}(M\big (J \cup \cup _{i=3}^{k+1}I_{i}\big ) \leq x_{n}) + R_{1,n}\) where the remainder R1,n satisfies \(R_{1,n} \leq q_{n}\bar {F}(x_{n})\). We then have
where \(|R_{2,n}| \leq (k-1)\alpha _{n} + 2(k-2)q_{n}\bar {F}(x_{n})\) and we have used our induction hypothesis to get (46) since \(J \cup \cup _{i=3}^{k+1}I_{i}\) is a union of k intervals with adjacent intervals separated by qn. Now note that since \(\{M(J)\leq x_{n}\} \subseteq \{M(I_{1}\cup I_{2})\leq x_{n}\}\) we have \(0 \leq \mathbb {P}(M(I_{1}\cup I_{2})\leq x_{n}) - \mathbb {P}(M(J)\leq x_{n}) \leq q_{n}\bar {F}(x_{n})\) and we may write \(\mathbb {P}(M(J)\leq x_{n}) = \mathbb {P}(M(I_{1}\cup I_{2})\leq x_{n}) + R_{3,n}\) where \(|R_{3,n}| \leq q_{n}\bar {F}(x_{n})\). Thus from Eqs. 44 and 47 we have
as required.
5.3 Proof of Lemma 2
As \(\{M_{n} \leq x_{n}\} \subseteq \cap _{i=1}^{r_{n}}\{M(A_{i})\leq x_{n}\}\) we have
Also, by Lemma 1 we have
so that the triangle inequality and Eqs. 48 and 49 give the result.
5.4 Proof of Theorem 1
In addition to the notation defined in Section 2.1, we also define
Now, for i = 1,…,rn, we have
and so
Now we note that
since the difference between the two sums is
so that Eq. 51 gives
Now we prove the reverse inequality of Eq. 53, i.e.,
We will show that Eq. 54 holds under the assumption that \(\mathbb {P}(M_{n} \leq x_{n})\) converges to some L in [0,1], with the more general case following by considering subsequences along which \(\mathbb {P}(M_{n} \leq x_{n})\) converges and repeating the following argument. By Lemma 2, specifically (10), and Lemma 4 with \(a_{i,n} = \mathbb {P}(M(A_{i}) > x_{n})\) we see that L > 0, and since (54) trivially holds when L = 1 we may assume L ∈ (0,1). Following O’Brien (1987), we introduce a new sequence sn = o(n) which plays the role of pn such that pn = o(sn), and let tn = ⌊n/(sn + qn)⌋ which now plays the role of rn; note that tn = o(rn) and that the definitions in Eqs. 50 and 8 are modified by replacing pn with sn. Then for i = 1,…,tn, we have
and consequently, since Lemma 2 holds with sn in place of pn and tn in place of rn,
Now, with \(a_{i,n} = \mathbb {P}(M^{i}_{0,s_{n}-p_{n}}>x_{n}, M^{i}_{s_{n}-p_{n},s_{n}}\leq x_{n}) + \mathbb {P}(M^{i}_{s_{n}-p_{n}, s_{n}} > x_{n}) \) we have, for all i, \(a_{i,n} \leq s_{n}\bar {F}(x_{n})\) and so \(\max \limits _{1\leq i \leq t_{n}}a_{i,n} \leq s_{n}\bar {F}(x_{n}) \to 0\) as sn = o(n) and \(\bar {F}(x_{n}) = \tau n^{-1} + o(n^{-1})\). Also, \({\sum }_{i=1}^{t_{n}}a_{i,n} \leq t_{n}\max \limits _{1\leq i \leq t_{n}}a_{i,n} \leq s_{n}(s_{n}+q_{n})^{-1}n\bar {F}(x_{n})\) which is bounded above as \(n\to \infty \). Thus we may apply Lemma 4 to Eq. 55 to get
with Eq. 56 following from Lemma 6. Now Lemma 5 reduces (56) to
with Eq. 58 following from Eq. 57 by the inclusions \(\{ M^{i}_{0, s_{n}-p_{n}} > x_{n}, M^{i}_{s_{n}-p_{n},s_{n}} \leq x_{n}\} \subseteq \bigcup _{j=1}^{s_{n}-p_{n}}\{{X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n} \} \subseteq \bigcup _{j=1}^{s_{n}}\{{X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n} \}\) and the union bound. A similar argument that was used to show (52) gives
so that Eq. 58 becomes
and so Eqs. 53 and 60 together prove (11). Also, since
with \(\gamma _{n} = n^{-1}{\sum }_{j=1}^{n}\mathbb {P}(M_{j,j+p_{n}} \leq x_{n} \mid X_{j} > x_{n})\), this also gives (12).
5.5 Proof of Theorem 2
Throughout we let \(\theta _{i,n} = \mathbb {P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i} > x_{n})\). From Corollary 1 we know that \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau \gamma ^{\prime }}\) with \(\gamma ^{\prime } = \lim _{n\to \infty }n^{-1}{\sum }_{i=1}^{n}\theta _{i,n}\), which is easily seen to equal \(\lim _{n\to \infty }n^{-1}{\sum }_{i=1}^{n}\theta _{i}\) since
This establishes the first part of the Theorem.
To show that \(\mathbb {P}(M_{n} \leq y_{n}) \to e^{-\tau ^{\prime }\gamma }\), we define \(n^{\prime } = \lfloor { (\tau ^{\prime }/\tau )n}\rfloor \) so that \(n^{\prime }\bar {F}(x_{n}) \to \tau ^{\prime }\) or equivalently \(F(x_{n})^{n^{\prime }} \to e^{-\tau ^{\prime }}\). Then,
Since \(n^{\prime } \leq n\) we have by Theorem 1, \(\mathbb {P}(M_{n^{\prime }} \leq x_{n}) = F(x_{n})^{n^{\prime }\gamma _{n}^{\prime }}\) where \(\gamma _{n}^{\prime } = (n^{\prime })^{-1}{\sum }_{i=1}^{n^{\prime }} \theta _{i,n}\) and \(\gamma _{n}^{\prime } \) also has limiting value \(\lim _{n\to \infty }n^{-1}{\sum }_{i=1}^{n}\theta _{i} \) since \(| (n^{\prime })^{-1}{\sum }_{i=1}^{n^{\prime }} \theta _{i,n} - (n^{\prime })^{-1}{\sum }_{i=1}^{n^{\prime }} \theta _{i} | \to 0\) as in Eq. 61. Then, since \(F(x_{n})^{n^{\prime }} \to e^{-\tau ^{\prime }}\), we have \(\mathbb {P}(M_{n^{\prime }} \leq x_{n}) \to e^{-\tau ^{\prime }\gamma }\) and so from Eq. 62, \(\mathbb {P}(M_{n} \leq y_{n}) \to e^{-\tau ^{\prime }\gamma }\) also with γ as in Eq. 17.
5.6 Proof of Theorem 3
The first step in the proof is to show that we have, for each integer k ≥ 1,
where \(n^{\prime } = \lfloor {n/k}\rfloor \). To do this we define intervals Ii and \(I_{i}^{*}\), 1 ≤ i ≤ k, of alternating large and small lengths as follows,
We show that
and
from which Eq. 63 follows by the triangle inequality.
Equation 65 follows from \(\{M_{n} \leq x_{n}\} \subseteq \{M(\cup _{i=1}^{k}I_{i}) \leq x_{n}\}\) and so \(\mathbb {P}(M(\cup _{i=1}^{k}I_{i}) \leq x_{n}) - \mathbb {P}(M_{n} \leq x_{n}) \leq \mathbb {P}(\cup _{i=1}^{k}\{M(I_{i}^{*}) > x_{n}\}) \leq kq_{n}\bar {F}(x_{n}) \to 0\) since qn = o(n) and \(\bar {F}(x_{n}) = \tau /n + o(n^{-1}).\)
Equation 66 follows immediately from Lemma 1 as Ij,1 ≤ j ≤ k, are distinct subintervals of {1,…,n}, and Ii and Ii+ 1 are separated by qn.
Equation 67 follows from \(\{M(I_{i} \cup I_{i}^{*}) \leq x_{n} \} \subseteq \{M(I_{i}) \leq x_{n}\}\) and \(0\leq \mathbb {P}(M(I_{i}) \leq x_{n}) - \mathbb {P}(M(I_{i} \cup I_{i}^{*}) \leq x_{n}) \leq q_{n}\bar {F}(x_{n}) \to 0\) so that \( \mathbb {P}(M(I_{i}) \leq x_{n}) = \mathbb {P}(M(I_{i} \cup I_{i}^{*}) \leq x_{n}) + o(1)\) for 1 ≤ i ≤ k.
For Eq. 68, we first note that \(\mathbb {P}(M(I_{1}\cup I_{1}^{*})\leq x_{n}) = \mathbb {P}(M_{n^{\prime }} \leq x_{n}).\) Since \(I_{i}\cup I_{i}^{*}\) is an interval of length \(n^{\prime }\), 1 ≤ i ≤ k, we have by periodicity that \(M(I_{i}\cup I_{i}^{*}) \overset {D}{=} M_{r,r+n^{\prime }}\) for some r ∈{0,1,…,d − 1}. Then for 2 ≤ i ≤ k, we have
Hence \({\prod }_{i=1}^{k}\mathbb {P}(M(I_{i}\cup I_{i}^{*})\leq x_{n}) = {\prod }_{i=1}^{k}\mathbb {P}(M(I_{1}\cup I_{1}^{*})\leq x_{n}) + o(1) = \mathbb {P}^{k}(M_{n^{\prime }} \leq x_{n}) + o(1),\) which establishes (63).
Now we note that since \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) with \(n\bar {F}(x_{n}) \to \tau \), it also satisfies AIM(yn) whenever \(n\bar {F}(y_{n}) \to \tau ^{\prime } \leq \tau \). This follows in exactly the same way as for the D(xn) condition, see, e.g., Lemma 3.6.2 in Leadbetter et al. (1983). This fact together with Eq. 63 allows the proof to proceed in exactly the same manner as the proof of Theorem 3.7.1 in Leadbetter et al. (1983).
5.7 Proof of Theorem 4
For \(n, q \in \mathbb {N}\) and \(u\in \mathbb {R}\), we define the mixing coefficients αn,q(u) by
where the maximum is taken over intervals I1 and I2 that are separated by q, with \(\min \limits \{|I_{1}|, |I_{2}|\} \geq q\), \(\min \limits \{\min \limits (I_{1}), \min \limits (I_{2})\} \geq 1\) and \(\max \limits \{\max \limits (I_{1}), \max \limits (I_{2})\} \leq n\).
We first note that since both qn = o(n) and \(0 \leq \alpha _{n} = \alpha _{n,q_{n}}(x_{n}) \leq \alpha ^{*}_{n,q_{n}}(x_{n}) \to 0\), we can find a sequence of positive integers pn = o(n) such that nαn = o(pn) and qn = o(pn) so that the conditions of Theorem 1 are satisfied.
Let t > 0 and write \(k_{n} = \lfloor {t/\bar {F}(x_{n})}\rfloor \sim tn/\tau \) so that for sufficiently large n, kn > pn + qn. Now, fix \(i\in \mathbb {N}\). For sufficiently large n we have
so that
In a similar way, since \(\{ M_{i+k_{n}} \leq x_{n} \} \subseteq \{ M_{i+p_{n}+q_{n}, i + k_{n}} \leq x_{n} \},\) we have
so that \(\mathbb {P}(M_{i+p_{n}+q_{n},i+k_{n}} \leq x_{n}) = \mathbb {P}(M_{i+k_{n}} \leq x_{n}) + o(1).\) Now we can derive the limiting distribution of \(\bar {F}(x_{n})T_{i}(x_{n})\). We have
Now we focus on the term \(\mathbb {P}(M_{i+k_{n}} \leq x_{n})\) appearing in Eq. 69. Since \(\mathbb {P}(M_{k_{n}} \leq x_{n}) - \mathbb {P}(M_{i+k_{n}} \leq x_{n}) \leq i\bar {F}(x_{n})\) we have \(\mathbb {P}(M_{i+k_{n}} \leq x_{n}) = \mathbb {P}(M_{k_{n}} \leq x_{n})+ o(1).\) Since kn = O(n) we have \(\alpha ^{*}_{k_{n},q_{n}}(x_{n}) \to 0\) and so \(\alpha _{k_{n},q_{n}}(x_{n}) \to 0\) also. Thus we may find a sequence \(p_{n}^{\prime } = o(n)\) such that \(k_{n}\alpha _{k_{n},q_{n}} = o(p_{n}^{\prime })\) and \(q_{n} = o(p_{n}^{\prime })\), e.g., we may take \(p_{n}^{\prime } = \lfloor {(k_{n} \max \limits \{q_{n}, k_{n}\alpha _{k_{n},q_{n}}(x_{n})\})^{1/2}}\rfloor .\) Then applying Theorem 1 to the first kn terms we get \(\mathbb {P}(M_{k_{n}} \leq x_{n}) = F(x_{n})^{k_{n}\gamma _{n}^{\prime }}\) where \(\gamma _{n}^{\prime } = k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n}^{\prime }\) with \(\theta _{j,n}^{\prime } = \mathbb {P}(M_{j,j+p_{n}^{\prime }} \leq x_{n} \mid X_{j} > x_{n})\).
We now verify that \(\gamma _{n}^{\prime }\) has limiting value \(\gamma = \lim _{n\to \infty }n^{-1}{\sum }_{j=1}^{n}\theta _{j}\). Define sequences an, bn and \(k_{n}^{\prime }\) by \(a_{n} = \max \limits \{p_{n}, p_{n}^{\prime }\}, b_{n} = \min \limits \{p_{n}, p_{n}^{\prime }\}\) and \(k_{n}^{\prime } = k_{n} + a_{n}\) and note that \(k_{n}^{\prime } = O(n)\). Then with the usual notation \(\theta _{j,n} = \mathbb {P}(M_{j,j+p_{n}} \leq x_{n} \mid X_{j} > x_{n})\), we have for all 1 ≤ j ≤ kn that, for sufficiently large n, \(|\theta _{j,n}^{\prime } - \theta _{j,n}| \leq \mathbb {P}(M_{j+b_{n}, j+a_{n}} > x_{n} \mid X_{j} > x_{n}) \leq |p_{n} - p_{n}^{\prime }| \bar {F}(x_{n}) + \alpha ^{*}_{k_{n}^{\prime },q_{n}}(x_{n})\) where we have used the fact that bn > qn for sufficiently large n and \(\alpha ^{*}_{n,q}(u)\) is non-decreasing in n for fixed q and u. Therefore, \(|k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n} - k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n}^{\prime }| \leq |p_{n} - p_{n}^{\prime }| \bar {F}(x_{n}) + \alpha ^{*}_{k_{n}^{\prime },q_{n}}(x_{n})\to 0\) and so \(k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n} \) and \(k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n}^{\prime }\) have the same limit which equals γ since \(|k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n} - k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j}| \leq \max \limits _{1\leq j\leq k_{n}}|\theta _{j} - \theta _{j,n}| \to 0.\)
Finally, we have \(\mathbb {P}(M_{i+k_{n}} \leq x_{n}) = \mathbb {P}(M_{k_{n}} \leq x_{n})+ o(1) = F(x_{n})^{k_{n}(\gamma + o(1))} = [e^{-t} + o(1)]^{\gamma + o(1)}\) since \(n\bar {F}(x_{n})\to \tau \) implies that \(k_{n}\bar {F}(x_{n})\to t\) which in turn implies that \(F(x_{n})^{k_{n}}\to e^{-t}\). Substituting \(\mathbb {P}(M_{i+k_{n}} \leq x_{n}) = e^{-\gamma t} + o(1)\) in Eq. 69 then gives the result.
References
Ancona-Navarrete, M.A., Tawn, J.A.: A comparison of methods for estimating the extremal index. Extremes 3, 5–38 (2000)
Athreya, K.B., Pantula, S.G.: Mixing properties of Harris chains and autoregressive processes. J. Appl. Probab. 23, 880–892 (1986)
Beirlant, J., Goegebeur, Y., Segers, J., Teugels, J.: Statistics of extremes: theory and applications. Wiley (2004)
Berman, S.M.: Limit theorems for the maximum term in stationary sequences. Ann. Math. Stat. 35, 502–516 (1964). https://doi.org/10.1214/aoms/1177703551
Bradley, R.C.: Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv. 2, 107–144 (2005)
Chernick, M., Hsing, T., McCormick, W.: Calculating the extremal index for a class of stationary sequences. Adv. Appl. Probab. 23, 835–850 (1991)
Coles, S.G., Tawn, J.A., Smith, R.L.: A seasonal Markov model for extremely low temperatures. Environmetrics 5, 221–239 (1994)
Davydov, Y.A.: Mixing conditions for Markov chains. Theory Probab. Appl. 18, 312–328 (1973)
Eastoe, E.F., Tawn, J.A.: Modelling the distribution of the cluster maxima of exceedances of sub-asymptotic thresholds. Biometrika 99, 43–55 (2012)
Einmahl, J.H.J., de Haan, L., Zhou, C.: Statistics of heteroscedastic extremes. J. R. Stat. Soc. B 78, 31–51 (2016)
Ferro, C.A.T., Segers, J.: Inference for clusters of extreme values. J. R. Stat. Soc. B 65, 545–556 (2003)
de Haan, L., Ferreira, A.: Extreme value theory: an introduction. Springer Series in Operations Research and Financial Engineering. Springer (2006)
Hsing, T.: On the characterization of certain point processes. Stoch. Process. Appl. 26, 297–316 (1987)
Hsing, T.: Estimating the parameters of rare events. Stoch. Process. Appl. 37, 117–139 (1991)
Hsing, T.: Extremal index estimation for a weakly dependent stationary sequence. Ann. Stat. 21, 2043–2071 (1993). https://doi.org/10.1214/aos/1176349409
Hüsler, J.: Asymptotic approximation of crossing probabilities of random sequences. Z. Wahrsch. Verw. Gebiete 63, 257–270 (1983)
Hüsler, J.: Extreme values of nonstationary random sequences. J. Appl. Probab. 23, 937–950 (1986)
Juan Cai, J.: A nonparametric estimator of the extremal index. arXiv:1911.06674 (2019)
Leadbetter, M.R.: On extreme values in stationary sequences. Z. Wahrsch. Verw. Gebiete 28, 289–303 (1974)
Leadbetter, M.R.: Extremes and local dependence in stationary sequences. Z. Wahrsch. Verw. Gebiete 65, 291–306 (1983)
Leadbetter, M.R., Lindgren, G., Rootzén, H.: Extremes and related properties of random sequences and series. Springer, New York (1983)
Ledford, A.W., Tawn, J.A.: Diagnostics for dependence within time series extremes. J. R. Stat. Soc., B 65, 521–543 (2003)
Loynes, R.M.: Extreme values in uniformly mixing stationary stochastic processes. Ann. Math. Stat. 36, 993–999 (1965)
O’Brien, G.L.: Extreme values for stationary and Markov sequences. Ann. Probab. 15, 281–291 (1987)
Smith, R.L.: The extremal index for a Markov chain. J. Appl. Probab. 29, 37–45 (1992)
Smith, R.L., Weissman, I.: Estimating the extremal index. J. R. Stat. Soc. B 56, 515–528 (1994)
Smith, R.L., Tawn, J.A., Coles, S.G.: Markov chain models for threshold exceedances. Biometrika 84, 249–268 (1997)
Süveges, M.: Likelihood estimation of the extremal index. Extremes 10, 41–55 (2007)
Süveges, M., Davison, A.C.: Model misspecification in peaks over threshold analysis. Ann. Appl. Stat. 4, 203–221 (2010)
Watson, G.S.: Extreme values in samples from m-dependent stationary stochastic processes. Ann. Math. Stat. 25, 798–800 (1954)
Acknowledgements
The authors would like to express their gratitude to an anonymous reviewer and the Editor whose helpful comments have greatly improved this paper.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Data availability
The datasets generated and analysed during the current study are available from the corresponding author upon request.
Auld, G., Papastathopoulos, I. Extremal clustering in non-stationary random sequences. Extremes 24, 725–752 (2021). https://doi.org/10.1007/s10687-021-00418-2
Keywords
- Clustering of extremes
- Extremal index
- Interexceedance times
- Intervals estimator
- Non-stationary sequences
- Periodic processes