1 Introduction

Extreme value theory for strictly stationary sequences has been extensively studied, initiated in the works of Watson (1954), Berman (1964), and Loynes (1965), and continued by Leadbetter (1974, 1983) and O’Brien (1987) amongst others. One of the key findings in this line of research is that unlike in independent and identically distributed sequences where extreme values tend to occur in isolation, stationary sequences possess an intrinsic potential for clustering of extremes, i.e., several successive or close extreme values may be observed. Understanding the extremal clustering characteristics of a stochastic process is critical in many applications where a cluster of extreme values may have serious consequences. For example, if a sequence consists of daily temperatures at some fixed location then a cluster of extremes may correspond to a heatwave.

The extent to which extremal clustering may occur is naturally measured, for strictly stationary sequences, by a parameter known as the extremal index. Let \(\{X_{n}\}_{n=1}^{\infty }\) be a sequence of random variables with common marginal distribution function F, and let \(\bar {F}=1-F\) and \(M_{n} = \max \limits \{X_{1},\ldots ,X_{n}\}\). Also, let \(\{x_{n}\}_{n=1}^{\infty }\) be a sequence of real numbers that we may informally think of as thresholds or levels. In the special case that Xi and Xj are independent for all i ≠ j, a necessary and sufficient condition for \(\mathbb {P}(M_{n} \leq x_{n})\) to converge to a limit in (0,1) as \(n\to \infty \) is that \(n\bar {F}(x_{n}) \to \tau > 0,\) in which case \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau }\) (Leadbetter et al. 1983, Theorem 1.5.1). More generally, if \(\{X_{n}\}_{n=1}^{\infty }\) is a strictly stationary sequence, then \(n\bar {F}(x_{n}) \to \tau \) is not sufficient to ensure the convergence of \(\mathbb {P}(M_{n} \leq x_{n})\). However, in most cases of practical interest, provided that a suitable long range dependence restriction is satisfied, such as condition D of Leadbetter (1974), one has \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\theta \tau }\) where 𝜃 ∈ [0,1] is the extremal index. Leadbetter (1983) showed that exceedances of the level xn occur in clusters with the limiting mean cluster size being equal to \(\theta ^{-1}\), and Hsing (1987) showed that distinct clusters may be considered independent in the limit.

Another characterization of 𝜃 that links it to the extremal clustering properties of a strictly stationary sequence can be found in O’Brien (1987). Defining \(M_{j,k} = \max \limits \{X_{i} : j+1\leq i \leq k \}\), it was shown that the distribution function of Mn satisfies

$$ \mathbb{P}(M_{n} \leq x_{n}) - F(x_{n})^{n\theta_{n}} \to 0, \quad \text{as } n\to \infty, $$
(1)

where

$$ \theta_{n} = \mathbb{P}(M_{1,p_{n}} \leq x_{n} \mid X_{1} > x_{n}), $$
(2)

for some pn = o(n), and, provided the limit exists, \(\theta _{n} \to \theta \) as \(n \to \infty \). This result illustrates that smaller values of 𝜃 are indicative of a larger degree of extremal clustering, since the conditional probability in Eq. 2 is small when an exceedance of a large threshold is likely to soon be followed by another exceedance.

Early attempts at estimating 𝜃 were based on associating \(\theta ^{-1}\) with the limiting mean cluster size. Different methods for identifying clusters gave rise to different estimators, well known examples being the runs and blocks estimators (Smith and Weissman 1994). For the runs estimator, a cluster is initiated when a large threshold is exceeded and ends once a fixed number of consecutive non-exceedances, known as the run length, occurs. The extremal index is then estimated by the ratio of the number of identified clusters to the total number of exceedances. A difficulty that arises when using this estimator is its sensitivity to the choice of run length (Hsing 1991).
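To make the procedure concrete, the following is a minimal sketch of the runs estimator; the function name, the illustrative AR(1) series, and the choices of threshold and run length are ours rather than taken from any of the cited works.

```python
import numpy as np

def runs_estimator(x, u, r):
    """Runs estimator of the extremal index: a cluster ends once more
    than r - 1 consecutive non-exceedances separate two exceedances of
    u; theta is estimated by (number of clusters) / (number of exceedances)."""
    exceedances = np.flatnonzero(x > u)        # indices with x > u
    if exceedances.size == 0:
        return np.nan
    gaps = np.diff(exceedances)                # interexceedance times
    n_clusters = 1 + np.sum(gaps > r)          # a gap > r starts a new cluster
    return n_clusters / exceedances.size

# Illustrative use on a simulated AR(1) series (threshold and run
# length are arbitrary choices for the example).
rng = np.random.default_rng(1)
x = np.zeros(100_000)
for t in range(1, x.size):
    x[t] = 0.7 * x[t - 1] + rng.normal(scale=np.sqrt(1 - 0.7 ** 2))
print(runs_estimator(x, u=np.quantile(x, 0.95), r=5))
```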

The problem of cluster identification was studied by Ferro and Segers (2003), who considered the distribution of the time between two exceedances of a large threshold. They found that appropriately normalized interexceedance times converge in distribution to a limit that is indexed by 𝜃. In particular, for a given threshold \(u \in \mathbb {R}\), they define the random variable \(T(u) = \min \limits \{n \geq 1 : X_{n+1} > u \mid X_{1} > u \}\), and found that as \(n\to \infty ,\) \(\bar {F}(x_{n})T(x_{n})\) converges in distribution to a mixture of a point mass at zero and an exponential distribution with mean \(\theta ^{-1}\). Thus, by computing theoretical moments of this limiting distribution and comparing them with their empirical counterparts, they construct their so-called intervals estimator.

Motivated by the fact that many real world processes are non-stationary, in this paper we investigate the effect of non-stationarity on extremal clustering. Previous statistical works that consider extremal clustering in non-stationary sequences include Süveges (2007), who used the likelihood function introduced by Ferro and Segers (2003) for the extremal index together with smoothing methods to capture non-stationarity in a time series of temperature measurements. In a similar application, Coles et al. (1994) used a Markov model together with simulation techniques to estimate the extremal index within different months.

An early work that developed extreme value theory for non-stationary sequences with a common marginal distribution is Hüsler (1983), which focused on the asymptotic distribution of the sample maxima but did not consider extremal clustering. Hüsler (1986) considered the more general case where the margins may differ and also discussed the difficulty of defining the extremal index for general non-stationary sequences.

Here, we consider a sequence of random variables \(\{X_{n}\}_{n=1}^{\infty }\) with common marginal distribution function F, but do not assume stationarity in either the weak or strict sense. As we assume common margins, non-stationarity may arise through changes in the dependence structure. We show, under assumptions similar to O’Brien (1987), that

$$ \mathbb{P}(M_{n} \leq x_{n}) - F(x_{n})^{n\gamma_{n}} \to 0, \quad \text{as } n\to \infty, $$
(3)

where

$$ \gamma_{n} = \frac{1}{n} \sum\limits_{j=1}^{n} \mathbb{P}(M_{j,j+p_{n}} \leq x_{n} \mid X_{j}> x_{n}). $$
(4)

Thus, we find that the limiting distribution of the sample maximum at large thresholds is characterized by a parameter \(\gamma = \lim _{n \to \infty } \gamma _{n}\), provided the limit exists, which by analogy with Eq. 2, may be regarded as the average of local extremal indices. In this paper we develop methods for estimating these local extremal indices by adapting the methods of Ferro and Segers (2003) for the extremal index to our non-stationary setting. In the special case that the sequence is stationary, so that all terms in the summation (4) are equal, the formula for γn reduces to 𝜃n in Eq. 2.

The structure of the paper is as follows. Section 2 defines the notation and assumed mixing condition used throughout the paper and states the main theoretical results regarding the asymptotic distribution of the sample maxima and normalized interexceedance times. Section 3 discusses approaches to parameter estimation using the result from Section 2 on the distribution of the interexceedance times. Section 4 considers the estimation problem for two simple non-stationary Markov sequences with periodic dependence structures and Section 5 gives the proofs of the main theoretical results.

2 Theoretical results

2.1 Notation, definitions and preliminary results

Throughout the paper, when not explicitly stated otherwise, all limits should be interpreted as “as \(n\to \infty \)”. We assume that all random variables in the sequence \(\{X_{n}\}_{n=1}^{\infty }\) have common marginal distribution F with upper endpoint \(x_{F} = \sup \{x\in \mathbb {R} : F(x) < 1 \}\), though we do not assume stationarity. In addition to the definitions for Mn and Mj,k given in Section 1, we define \(M(A) = \max \limits \{ X_{i} : i\in A \}\) where A is an arbitrary set of positive integers, and write |A| for the number of elements in A. We also refer to a set of consecutive integers as an interval. If I1 and I2 are two intervals, we say that I1 and I2 are separated by q if min(I2) - max(I1) = q + 1 or min(I1) - max(I2) = q + 1, i.e., there are q intermediate values between I1 and I2. The set {1,2,3,…} is denoted by \(\mathbb {N}\). Equality in distribution of two random variables X and Y is denoted by \( X \overset {D}{=} Y.\)

We assume that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) satisfies the asymptotic independence of maxima (AIM) mixing condition of O’Brien (1987) which restricts long range dependence.

Definition 1

The sequence \(\{X_{n}\}_{n=1}^{\infty }\) is said to satisfy the asymptotic independence of maxima condition relative to the sequence xn of real numbers, abbreviated to “\(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn)”, if there exists a sequence qn of positive integers with qn = o(n) such that for any two intervals I1 = {i1,…,ij} and I2 = {ij + qn + 1,…,ij + qn + k} separated by qn, we have

$$ \alpha_{n} = \max \big|\mathbb{P}\big(M(I_{1} \cup I_{2}) \leq x_{n} \big) - \mathbb{P}\big(M(I_{1}) \leq x_{n}\big)\mathbb{P}\big(M(I_{2}) \leq x_{n}\big)\big| \rightarrow 0, $$
(5)

where the maximum is taken over all positive integers i1,ij and k such that |I1|≥ qn, |I2|≥ qn and ij + qn + kn.

Definition 1 states a slightly weaker condition than the widely used D(xn) condition (Leadbetter 1983) in that only certain intervals I1 and I2 need to be considered in Eq. 5 rather than arbitrary sets of integers, so that all examples in the literature of sequences satisfying D(xn) also satisfy AIM(xn). For example, stationary Gaussian sequences with autocorrelation function ρn satisfying Berman’s condition, \(\rho _{n}\log n\to 0\) (Berman 1964), satisfy AIM(xn) for any sequence xn such that \(n\bar {F}(x_{n})\) is bounded and any qn = o(n) (Leadbetter et al. 1983, Lemma 4.4.1). The analogous result for non-stationary Gaussian sequences is given in Hüsler (1983), where Berman’s condition is replaced by \(r_{n}\log n \to 0\) with \(r_{n} = \sup \{|\rho (i,j)| : |i-j| \geq n\}\) and ρ(i,j) the correlation between Xi and Xj.

O’Brien (1987) showed that if \(\{X_{n}\}_{n=1}^{\infty }\) is a stationary positive Harris Markov sequence with separable state space S and \(f:S\to \mathbb {R}\) is a measurable function then the sequence Yn = f(Xn) satisfies AIM(xn) for any xn and qn = o(n) with \(q_{n} \to \infty \).

We note that Definition 1 states a property of the dependence structure of the sequence \(\{X_{n}\}_{n=1}^{\infty }\), with the specific marginal distributions playing essentially no role. In particular, if \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) and \(g:\mathbb {R}\to \mathbb {R}\) is a monotone increasing function then Yn = g(Xn) satisfies AIM(g(xn)) with the same qn.

The assumption that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) ensures the approximate independence of the block maxima of two sufficiently separated blocks. Lemma 1 below provides an upper bound for the degree of dependence of k block maxima for suitably separated blocks and will be useful in Section 2.2 when the limiting behaviour of \(\mathbb {P}(M_{n} \leq x_{n})\) is considered.

Lemma 1

Let \(\{X_{n}\}_{n=1}^{\infty }\) satisfy AIM(xn) and let I1,I2,…,Ik be distinct subintervals of {1,2,…,n} where k ≥ 2 and |Ii|≥ qn, 1 ≤ ik. Suppose that Ii and Ii+ 1 are separated by qn for 1 ≤ ik − 1. Then

$$ \big|\mathbb{P}(M(\cup_{i=1}^{k}I_{i}) \leq x_{n}) - \prod\limits_{i=1}^{k} \mathbb{P}(M(I_{i}) \leq x_{n}) \big| \leq (k-1)\alpha_{n} + 2(k-2)q_{n}\bar{F}(x_{n}). $$
(6)

2.2 Asymptotic distribution of Mn

In this section we investigate the limiting behaviour of \(\mathbb {P}(M_{n} \leq x_{n})\), with the main result being Theorem 1. In addition to assuming that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn), we will assume that the rate of growth of the sequence xn is controlled via

$$ n\bar{F}(x_{n}) \to \tau >0. $$
(7)

In the case of continuous marginal distributions, Eq. 7 is immediately satisfied by \(x_{n} = F^{-1}(1 - \tau /n)\). More generally, Theorem 1.7.13 of Leadbetter et al. (1983) guarantees the existence of a sequence xn satisfying (7) when F is in the domain of attraction of any of the three classical extreme value distributions (de Haan and Ferreira 2006, Section 1.2).

We use the standard technique of block-clipping (see, for example, Section 10.2.1 in Beirlant et al. 2004) to split the interval {1,2,…,n} into subintervals, or blocks, of alternating large and small lengths. Specifically, for sequences pn and qn such that qn = o(pn) and pn = o(n) we define

$$ \begin{array}{@{}rcl@{}} A_{i} &= \big\{(i-1)(p_{n}+q_{n})+1,\ldots, ip_{n} + (i-1)q_{n} \big \} \\ A_{i}^{*} & = \big\{ ip_{n} + (i-1)q_{n} + 1, \ldots, i(p_{n} + q_{n}) \big\}, \end{array} $$
(8)

for i = 1,2,…,rn, where rn = ⌊n/(pn + qn)⌋.
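For concreteness, a short sketch of the blocks in Eq. 8 follows; it is our own illustration, with 1-based indices matching the text.

```python
def blocks(n, p, q):
    """Return the long blocks A_i and the separating short blocks A_i^*
    of Eq. 8 as lists of 1-based indices, with r_n = n // (p + q)."""
    r = n // (p + q)
    A = [list(range((i - 1) * (p + q) + 1, i * p + (i - 1) * q + 1))
         for i in range(1, r + 1)]
    A_star = [list(range(i * p + (i - 1) * q + 1, i * (p + q) + 1))
              for i in range(1, r + 1)]
    return A, A_star

A, A_star = blocks(n=20, p=4, q=2)
# A      == [[1, 2, 3, 4], [7, 8, 9, 10], [13, 14, 15, 16]]
# A_star == [[5, 6], [11, 12], [17, 18]]
```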

If we take the sequence qn appearing in the construction of the blocks Ai and \(A_{i}^{*}\) to be the same as that in Definition 1, then Lemma 1 bounds the degree of dependence of the collection of random variables \(\{M(A_{i})\}_{i=1}^{r_{n}}\), and this allows us to prove Lemma 2 below which modifies Lemma 3.1 from O’Brien (1987) to allow for non-stationarity.

Lemma 2

Let \(\{X_{n}\}_{n=1}^{\infty }\) satisfy AIM(xn) and let the sequence pn be such that

$$ p_{n} = o(n), \quad n\alpha_{n} = o(p_{n}) \quad \text{and} \quad q_{n} = o(p_{n}). $$
(9)

Then if Eq. 7 holds, we have

$$ \mathbb{P}(M_{n} \leq x_{n}) - \prod\limits_{i=1}^{r_{n}} \mathbb{P}(M(A_{i}) \leq x_{n}) \rightarrow 0, $$
(10)

where the intervals \(\{A_{i}\}_{i=1}^{r_{n}}\) are as in Eq. 8.

Remark 1

Equation 10 follows easily from Eq. 6 by making the identification k = rn and using Eqs. 7 and 9. Additionally, if \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) then we can always find a sequence pn such that Eq. 9 holds, for example, by taking \(p_{n} = \lfloor { \{ n \max \limits (q_{n}, n\alpha _{n})\}^{1/2} }\rfloor \). Thus the only assumption in Lemma 2 beyond common margins is that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) for a sequence xn satisfying (7).

We can now state our main theorem.

Theorem 1

Under the same assumptions as in Lemma 2, we have

$$ \mathbb{P}(M_{n} \leq x_{n}) - \text{exp}\bigg\{-\sum\limits_{j=1}^{n} \mathbb{P}(X_{j} > x_{n}, M_{j,j+p_{n}} \leq x_{n}) \bigg\} \rightarrow 0, $$
(11)

and consequently

$$ \mathbb{P}(M_{n} \leq x_{n}) - F(x_{n})^{n\gamma_{n}} \rightarrow 0, $$
(12)

where

$$ \gamma_{n} = \frac{1}{n}\sum\limits_{j=1}^{n} \mathbb{P}(M_{j,j+p_{n}} \leq x_{n} \mid X_{j} > x_{n}). $$
(13)

As noted in Section 1, for independent sequences, Eq. 7 implies that \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau }\). For a random sequence satisfying the conditions of Lemma 2, the following result gives a necessary and sufficient condition for the convergence of \(\mathbb {P}(M_{n} \leq x_{n}).\)

Corollary 1

Under the same assumptions as in Lemma 2, \(\mathbb {P}(M_{n} \leq x_{n})\) converges if and only if \( \lim _{n\to \infty } \gamma _{n}\) exists, where γn is as in Eq. 13, in which case \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau \gamma }\) with \(\gamma = \lim _{n\to \infty } \gamma _{n}\in [0,1].\)

Corollary 1 follows from Eq. 12 since \(n\bar {F}(x_{n}) \to \tau \) if and only if \(F^{n}(x_{n}) \to e^{-\tau }\), which is easily seen by taking logarithms in the latter expression and using log(1 − t) = −t + o(t) as t → 0.

A basic question regarding the constant γ appearing in Corollary 1 is whether it is independent of the particular value of τ in Eq. 7, i.e., do we obtain the same limiting value of γn regardless of the specific sequence xn and τ used in Eq. 7? We will see in Section 2.3 that for sequences with periodic dependence this is indeed the case, and Theorem 2 gives sufficient conditions for this to hold more generally.

We now turn our attention to the conditional probabilities appearing in the summation (13), which contain local information regarding the strength of extremal clustering in the sequence \(\{X_{n}\}_{n=1}^{\infty }\).

Definition 2

Under the same assumptions as in Lemma 2, let \(\{f_{n}\}_{n=1}^{\infty }\) be the sequence of functions defined on \(\mathbb {N}\) by

$$ f_{n}(i) = \theta_{i,n} = \mathbb{P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i} > x_{n}), \quad i\in \mathbb{N}. $$
(14)

We define the extremal clustering function of \(\{X_{n}\}_{n=1}^{\infty }\) to be the function \(\theta : \mathbb {N} \rightarrow [0, 1]\) given by

$$ \theta_{i} = \lim_{n\to\infty} f_{n}(i) $$
(15)

provided the limit exists.

In the special case that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) is stationary, the extremal clustering function is simply a constant function equal to the extremal index of the sequence. In the general case, if we think of the index i in Xi as denoting time, then we may regard 𝜃i as the extremal index at time i. The definition of 𝜃i entails pointwise convergence of the sequence of approximations \(\{f_{n}\}_{n=1}^{\infty }\) in Eq. 14. When this convergence is suitably uniform and the extremal clustering function is Cesàro summable, we obtain the following result.

Theorem 2

Suppose \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) with \(n\bar {F}(x_{n}) \to \tau > 0\). Assume that \(\{\theta _{i}\}_{i=1}^{\infty }\) is Cesàro summable and

$$ \max_{1\leq i \leq n}| \theta_{i} - \theta_{i,n} | \to 0 \quad \text{as } n\to \infty, $$
(16)

where 𝜃i,n = fn(i) is as in Eq. 14. Then \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau \gamma }\) where

$$ \gamma = \lim_{n\to\infty} \frac{1}{n} \sum\limits_{i=1}^{n} \theta_{i}. $$
(17)

Moreover, if \(\{y_{n}\}_{n=1}^{\infty }\) is a sequence of real numbers such that \(n\bar {F}(y_{n}) \to \tau ^{\prime }\) with \(\tau ^{\prime } \leq \tau \) then \(\mathbb {P}(M_{n} \leq y_{n}) \to e^{-\tau ^{\prime }\gamma }\) with γ as in Eq. 17.

As with the constant γ in Corollary 1, we may ask whether the extremal clustering function is independent of the value of τ and the sequence xn used in Eq. 7. Although we do not attempt to answer this in full generality, we note that, as with the conditional probability formulation of the extremal index, for most sequences of practical interest the formula defining 𝜃i may be reduced to a form that makes no explicit reference to the sequences xn and pn. For example, under the additional assumption, due to Smith (1992), that for any xn in Theorem 1 we have

$$ \lim_{p\to\infty}\lim_{n\to\infty} \sum\limits_{k=p}^{p_{n}}\mathbb{P}(X_{i+k} > x_{n} \mid X_{i} > x_{n}) = 0, $$
(18)

for each i, then Eq. 15 reduces to

$$ \theta_{i} = \lim_{p\to\infty}\lim_{x\to x_{F}} \mathbb{P}(M_{i,i+p} \leq x \mid X_{i} > x). $$
(19)

Another common assumption for statistical applications is the D(k)(xn) condition of Chernick et al. (1991) which we define below in a slightly modified form for our non-stationary setting.

Definition 3

A sequence \(\{X_{n}\}_{n=1}^{\infty }\) as in Theorem 1 is said to satisfy the D(k)(xn) condition, where \(k\in \mathbb {N}\), if

$$ n \mathbb{P}(X_{i} > x_{n}, M_{i,i+k-1} \leq x_{n}, M_{i+k-1,i+p_{n}} > x_{n}) \to 0 \quad \text{as } n\to\infty $$
(20)

for each \(i\in \mathbb {N}\). For the case k = 1, we define \(M_{i,i} = -\infty \).

Note that it is assumed in Definition 3 that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) in conjunction with Eq. 20. Whereas Eq. 5 limits the degree of long range dependence in the sequence, Eq. 20 is a local mixing condition ensuring that the probability of again exceeding the threshold xn within a block of pn observations, after falling below it for k − 1 consecutive observations, decays to zero sufficiently rapidly as \(n\to \infty \). The case k = 1 implies that, in the limit, any exceedances of a high threshold occur in isolation, and is implied in the stationary case by the D′(xn) condition of Leadbetter et al. (1983), Chapter 3. One might expect that a more natural condition in our non-stationary setting would be to replace the constant k in Eq. 20 by ki to reflect possible variations in the strength of local dependence. However, when Eq. 20 holds for some particular k, it also holds for any \(k^{\prime } > k,\) and so provided that the sequence \(\{k_{i}\}_{i=1}^{\infty }\) is bounded we may set \(k = \max \limits \{k_{i} : i\in \mathbb {N}\}\) and obtain (20) for each i. Thus the assumption of a single value of k in Definition 3 allows for variations in the strength of local dependence while at the same time preventing it from persisting too strongly at arbitrarily many lags. If, whenever xn is a sequence as in Theorem 1, the D(k)(xn) condition holds, then Eq. 15 reduces to

$$ \theta_{i} = \lim_{x\to x_{F}}\mathbb{P}(M_{i,i+k-1} \leq x \mid X_{i} > x). $$
(21)

We will assume without further comment for the rest of the paper that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) has a well-defined extremal clustering function as may arise from assumptions (18) or (20).

2.3 Periodic dependence

In this section we assume that the sequence \(\{X_{n}\}_{n=1}^{\infty }\) has a more refined structure than in the previous sections, namely that of periodic dependence, under which the results of Section 2.2 may be simplified considerably.

Definition 4

A sequence \(\{X_{n}\}_{n=1}^{\infty }\) with common marginal distributions is said to have periodic dependence if there exists \(d\in \mathbb {N}\) such that \((X_{t_{1}},\ldots ,X_{t_{k}}) \overset {D}{=} (X_{t_{1} + d},\ldots ,X_{t_{k} + d})\) for all \(t_{1},\ldots ,t_{k}\in \mathbb {N}.\) The smallest d with this property is called the fundamental period.

Whereas for a strictly stationary sequence an arbitrary shift in time leaves the finite-dimensional distributions unchanged, for a sequence with periodic dependence only time shifts that are a multiple of the fundamental period do so. In particular, \(M_{a,a+b} \overset {D}{=} M_{c,c+b}\) when ac (mod d). Such sequences often mimic the dependence structure of certain environmental time series where we might expect a fundamental period of one year.

The following result concerning the convergence of \(\mathbb {P}(M_{n} \leq x_{n})\) shows that Theorem 3.7.1 of Leadbetter et al. (1983) for stationary sequences also holds for non-stationary sequences with periodic dependence.

Theorem 3

Let \(\{X_{n}\}_{n=1}^{\infty }\) have periodic dependence and satisfy the conditions of Lemma 2, with xn satisfying (7) for some τ > 0. Suppose that \(y_{n} = y_{n}(\tau ^{\prime })\) is a sequence of real numbers defined for each \(\tau ^{\prime }\) with \(0 < \tau ^{\prime } \leq \tau \) so that \(n\bar {F}(y_{n}) \to \tau ^{\prime }\). Then there exist constants γ and \(\gamma ^{\prime }\) with \(0 \leq \gamma \leq \gamma ^{\prime } \leq 1\) such that

$$ \begin{array}{@{}rcl@{}} \limsup\limits_{n\to\infty} \mathbb{P}\{M_{n} \leq y_{n}(\tau^{\prime})\} &=& e^{-\tau^{\prime}\gamma} \\ \liminf\limits_{n\to\infty} \mathbb{P}\{M_{n} \leq y_{n}(\tau^{\prime})\} &=& e^{-\tau^{\prime}\gamma^{\prime}} \end{array} $$

for all \(0 < \tau ^{\prime } \leq \tau \). Hence if \(\mathbb {P}\{M_{n} \leq y_{n}(\tau ^{\prime })\}\) converges for some \(\tau ^{\prime }\) with \(0 < \tau ^{\prime } \leq \tau \), then \(\gamma = \gamma ^{\prime }\) and \(\mathbb {P}\{M_{n} \leq y_{n}(\tau ^{\prime })\} \to e^{-\tau ^{\prime }\gamma }\) for all such \(\tau ^{\prime }\).

Although Theorem 3 makes no reference to the extremal clustering function, when \(\mathbb {P}(M_{n} \leq x_{n})\) converges, the constant γ in Theorem 3 is identified by Corollary 1 as \(\gamma = \lim _{n\to \infty }\gamma _{n}\) with γn as in Eq. 13. Due to periodicity we obtain the simplified formula \(\gamma = d^{-1}{\sum }_{i=1}^{d}\theta _{i},\) and the extremal clustering function is determined by the d values \(\{\theta _{i}\}_{i=1}^{d}\) which repeat cyclically. Moreover, for sequences with periodic dependence, the convergence statement (16) can be strengthened to uniform convergence since \(\sup _{i\in \mathbb {N}}| \theta _{i} - \theta _{i,n} | = \max \limits _{1\leq i \leq d}| \theta _{i} - \theta _{i,n} |. \)

The following result is an immediate consequence of Theorem 3.

Corollary 2

Let \(\{X_{n}\}_{n=1}^{\infty }\) have periodic dependence with common marginal distribution function F. For each τ > 0, let xn(τ) be a sequence such that \(n\bar {F}(x_{n}(\tau )) \to \tau \) and suppose that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn(τ)) for each such τ. If \(\mathbb {P}\{M_{n} \leq x_{n}(\tau )\}\) converges for a single τ > 0 then it converges for all τ > 0, and in particular \(\mathbb {P}\{M_{n} \leq x_{n}(\tau )\} \to e^{-\tau \gamma }\) for some γ ∈ [0,1].

2.4 Interexceedance times

Ferro and Segers (2003) provided a method for estimating the extremal index of a stationary sequence without the need for identifying independent clusters of extremes. This was achieved by considering the distribution of the time between two exceedances of a threshold u, i.e.,

$$ T(u) = \min\{ n\geq 1 : X_{n+1} > u \mid X_{1} > u \}, $$
(22)

as u approaches xF. In particular, it was shown that the normalized interexceedance time \(\bar {F}(x_{n})T(x_{n})\) converges in distribution as \(n\to \infty \) to a mixture of a point mass at zero, with probability 1 − 𝜃, and an exponential random variable with mean \(\theta ^{-1}\), with probability 𝜃. The mixture arises from the fact that the interexceedance times can be classified into two categories: within cluster and between cluster times. The mass at zero stems from the fact that the within cluster times, which tend to be small relative to the between cluster times, are dominated by the factor \(\bar {F}(x_{n})\).

In the stationary case, conditioning on the event X1 > u in Eq. 22 may be replaced with Xi > u, and Xn+1 replaced by Xn+i, for any \(i\in \mathbb {N},\) without affecting the distribution of T(u). In the non-stationary case we consider, for each \(i \in \mathbb {N}\) and threshold u, the random variable Ti(u) defined by

$$ T_{i}(u) = \min\{n\geq 1 : X_{n+i} > u \mid X_{i} > u \}, $$
(23)

whose distribution in general depends on i. We find that the distribution of \(\bar {F}(x_{n})T_{i}(x_{n})\) converges as \(n\to \infty \) to a mixture of a mass at zero, with probability 1 − 𝜃i, and an exponential random variable with mean \(\gamma ^{-1}\), with probability 𝜃i. As in Ferro and Segers (2003), a slightly stronger mixing condition is required to derive this result than was needed for Theorem 1. We denote by \(\mathcal {F}_{j_{1},j_{2}}(u)\) the σ-algebra generated by the events {Xi > u : j1ij2}, \(j_{1},j_{2}\in \mathbb {N}\), and we define the mixing coefficients

$$ \begin{array}{@{}rcl@{}} \alpha^{*}_{n,q}(u) = \max_{1\leq l\leq n-q} \sup \mid \mathbb{P}(E_{2} \mid E_{1}) - \mathbb{P}(E_{2}) \mid, \end{array} $$
(24)

where the supremum is over all \(E_{1} \in \mathcal {F}_{1,l}(u)\) with \(\mathbb {P}(E_{1})>0\) and \(E_{2}\in \mathcal {F}_{l+q,n}(u).\) We will assume the existence of a sequence qn = o(n) such that \(\alpha ^{*}_{cn,q_{n}}(x_{n}) \to 0\) for all c > 0. This implies that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) with the same choice of qn and so we may find a sequence pn so that (9) is satisfied. We define 𝜃i,n to be as in Eq. 14 and assume a form of convergence slightly stronger than in Eq. 16 but weaker than the uniform convergence \(\sup _{i\in \mathbb {N}}| \theta _{i} -\theta _{i,n}| \to 0.\)

The limiting distribution of the normalized interexceedance times is given in Theorem 4.

Theorem 4

Let \(\{X_{n}\}_{n=1}^{\infty }\) be a sequence of random variables with common marginal distribution F and \(\{x_{n}\}_{n=1}^{\infty }\) a sequence of real numbers such that \(n\bar {F}(x_{n}) \to \tau > 0\). Suppose that there is a sequence of positive integers qn = o(n) such that \(\alpha ^{*}_{cn,q_{n}}(x_{n}) \to 0\) and \(\max \limits _{1\leq i \leq cn}| \theta _{i} -\theta _{i,n}| \to 0\) for all c > 0. Then, if \(\{\theta _{i}\}_{i=1}^{\infty }\) is Cesàro summable we have, for each fixed \(i\in \mathbb {N}\) and t > 0

$$ \mathbb{P}(\bar{F}(x_{n})T_{i}(x_{n}) > t) \to \theta_{i} \text{exp}(-\gamma t). $$
(25)

3 Estimation with a focus on periodic sequences

In this section we consider moment and maximum likelihood estimators for 𝜃i and γ based on the limiting distribution of normalized interexceedance times given in Theorem 4. We first show that the intervals estimator of Ferro and Segers (2003) may be used to estimate 𝜃i and then consider likelihood based estimation along the lines of Süveges (2007). For simplicity, we focus our discussion on the case of periodic dependence as in Definition 4. Such an assumption reduces estimation of the extremal clustering function to estimating the vector 𝜃 = (𝜃1,…,𝜃d) with \(\gamma = d^{-1}{\sum }_{i=1}^{d}\theta _{i}\), where d is the fundamental period which, for simplicity, we assume to be known a priori. Knowledge of d is important for the moment based estimators of Section 3.1, where one needs replications of interexceedance times in order to use the estimators, but can easily be relaxed for likelihood based inference.

3.1 Moment based estimators

Theorem 4 implies that the first two moments of \(\bar {F}(u)T_{i}(u)\) satisfy \(\mathbb {E}\{\bar {F}(u)T_{i}(u) \} = \theta _{i} / \gamma + o(1) \) and \(\mathbb {E}[\{ \bar {F}(u)T_{i}(u) \}^{2} ] = 2\theta _{i} / \gamma ^{2} + o(1)\) as uxF. Assuming the threshold is chosen to be suitably large so that the o(1) terms can be neglected, these two equations can be solved with respect to the unknown parameters to give

$$ \gamma = \frac{2 \mathbb{E}\{\bar{F}(u)T_{i}(u)\}}{\mathbb{E}[\{ \bar{F}(u)T_{i}(u) \}^{2}]} \quad \text{and} \quad \theta_{i} = \frac{ 2 [\mathbb{E}\{\bar{F}(u)T_{i}(u)\}]^{2} }{\mathbb{E}[\{ \bar{F}(u)T_{i}(u) \}^{2}] } = \frac{ 2 \{\mathbb{E}(T_{i}(u)) \}^{2} }{\mathbb{E}\{ T_{i}(u)^{2} \}}, \quad 1\leq i \leq d. $$
(26)
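To see where these moments come from, note that for a nonnegative random variable Y one has \(\mathbb {E}(Y^{m}) = {\int _{0}^{\infty }} m t^{m-1} \mathbb {P}(Y > t) \mathrm {d}t\), so that Eq. 25, ignoring the o(1) terms, gives

$$ \mathbb{E}\{\bar{F}(u)T_{i}(u)\} \approx {\int_{0}^{\infty}} \theta_{i} e^{-\gamma t} \mathrm{d}t = \frac{\theta_{i}}{\gamma}, \qquad \mathbb{E}[\{\bar{F}(u)T_{i}(u)\}^{2}] \approx {\int_{0}^{\infty}} 2t \theta_{i} e^{-\gamma t} \mathrm{d}t = \frac{2\theta_{i}}{\gamma^{2}}. $$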

A complication that arises in the non-stationary setting is that, since 𝜃i is defined via a conditional probability given the event Xi > u, if Xi does not exceed the threshold u in the observed sample then there are no interexceedance times with which to estimate 𝜃i. This problem does not arise in the stationary case, where every interexceedance time may be used to estimate the extremal index 𝜃.

In order to estimate 𝜃i then, it is natural to assume that the extremal clustering function is structured in some way, e.g., periodic or piecewise constant. Making such an assumption allows us to use multiple interexceedance times to estimate 𝜃i. Focusing on the case where \(\{X_{n}\}_{n=1}^{\infty }\) has periodic dependence with fundamental period d, all exceedances of the threshold u occurring at points that are separated by a multiple of d give rise to interexceedance times that may be used to estimate the same value of the extremal clustering function. More precisely, suppose that X1,…,Xn is a sample of size n of the process with exceedance times E = {1 ≤ in : Xi > u}, and corresponding interexceedance times \(I = \{T_{i}(u): i\in E \backslash \{\max \limits (E)\} \},\) with Ti(u) as in Eq. 23. The set of interexceedance times that may be used for estimating 𝜃i is the subset \(I_{i} \subseteq I\) defined by Ii = {Tj(u) ∈ I : ji (mod d)}. If |Ii| = Ni, then we may relabel the elements of Ii as \(I_{i} = \{T_{i}^{(j)} \}_{j=1}^{N_{i}}\) where now the subscript remains fixed. Making further, more refined assumptions regarding the nature of the periodicity of the process under consideration may give rise to different sets Ii. For example, in an environmental time series setting it may be reasonable to assume that the extremal clustering function is piecewise constant within months or seasons, so that all interexceedance times that correspond to exceedances within the same calendar month or season belong to the same Ii.
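A minimal sketch of this bookkeeping follows; the function name and the 1-based indexing convention are ours.

```python
import numpy as np

def interexceedance_sets(x, u, d):
    """Group the observed interexceedance times T_j(u) into the sets
    I_1, ..., I_d according to the residue class modulo d of the index
    j of the initiating exceedance (1-based, to match the text)."""
    E = np.flatnonzero(x > u) + 1        # exceedance times, 1-based
    T = np.diff(E)                       # T_j(u) for j in E \ {max(E)}
    I = {i: [] for i in range(1, d + 1)}
    for j, t in zip(E[:-1], T):
        I[(j - 1) % d + 1].append(int(t))
    return I
```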

Equation 26 suggests the estimator

$$ \hat{\theta}_{i} = \frac{2 \big({\sum}_{j=1}^{N_{i}} T_{i}^{(j)} \big)^{2} }{ N_{i} {\sum}_{j=1}^{N_{i}} (T_{i}^{(j)} )^{2} }, $$
(27)

whose bias we now investigate. From Eq. 25 we have that for \(n \in \mathbb {N}\)

$$ \mathbb{P}(T_{i}(x_{n}) > n ) = \theta_{i} F(x_{n})^{n\gamma} + o(1), $$

which motivates consideration of the positive integer valued random variable T defined by

$$ \mathbb{P}(T> n) = \theta_{i} p^{n\gamma}, \quad \text{for } n\geq 1, $$

where p ∈ (0,1) and 𝜃i,γ ∈ (0,1] and we may identify p with F(xn). In a similar manner to Ferro and Segers (2003), we find that \(\mathbb {E}(T) = 1 + \theta _{i} p^{\gamma } (1 - p^{\gamma } )^{-1} \) and \(\mathbb {E}(T^{2}) = 1 + \theta _{i} p^{\gamma } (1 - p^{\gamma } )^{-1} + 2 \theta _{i} p^{\gamma } (1 - p^{\gamma })^{-2}\), so that upon simplification we find that

$$ \frac{ 2\big\{\mathbb{E}(T) \big\}^{2} }{\mathbb{E}(T^{2}) } = \frac{2(1 - p^{\gamma} + \theta_{i} p^{\gamma} )^{2} }{ (1-p^{\gamma})^{2} + \theta_{i} p^{\gamma}(1-p^{\gamma} ) + 2 \theta_{i} p^{\gamma} }. $$
(28)

A Taylor expansion of the right hand side of Eq. 28 around p = 1 gives

$$ \frac{ 2\big\{\mathbb{E}(T) \big\}^{2} }{\mathbb{E}(T^{2}) } = \theta_{i} +\gamma(2 - 3\theta_{i}/2)(1 - p) + O\big\{ (1-p )^{2} \big\}, \quad \text{as } p \to 1, $$

so that the first order bias of \(\hat {\theta }_{i}\) is \(\gamma (2 - 3\theta _{i}/2)\bar {F}(x_{n})\). On the other hand, since

$$ \theta_{i} = \frac{2 \big\{\mathbb{E}(T-1) \big\}^{2}}{\mathbb{E}\{(T-1)(T-2)\}}, $$

this motivates the estimator

$$ \tilde{\theta}_{i} = \frac{2 \big( {\sum}_{j=1}^{N_{i}} (T_{i}^{(j)} - 1) \big)^{2} } { N_{i} {\sum}_{j=1}^{N_{i}} (T_{i}^{(j)} -1)(T_{i}^{(j)} - 2)}, $$
(29)

whose first order bias is zero. This estimator forms the key component of the intervals estimator of Ferro and Segers (2003), which we can use to estimate 𝜃i. We note that \(\tilde {\theta }_{i} \) may take values greater than 1 and is not defined if max(Ii) ≤ 2 as then the denominator in Eq. 29 is zero. In order to deal with these cases, the intervals estimator \(\theta _{i}^{*}\) of 𝜃i is defined as

$$ \theta_i^{*} = \begin{cases} \min\{1, \hat{\theta}_i\} & \text{if } \max(I_i) \leq 2, \\ \min\{1, \tilde{\theta}_i \} & \text{if } \max(I_i) > 2. \end{cases} $$
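A sketch of the resulting estimator, assuming the interexceedance times Ii have already been collected (e.g., by the interexceedance_sets sketch above); γ may then be estimated by the mean of the d local estimates, as discussed below.

```python
import numpy as np

def theta_star(T):
    """Intervals estimator theta_i^* of theta_i from the interexceedance
    times T = I_i, combining Eqs. 27 and 29 with the capping rule above."""
    T = np.asarray(T, dtype=float)
    if T.size == 0:
        return np.nan          # no exceedances observed in this residue class
    if T.max() <= 2:
        est = 2 * T.sum() ** 2 / (T.size * (T ** 2).sum())                    # Eq. 27
    else:
        est = 2 * (T - 1).sum() ** 2 / (T.size * ((T - 1) * (T - 2)).sum())   # Eq. 29
    return min(1.0, est)

# gamma estimated as the mean of the d local estimates:
# gamma_hat = np.mean([theta_star(I[i]) for i in range(1, d + 1)])
```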

While Eq. 26 also suggests an estimator for γ, this is based only on the interexceedances relevant to estimating 𝜃i and also requires an estimate of \(\bar {F}(u)\). One possibility is to obtain d such estimates and take the mean of these as the estimate of γ. However, this estimator need not respect the relation \(\gamma = d^{-1}{\sum }_{i=1}^{d} \theta _{i}\), a consequence of the fact that we dropped the o(1) terms when solving the first two moment equations. In the examples that we consider in Section 4, we estimate γ using the mean of the estimates for the 𝜃i values.

3.2 Maximum likelihood estimation

Theorem 4 also allows for the construction of the likelihood function for the vector of unknown parameters. This is an attractive approach due to the modelling possibilities that become available; however, as discussed in Ferro and Segers (2003) in the stationary case, problems arise with maximum likelihood estimation due to uncertainty in how to assign interexceedance times to the components of the limiting mixture distribution. Since the asymptotically valid likelihood is used as an approximation at some subasymptotic threshold u, all observed normalized interexceedance times are strictly positive. Assigning all interexceedance times to the exponential part of the limiting mixture means that they are all being classified as between cluster times. This is tantamount to exceedances of a large threshold occurring in isolation, and so the maximum likelihood estimator based on this, typically misspecified, likelihood converges in probability to 1 regardless of the true underlying value of 𝜃.

This problem was addressed in Süveges (2007) for sequences satisfying the D(2)(xn) condition, i.e., the case k = 2 in Eq. 20. For such sequences, in the limit as \(n\to \infty \), exceedances above xn cluster into independent groups of consecutive exceedances, so that all observed interexceedance times equal to one are assigned to the zero component of the mixture likelihood. On the other hand, all interexceedance times greater than one are assigned to the exponential component of the likelihood. It was found that, when the D(2)(xn) condition is satisfied, maximum likelihood estimation outperforms the intervals estimator in terms of lower root mean squared error. The consecutive exceedances model of clusters implied by D(2)(xn) is in contrast to the general situation where within clusters, exceedances may be separated by observations that fall below the threshold.

If we were to make the D(2)(xn) assumption in our non-stationary setting, so that the consecutive exceedances model for clusters is accurate, then with \(I_{i} = \{T_{i}^{(j)} \}_{j=1}^{N_{i}}\) the interexceedance times relevant for estimating 𝜃i as in Section 3.1, we obtain the likelihood function as

$$ L({\theta} ; I ) = \prod\limits_{i=1}^{d} L_{i}({\theta} ; I_{i} ) $$

where \(I = \cup _{i=1}^{d}I_{i}\) is the set of all interexceedance times and

$$ L_{i}({\theta} ; I_{i}) = \prod\limits_{j=1}^{N_{i}} (1-\theta_{i})^{\mathbbm{1}[T^{(j)}_{i}=1] }\big\{\theta_{i} \gamma \exp(-\gamma \bar{F}(x_{n})T_{i}^{(j)} ) \big\}^{\mathbbm{1}[T^{(j)}_{i} >1]}. $$

The full log-likelihood is then

$$ \begin{array}{@{}rcl@{}} l({\theta} ; I) &= & \sum\limits_{i=1}^{d}(N_{i} - n_{i})\log(1-\theta_{i}) + \sum\limits_{i=1}^{d}n_{i} \log(\theta_{i}) + \big(\sum\limits_{i=1}^{d}n_{i} \big)\log(\gamma) \\ && -\gamma \bar{F}(x_{n})\sum\limits_{i=1}^{d}\sum\limits_{j=1}^{N_{i}}(T_{i}^{(j)}-1) - \gamma\bar{F}(x_{n})\sum\limits_{i=1}^{d} n_{i}, \end{array} $$
(30)

where \(\gamma = d^{-1} {\sum }_{i=1}^{d}\theta _{i},\) \(n_{i} = {\sum }_{j=1}^{N_{i}}\mathbbm {1}[T^{(j)}_{i} > 1],\) and in practice \(\bar {F}(x_{n})\) must be replaced with an estimate. Unlike in the stationary case, the likelihood equations do not have a closed-form solution, essentially due to the dependence of γ on all the 𝜃i. Equation 30, however, is easily optimized numerically provided d is not too large; a sketch of such an optimization is given at the end of this section. If d is large, it is more natural to parameterize 𝜃i in terms of a small number of parameters which we may estimate by maximum likelihood, or to consider non-parametric estimation along the lines of Einmahl et al. (2016).

We may generalise this idea and assign all interexceedance times less than or equal to some value k to the zero component of the likelihood, so that the corresponding expression for Li becomes

$$ L_{i}({\theta} ; I_{i} ) = \prod\limits_{j=1}^{N_{i}} (1-\theta_{i})^{\mathbbm{1}[T^{(j)}_{i} \leq k] }\big\{\theta_{i} \gamma \exp(-\gamma \bar{F}(x_{n})T_{i}^{(j)} )\big\}^{\mathbbm{1}[T^{(j)}_{i}>k] }. $$
(31)

This may be justified by the assumption that the sequence satisfies the D(k+ 1)(xn) condition. Selection of an appropriate value of k is equivalent to the selection of the run length for the runs estimator, and this problem is considered in the stationary case in Süveges and Davison (2010) and Juan Cai (2019). However, in a non-stationary setting, where the clustering characteristics of the sequence may change in time, the appropriate value of k may also be time varying, so that k may be replaced with ki in Eq. 31. Although, as discussed in Section 2.2, we may take a constant value of k in the definition of D(k)(xn), for the purposes of estimating 𝜃i one wants to select, for each i, the smallest k = ki such that Eq. 20 is satisfied (Hsing 1993). If too small a value is selected for ki then some of the interexceedance times may be wrongly assigned to the exponential component of the likelihood, leading to an overestimate of 𝜃i, whereas if ki is selected too large then we tend to underestimate 𝜃i.
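As an illustration of how the likelihood may be optimized numerically, the sketch below codes the log-likelihood implied by Eq. 31 (with k = 1 recovering the assignment underlying Eq. 30) and passes it to a generic box-constrained optimizer. The function names are ours, and in practice \(\bar {F}(x_{n})\) would be replaced by an empirical estimate such as the proportion of observations exceeding the threshold.

```python
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(theta, I, Fbar_u, k):
    """Negative log-likelihood from Eq. 31. I maps i -> interexceedance
    times I_i, Fbar_u estimates the exceedance probability, and k is the
    run parameter; gamma = mean(theta) is enforced by construction."""
    gamma = np.mean(theta)
    ll = 0.0
    for th, T in zip(theta, I.values()):
        T = np.asarray(T, dtype=float)
        n_i = np.count_nonzero(T > k)          # 'between cluster' times
        if T.size > n_i:                       # 'within cluster' times
            ll += (T.size - n_i) * np.log(1.0 - th)
        ll += n_i * (np.log(th) + np.log(gamma))
        ll -= gamma * Fbar_u * T[T > k].sum()
    return -ll

# Illustrative call with d components, each theta_i kept inside (0, 1):
# result = minimize(negative_log_likelihood, x0=np.full(d, 0.5),
#                   args=(I, Fbar_u, k), method="L-BFGS-B",
#                   bounds=[(1e-6, 1 - 1e-6)] * d)
```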

4 Examples

In this section we consider two simple examples of non-stationary Markov sequences with a periodic dependence structure and common marginal distributions. The first is the Gaussian autoregressive model

$$ X_{n+1} = \rho_{n} X_{n}+ \epsilon_{n}, \quad n \geq 1, $$
(32)

where \(\epsilon _{n} \sim N(0, 1-{\rho _{n}^{2}})\), |ρn| < 1 and \(X_{1} \sim N(0,1)\). In our second example, we consider a model where (Xn, Xn+1) follows a bivariate logistic distribution with joint distribution function

$$ F(x_{n}, x_{n+1}) = \exp\big\{- (x_{n}^{-1/\alpha_{n}} + x_{n+1}^{-1/\alpha_{n}})^{\alpha_{n}} \big\}, \quad n\geq 1 , $$
(33)

αn ∈ (0,1] and \(X_{1} \sim \text {Fr\'{e}chet}(1)\) so that \(\mathbb {P}(X_{1} \leq x) = e^{-1/x}, x \geq 0.\) For the Gaussian model, no limiting extremal clustering occurs at any point in the sequence, so that 𝜃i = 1 for each i, in contrast to the logistic model where 𝜃i < 1 for each i.

For sufficiently well behaved stationary Markov sequences, mixing conditions much stronger than those considered in Section 2 hold. For example, for the stationary Gaussian autoregressive sequence, with ρn = ρ in Eq. 32 for all n ≥ 1, Theorems 1 and 2 of Athreya and Pantula (1986) show that the mixing conditions of Theorems 1 and 4 hold for any sequence qn with \(q_{n} \to \infty \) and qn = o(n), and for any xn. Analogous results also hold for the non-stationary models that we consider in this section; see for example Bradley (2005), Theorem 3.3, and Davydov (1973), Theorem 4.

4.1 Gaussian autoregressive model

Stationary sequences \(\{X_{n}\}_{n=1}^{\infty }\), where each Xi is a standard Gaussian random variable, are extensively studied in Chapter 4 of Leadbetter et al. (1983). It is shown there that if the lag n autocorrelation ρ(n) satisfies ρ(n)log n → 0, then the extremal index 𝜃 of the sequence equals one and so no limiting extremal clustering occurs. Thus, the stationary autoregressive sequence with ρn = ρ in Eq. 32 for all n ≥ 1 has extremal index one, provided |ρ| < 1. This is a special case of the more general result that a stationary asymptotically independent Markov sequence has an extremal index of one (Smith 1992). We say that the stationary sequence \(\{X_{n}\}_{n=1}^{\infty }\) is asymptotically independent at lag k if χ(k) = 0 where

$$ {\chi}(k) = \lim_{u\to x_F}\mathbb{P}(X_{n+k} > u \mid X_n > u), \quad k\geq 1, $$

and asymptotically independent if χ(k) = 0 for all k (Ledford and Tawn 2003).

Here, we consider the non-stationary autoregressive model (32) and specify a periodic lag one correlation function \( \rho _{n+1} = 0.5 + 0.25 \sin \limits (2\pi n/7)\) for n ≥ 0. Applying Theorem 6.3.4 of Leadbetter et al. (1983), and comparing the non-stationary sequence to an independent standard Gaussian sequence, we deduce that \(\mathbb {P}(M_{n} \leq x_{n}) - {\Phi }(x_{n})^{n} \to 0\) as \(n \to \infty \) where Φ is the standard Gaussian distribution function, and thus conclude that γ = 1 and 𝜃i = 1 for i = 1,…,7. The same conclusion may also be drawn by applying Theorem 4.1 of Hüsler (1983), which shows that if xn satisfies (7) then \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau }.\)
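A single realization of this experiment may be sketched as follows, reusing the interexceedance_sets and theta_star sketches from Section 3.1; the sample size and threshold are illustrative choices matching the settings described below.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_periodic_ar(n, rho):
    """Simulate Eq. 32 with X_1 ~ N(0,1); rho[m] is the correlation
    linking X_{m+1} and X_{m+2}, i.e., rho_{m+1} in the text."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        r = rho[t - 1]
        x[t] = r * x[t - 1] + rng.normal(scale=np.sqrt(1.0 - r ** 2))
    return x

n = 10 ** 5
rho = 0.5 + 0.25 * np.sin(2 * np.pi * np.arange(n) / 7)   # rho_{n+1}, n >= 0
x = simulate_periodic_ar(n, rho)
u = np.quantile(x, 0.95)                                  # threshold q_{0.05}
I = interexceedance_sets(x, u, d=7)
theta_hat = [theta_star(I[i]) for i in range(1, 8)]       # theta_1, ..., theta_7
gamma_hat = float(np.mean(theta_hat))
```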

We simulated 1000 realizations of this sequence of length \(10^{4}\) and, for each realization, estimated 𝜃1,…,𝜃7 and γ for a range of high thresholds, using both the intervals estimator and maximum likelihood with k in Eq. 31 equal to zero and one. We then repeated this procedure for sequences of length \(10^{5}\) and \(10^{6}\). We found that the maximum likelihood estimator with k = 0 gave by far the best performance as measured by root mean squared error in γ. In fact, in this case the 0.025 and 0.975 empirical quantiles of the estimated values of γ were both 1 to two decimal places in all simulations. This is not surprising since selecting k = 0 ensures that all interexceedance times have the correct asymptotic classification as between cluster times. However, in a real data example such a level of prior knowledge regarding asymptotic independence is not realistic and would render estimation redundant. Although maximum likelihood estimation with k = 1 performed slightly worse than the intervals estimator, both methods produced broadly similar results.

Table 1 shows the 0.025 and 0.975 empirical quantiles of the parameter estimates obtained using the intervals estimator. In the table, u = qp denotes the threshold that is exceeded with probability p at each time point, i.e., \(\mathbb {P}(X_{i} > q_{p}) = p\). Although the true value of each 𝜃i is 1, so that no extremal clustering occurs in the limit as \(u\to \infty \), clustering may occur at subasymptotic levels. Moreover, there will tend to be more subasymptotic clustering in the sequence at points with a larger lag one autocorrelation, i.e., larger ρi. This point has been thoroughly discussed in the context of stationary sequences and estimation of the extremal index (Ancona-Navarrete and Tawn 2000; Eastoe and Tawn 2012) and leads to the notion of a subasymptotic or threshold based extremal index.

Table 1 0.025 and 0.975 empirical quantiles of the estimates of 𝜃1,…,𝜃7,γ, in the Gaussian autoregressive model using the intervals estimator

4.2 Bivariate logistic dependence

The stationary logistic model, that is, Eq. 33 with αn = α for all n ≥ 1, has been thoroughly studied (Smith et al. 1997; Ledford and Tawn 2003; Süveges 2007). The parameter α controls the strength of dependence between adjacent terms in the sequence, with α = 1 corresponding to independence and α → 0 giving complete dependence. Such a sequence exhibits asymptotic dependence provided α < 1, in particular, \(\lim _{u\to \infty } \mathbb {P}(X_{n+1} > u \mid X_{n} > u) = 2 - 2^{\alpha }.\) By exploiting the Markov structure of the sequence, precise calculation of 𝜃 can be achieved using the numerical methods described in Smith (1992), where it is found for example that the sequence with α = 1/2 has 𝜃 = 0.328, and moreover, Eq. 18 is shown to hold for all α ∈ (0,1]. The case of α = 1/2 is also considered in Süveges (2007) where, based on diagnostic plots, it is concluded that the D(2)(xn) condition is not satisfied for this sequence, and moreover, the maximum likelihood estimator for 𝜃 based on a run length of k = 1 has bias of around 20%. Süveges and Davison (2010) find that a more suitable run length is k = 5, and in this case the maximum likelihood estimator for 𝜃 has lower root mean squared error than the intervals estimator. Smaller values of α will tend to be associated with larger values of the run length k, though the precise nature of this relation is unclear.

We consider the non-stationary logistic model (33) with \(\alpha _{n+1} = 0.5 + 0.25 \sin \limits (2\pi n /7) \) for n ≥ 0. Note that although we have specified the same parametric form for the dependence parameter α as in the previous example for ρ, the two parameters are not directly comparable. We simulated 1000 realizations of this process, of lengths \(10^{4}\) and \(10^{5}\), and estimated 𝜃1,…,𝜃7 using maximum likelihood with k = 5, at a range of different thresholds. Table 2 shows, for the different sample sizes and thresholds considered, the 0.025 and 0.975 empirical quantiles of the parameter estimates obtained from this simulation. Although the exact values of the parameters are unknown, making evaluation of any estimator’s performance impossible, an upper bound for 𝜃i is easily obtained as \( \lim _{u\to \infty } \mathbb {P}(X_{i+1} \leq u \mid X_{i} > u) = 2^{\alpha _{i}} - 1\). In our case this gives the bounds (𝜃1,…,𝜃7) ≤ (0.41,0.62,0.67,0.52,0.31,0.19,0.24) and γ ≤ 0.42, where the relation ≤ is interpreted componentwise. It is conceivable that the methods in Smith (1992) could be adapted to the non-stationary case to allow exact computation of 𝜃i, though we do not pursue this direction here.

Table 2 0.025 and 0.975 empirical quantiles of the estimates of 𝜃1,…,𝜃7,γ, in the logistic time series model using maximum likelihood with k = 5

We also considered estimation of 𝜃i using the intervals estimator and obtained results similar to the maximum likelihood estimates. The median values of the 1000 estimates of each parameter under the different estimation methods are shown in Fig. 1 for the sample size of \(10^{5}\) and threshold q0.05. The estimators clearly recover the periodicity in the dependence structure of the sequence and, on average, respect the upper bound for 𝜃i of \(2^{\alpha _{i}} - 1\).

Fig. 1

Illustration of estimators (triangles: intervals estimator; circles: maximum likelihood with k = 5) obtained from \(10^{3}\) realizations of the non-stationary logistic model of length \(10^{5}\) and threshold q0.05. The marked points correspond to the median estimate from the \(10^{3}\) realizations of the model. The grey region contains the 0.025 and 0.975 empirical quantiles of the parameter estimates using both the intervals and maximum likelihood estimators. It is constructed by taking the pointwise maxima of the 0.975 quantiles (upper boundary) and pointwise minima of the 0.025 quantiles (lower boundary). The solid black curve shows the upper bound for 𝜃i of \(2^{\alpha _{i}}-1\)

5 Proofs

5.1 Auxiliary results

In this section we state and prove some Lemmas that are required in the proof of Theorem 1.

Lemma 3

Let \(\{t_{n}\}_{n=1}^{\infty }\) be a sequence of positive integers and ai,n, \(i, n \in \mathbb {N},\) an array of non-negative real numbers such that \(t_{n}\to \infty \) and \(A_{n} = \max \limits _{1\leq i \leq t_{n}} a_{i,n} \to 0\) as \(n\to \infty \). Then,

$$ \begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{t_{n}} a_{i,n} \to \tau \geq 0 \end{array} $$
(34)

if and only if

$$ \begin{array}{@{}rcl@{}} \prod\limits_{i=1}^{t_{n}} (1 - a_{i,n}) \to e^{-\tau}. \end{array} $$
(35)

Proof

Using the fact that \(\log (1-x) = -x + R(x)\) where \(|R(x)| < Cx^{2}\) for sufficiently small x > 0 and some constant C > 0, we have \({\sum }_{i=1}^{t_{n}}\log (1-a_{i,n}) = -{\sum }_{i=1}^{t_{n}}a_{i,n} + {\sum }_{i=1}^{t_{n}}R(a_{i,n}) \) and

$$ |\sum\limits_{i=1}^{t_{n}}R(a_{i,n})| \leq C\sum\limits_{i=1}^{t_{n}}a_{i,n}^{2} \leq CA_{n}\sum\limits_{i=1}^{t_{n}}a_{i,n}, $$
(36)

so that \({\sum }_{i=1}^{t_{n}} \log (1-a_{i,n}) = -\big ({\sum }_{i=1}^{t_{n}}a_{i,n} \big )\big (1 + o(1)\big )\) or equivalently

$$ \log\prod\limits_{i=1}^{t_{n}}(1-a_{i,n}) = -\big(\sum\limits_{i=1}^{t_{n}}a_{i,n} \big)\big(1 + o(1)\big), $$
(37)

from which the result follows. □

Lemma 4

Let \(\{t_{n}\}_{n=1}^{\infty }\) be a sequence of positive integers and ai,n, \(i, n \in \mathbb {N},\) an array of non-negative real numbers such that \(t_{n}\to \infty \), \(A_{n} = \max \limits _{1\leq i \leq t_{n}} a_{i,n} \to 0\) and \({\sum }_{i=1}^{t_{n}}a_{i,n}\) is bounded above as \(n\to \infty \). Then,

$$ \prod\limits_{i=1}^{t_{n}} (1 - a_{i,n}) - \text{exp}\bigg({-\sum\limits_{i=1}^{t_{n}} a_{i,n}} \bigg) \to 0. $$
(38)

Proof

This follows from Lemma 3 by considering subsequences along which \({\sum }_{i=1}^{t_{n}} a_{i,n}\) converges. □

Lemma 5

Let \(g:\mathbb {R} \to \mathbb {R}\) be a bounded function. If f(x) = A(x)g(x) and \(\lim _{x\to \infty }A(x) = 1,\) then f(x) = g(x) + o(1) as \(x\to \infty \).

Proof

As g is bounded, there exists M > 0 such that |g(x)| < M for all \(x \in \mathbb {R}\). Now let 𝜖 > 0. As \(\lim _{x\to \infty }A(x) = 1,\) there exists x0 such that

$$ |A(x) - 1 | < \epsilon/M \quad \text{for } x>x_{0}. $$

Then for x > x0

$$ | f(x) - g(x) | = |g(x)||A(x) - 1 | < M\epsilon/M = \epsilon. $$

□

Lemma 6

Let \(\{X_{n}\}_{n=1}^{\infty }\), xn, pn and qn be as in Lemma 2 such that Eq. 7 holds and assume \(\mathbb {P}(M_{n} \leq x_{n}) \to L \in (0,1)\). Let sn be such that pn = o(sn), sn = o(n) and tn = ⌊n/(sn + qn)⌋. Then

$$ \sum\limits_{i=1}^{t_{n}} \mathbb{P}(M^{i}_{s_{n}-p_{n},s_{n}} > x_{n}) = o\bigg(\sum\limits_{i=1}^{t_{n}}\mathbb{P}(M^{i}_{0,s_{n}-p_{n}} > x_{n}, M^{i}_{s_{n}-p_{n}, s_{n}} \leq x_{n} )\bigg) $$
(39)

where \(M^{i}_{j,k} = {\max \limits } \{X^{i}_{j+1},\ldots , {X^{i}_{k}} \}\) and \({X^{i}_{j}} = X_{(i-1) (s_{n}+q_{n})+j}\).

Proof

We first note that Lemma 2 also holds with blocks of length sn, i.e., with sn in place of pn in the definition of Ai in Eq. 8 and tn in place of rn. Thus from Eq. 10, with blocks of length sn, we have that \( \mathbb {P}(M_{n} \leq x_{n}) - {\prod }_{i=1}^{t_{n}}\mathbb {P}(M(A_{i}) \leq x_{n}) \to 0\) so that \({\prod }_{i=1}^{t_{n}}\mathbb {P}(M(A_{i}) \leq x_{n}) \to L \in (0,1),\) or equivalently

$$ \sum\limits_{i=1}^{t_{n}}\log\big(1 - \mathbb{P}(M(A_{i}) > x_{n})\big)\to \log(L). $$
(40)

Now we note that \(\max \limits _{1\leq i \leq t_{n}}\mathbb {P}(M(A_{i}) > x_{n}) \to 0\) since \(\mathbb {P}(M(A_{i}) > x_{n}) \leq s_{n}\bar {F}(x_{n})\) and sn = o(n) and \(\bar {F}(x_{n}) = \tau n^{-1}+o(n^{-1})\). Thus, using \(\log (1-t) = -t + o(t)\) as t → 0, Eq. 40 may be written

$$ -\sum\limits_{i=1}^{t_{n}}\mathbb{P}(M(A_{i}) > x_{n}) + \sum\limits_{i=1}^{t_{n}}o(\mathbb{P}(M(A_{i}) > x_{n})) \to \log(L). $$
(41)

Now it is easily seen that the second sum in Eq. 41 converges to zero since

$$ \begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{t_{n}} o(\mathbb{P}(M(A_{i}) > x_{n})) = \sum\limits_{i=1}^{t_{n}}o\big(s_{n}\bar{F}(x_{n})\big) & \leq& t_{n} s_{n}\bar{F}(x_{n})o(1) \\ & \leq&\frac{s_{n}}{s_{n} + q_{n}}n\bar{F}(x_{n})o(1) \to 0, \end{array} $$

and so Eq. 41 implies \({\sum }_{i=1}^{t_{n}}\mathbb {P}(M(A_{i}) > x_{n}) \to -\log (L).\) Now, decomposing the event {M(Ai) > xn} as a disjoint union we get

$$ \sum\limits_{i=1}^{t_{n}} \mathbb{P}(M^{i}_{0,s_{n}-p_{n}} > x_{n}, M^{i}_{s_{n}-p_{n},s_{n}} \leq x_{n}) + \sum\limits_{i=1}^{t_{n}} \mathbb{P}(M^{i}_{s_{n}-p_{n},s_{n}} > x_{n}) \to -\log(L). $$
(42)

Again, the second sum in Eq. 42 goes to zero since it is bounded above by \(t_{n} p_{n}\bar {F}(x_{n}) \leq \{p_{n}/(s_{n}+q_{n})\}n\bar {F}(x_{n}) \to 0,\) from which Eq. 39 follows. □

5.2 Proof of Lemma 1

We use induction on the number of subintervals, k. The case k = 2 is just the fact that \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn). Assuming that the result is true for k such subintervals, we verify that it also holds for the k + 1 intervals I1,I2,…,Ik,Ik+ 1. Let \(I_{1}^{*}\) be the interval separating I1 and I2 and let \(J= I_{1} \cup I_{1}^{*} \cup I_{2}\). We note that J is an interval with |J| > qn, and since \(\{M(J \cup \cup _{i=3}^{k+1}I_{i}) \leq x_{n} \} \subseteq \{M(\cup _{i=1}^{k+1}I_{i})\leq x_{n}\}\) we have

$$ 0 \leq \mathbb{P}(M(\cup_{i=1}^{k+1}I_{i}) \leq x_{n}) - \mathbb{P}(M\big(J \cup\cup_{i=3}^{k+1}I_{i}\big) \leq x_{n}) \leq \mathbb{P}(M(I_{1}^{*}) > x_{n}) \leq q_{n}\bar{F}(x_{n}), $$
(43)

so we may write \(\mathbb {P}(M(\cup _{i=1}^{k+1}I_{i}) \leq x_{n}) = \mathbb {P}(M\big (J \cup \cup _{i=3}^{k+1}I_{i}\big ) \leq x_{n}) + R_{1,n}\) where the remainder R1,n satisfies \(R_{1,n} \leq q_{n}\bar {F}(x_{n})\). We then have

$$ \begin{array}{@{}rcl@{}} && \big|\mathbb{P}(M(\cup_{i=1}^{k+1}I_{i}) \leq x_{n}) - \prod\limits_{i=1}^{k+1} \mathbb{P}(M(I_{i}) \leq x_{n}) \big| \end{array} $$
(44)
$$ \begin{array}{@{}rcl@{}} &= & \big|\mathbb{P}(M\big(J \cup\cup_{i=3}^{k+1}I_{i}\big) \leq x_{n}) - \prod\limits_{i=1}^{k+1} \mathbb{P}(M(I_{i}) \leq x_{n}) + R_{1,n} \big| \end{array} $$
(45)
$$ \begin{array}{@{}rcl@{}} &= &\big|\mathbb{P}(M(J)\!\leq\! x_{n})\prod\limits_{i=3}^{k+1} \mathbb{P}(M(I_{i}) \!\leq\! x_{n}) - \prod\limits_{i=1}^{k+1} \mathbb{P}(M(I_{i}) \!\leq\! x_{n}) + R_{1,n} + R_{2,n} \big| \end{array} $$
(46)
$$ \begin{array}{@{}rcl@{}} &\leq & \big|\mathbb{P}(M(J)\leq x_{n}) - \mathbb{P}(M(I_{1})\leq x_{n})\mathbb{P}(M(I_{2})\leq x_{n})\big| + |R_{1,n}| + |R_{2,n}| \end{array} $$
(47)

where \(|R_{2,n}| \leq (k-1)\alpha _{n} + 2(k-2)q_{n}\bar {F}(x_{n})\) and we have used our induction hypothesis to get (46) since \(J \cup \cup _{i=3}^{k+1}I_{i}\) is a union of k intervals with adjacent intervals separated by qn. Now note that since \(\{M(J)\leq x_{n}\} \subseteq \{M(I_{1}\cup I_{2})\leq x_{n}\}\) we have \(0 \leq \mathbb {P}(M(I_{1}\cup I_{2})\leq x_{n}) - \mathbb {P}(M(J)\leq x_{n}) \leq q_{n}\bar {F}(x_{n})\) and we may write \(\mathbb {P}(M(J)\leq x_{n}) = \mathbb {P}(M(I_{1}\cup I_{2})\leq x_{n}) + R_{3,n}\) where \(|R_{3,n}| \leq q_{n}\bar {F}(x_{n})\). Thus from Eqs. 44 and 47 we have

$$ \begin{array}{@{}rcl@{}} && \big|\mathbb{P}(M(\cup_{i=1}^{k+1}I_{i}) \leq x_{n}) - \prod\limits_{i=1}^{k+1} \mathbb{P}(M(I_{i}) \leq x_{n}) \big| \\ &\leq & \big|\mathbb{P}(M(I_{1}\cup I_{2})\!\leq\! x_{n}) - \mathbb{P}(M(I_{1})\leq x_{n})\mathbb{P}(M(I_{2})\leq x_{n}) + R_{3,n} \big| + |R_{1,n}| + |R_{2,n}| \\ &\leq & \big|\mathbb{P}(M(I_{1}\cup I_{2})\!\leq\! x_{n}) - \mathbb{P}(M(I_{1})\!\leq\! x_{n})\mathbb{P}(M(I_{2})\!\leq\! x_{n}) \big| + |R_{1,n}| + |R_{2,n}| + |R_{3,n}| \\ &\leq & \alpha_{n} + q_{n}\bar{F}(x_{n}) + (k-1)\alpha_{n} + 2(k-2)q_{n}\bar{F}(x_{n}) + q_{n}\bar{F}(x_{n}) \\ &= & k\alpha_{n} + 2(k-1)q_{n}\bar{F}(x_{n}) \end{array} $$

as required.
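Although the proof is complete, the factorization in Lemma 1 is easy to probe numerically. The sketch below (illustrative only; the process, threshold and display-only interval choices are hypothetical) uses a 2-dependent moving-maximum process, for which block maxima over intervals separated by more than two indices are exactly independent, so the factorization error of the lemma is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Monte Carlo check of Lemma 1 (not part of the proof):
# for the hypothetical 2-dependent moving maximum
# X_t = max(Z_t, Z_{t-1}, Z_{t-2}) with Z_t iid U(0,1), block maxima
# over intervals separated by more than 2 indices are exactly
# independent, so the joint probability factorizes up to MC error.
def sample_paths(n, reps):
    z = rng.uniform(size=(reps, n + 2))
    return np.maximum(np.maximum(z[:, 2:], z[:, 1:-1]), z[:, :-2])

n, reps, x = 500, 20_000, 0.995
X = sample_paths(n, reps)
# Three intervals of length 150, adjacent ones separated by gaps of 25.
I = [slice(0, 150), slice(175, 325), slice(350, 500)]
joint = np.mean(np.all([X[:, s].max(axis=1) <= x for s in I], axis=0))
prod = np.prod([np.mean(X[:, s].max(axis=1) <= x) for s in I])
print(f"joint = {joint:.4f}, product = {prod:.4f}")  # agree up to MC error
```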

5.3 Proof of Lemma 2

As \(\{M_{n} \leq x_{n}\} \subseteq \cap _{i=1}^{r_{n}}\{M(A_{i})\leq x_{n}\}\) we have

$$ \begin{array}{@{}rcl@{}} 0 \leq \mathbb{P}(M(\cup_{i=1}^{r_{n}}A_{i}) \leq x_{n}) - \mathbb{P}(M_{n} \leq x_{n}) & \leq& \mathbb{P}(M(\cup_{i=1}^{r_{n}}A_{i}^{*}) > x_{n}) \\ & \leq& r_{n} q_{n} \bar{F}(x_{n}) \\ &\leq& n(p_{n}+q_{n})^{-1}q_{n}\{\tau n^{-1}+o(n^{-1})\} \to 0. \end{array} $$
(48)

Also, by Lemma 1 we have

$$ \big|\mathbb{P}(M(\cup_{i=1}^{r_{n}}A_{i}) \leq x_{n}) - \prod\limits_{i=1}^{r_{n}} \mathbb{P}(M(A_{i}) \leq x_{n}) \big| \leq (r_{n}-1)\alpha_{n} + 2(r_{n}-2)q_{n}\bar{F}(x_{n}) \to 0 $$
(49)

so that the triangle inequality and Eqs. 48 and 49 give the result.
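A companion sketch (again illustrative, with the same hypothetical process and display-only block sizes) shows the effect quantified by Lemma 2: discarding the gap blocks \(A_{i}^{*}\) changes the non-exceedance probability only by the o(1) term bounded in Eq. 48.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sketch of Lemma 2 (hypothetical 2-dependent moving
# maximum as before): P(M_n <= x_n) versus the product of the r_n
# block probabilities when the gap blocks A_i^* are discarded.
def sample_paths(n, reps):
    z = rng.uniform(size=(reps, n + 2))
    return np.maximum(np.maximum(z[:, 2:], z[:, 1:-1]), z[:, :-2])

n, p, q, x = 500, 90, 10, 0.995          # r_n = 5 blocks A_i of length 90
X = sample_paths(n, 20_000)
A = [slice(i * (p + q), i * (p + q) + p) for i in range(5)]
full = np.mean(X.max(axis=1) <= x)
blocked = np.prod([np.mean(X[:, s].max(axis=1) <= x) for s in A])
print(f"P(M_n <= x_n) ~= {full:.4f}, product over A_i ~= {blocked:.4f}")
```

The gap between the two printed values reflects the discarded blocks and shrinks as \(q_{n}/(p_{n}+q_{n}) \to 0\), in line with the bound in Eq. 48.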

5.4 Proof of Theorem 1

In addition to the notation defined in Section 2.1, we also define

$$ {X^{i}_{j}} = X_{(i-1) (p_{n}+q_{n})+j}, M^{i}_{j,k} = \max \{X^{i}_{j+1},\ldots, {X^{i}_{k}} \}. $$
(50)

Now, for i = 1,…,rn, we have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M(A_{i}) \leq x_{n}) &=& 1 - \mathbb{P}(M(A_{i}) > x_{n}) \\ &=& 1 - \sum\limits_{j=1}^{p_{n}} \mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,p_{n}} \leq x_{n}) \\ &\leq& 1 - \sum\limits_{j=1}^{p_{n}} \mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) \\ &\leq& \exp\bigg\{-\sum\limits_{j=1}^{p_{n}} \mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n})\bigg\} \end{array} $$

and so

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{n} \leq x_{n}) &=& \prod\limits_{i=1}^{r_{n}} \mathbb{P}(M(A_{i}) \leq x_{n}) + o(1) \\ &\leq& \exp\bigg\{-\sum\limits_{i=1}^{r_{n}}\sum\limits_{j=1}^{p_{n}} \mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) \bigg\} + o(1). \end{array} $$
(51)

Now we note that

$$ \sum\limits_{j=1}^{n} \mathbb{P}(X_{j}> x_{n}, M_{j,j+p_{n}} \leq x_{n}) = \sum\limits_{i=1}^{r_{n}}\sum\limits_{j=1}^{p_{n}}\mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) + o(1) $$
(52)

since the difference between the two sums is

$$ \begin{array}{@{}rcl@{}} \sum\limits_{j=1}^{n} \mathbb{P}(X_{j}> x_{n}, M_{j,j+p_{n}} \leq x_{n}) & -& \sum\limits_{i=1}^{r_{n}}\sum\limits_{j=1}^{p_{n}}\mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) \\ &=& \sum\limits_{i=1}^{r_{n}}\sum\limits_{j=p_{n}+1}^{p_{n}+q_{n}} \mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) + o(1) \\ & \leq& r_{n} q_{n} \bar{F}(x_{n}) + o(1) \\ &\leq& \frac{q_{n}}{p_{n}+q_{n}} n \bar{F}(x_{n}) + o(1) \rightarrow 0 \end{array} $$

so that Eq. 51 gives

$$ \mathbb{P}(M_{n} \leq x_{n}) \leq \exp\bigg\{-\sum\limits_{j=1}^{n} \mathbb{P}(X_{j} > x_{n}, M_{j,j+p_{n}} \leq x_{n}) \bigg\} + o(1). $$
(53)

Now we prove the reverse inequality of Eq. 53, i.e.,

$$ \mathbb{P}(M_{n} \leq x_{n}) \geq \exp\bigg\{-\sum\limits_{j=1}^{n} \mathbb{P}(X_{j} > x_{n}, M_{j,j+p_{n}} \leq x_{n}) \bigg\} + o(1). $$
(54)

We will show that Eq. 54 holds under the assumption that \(\mathbb {P}(M_{n} \leq x_{n})\) converges to some L in [0,1]; the general case follows by considering subsequences along which \(\mathbb {P}(M_{n} \leq x_{n})\) converges and repeating the argument below. By Lemma 2, specifically (10), and Lemma 4 with \(a_{i,n} = \mathbb {P}(M(A_{i}) > x_{n})\), we see that L > 0, and since (54) holds trivially when L = 1 we may assume L ∈ (0,1). Following O’Brien (1987), we introduce a new sequence sn = o(n), which plays the role of pn, such that pn = o(sn), and let tn = ⌊n/(sn + qn)⌋, which now plays the role of rn; note that tn = o(rn), and that the definitions in Eqs. 50 and 8 are modified by replacing pn with sn. Then for i = 1,…,tn, we have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M(A_{i}) > x_{n}) &= \mathbb{P}(M^{i}_{0,s_{n}-p_{n}} > x_{n}, M^{i}_{s_{n}-p_{n},s_{n}} \leq x_{n}) + \mathbb{P}(M^{i}_{s_{n}-p_{n},s_{n}} > x_{n}) \end{array} $$

and consequently, since Lemma 2 holds with sn in place of pn and tn in place of rn,

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{n} \leq x_{n}) &=& \prod\limits_{i=1}^{t_{n}} \mathbb{P}(M(A_{i})\leq x_{n}) + o(1) = \prod\limits_{i=1}^{t_{n}}\bigg\{1 - \mathbb{P}(M(A_{i}) > x_{n})\bigg\} + o(1) \\ &=& \prod\limits_{i=1}^{t_{n}}\bigg\{1 - \mathbb{P}(M^{i}_{0,s_{n}-p_{n}}>x_{n}, M^{i}_{s_{n}-p_{n},s_{n}}\leq x_{n}) - \mathbb{P}(M^{i}_{s_{n}-p_{n}, s_{n}} > x_{n})\bigg\} + o(1). \end{array} $$
(55)
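The role of Lemma 4 here is to convert the product in Eq. 55 into an exponential: when \(\max _{i} a_{i,n} \to 0\) while \({\sum }_{i} a_{i,n}\) stays bounded, \(\prod _{i}(1-a_{i,n}) = \exp \{-{\sum }_{i} a_{i,n}\}(1+o(1))\). A short numerical sketch (with hypothetical \(a_{i,n} = 1/t_{n}\), purely for display) makes the approximation visible before we verify its hypotheses.

```python
import numpy as np

# Illustration of the product-to-exponential step (Lemma 4) with
# hypothetical a_{i,n} = 1/t_n: max_i a_{i,n} -> 0 while the sum stays
# bounded, and the product approaches exp(-sum_i a_{i,n}).
for t_n in [10, 100, 1_000, 10_000]:
    a = np.full(t_n, 1.0 / t_n)
    print(f"t_n = {t_n:>6}: prod = {np.prod(1 - a):.6f}, "
          f"exp(-sum) = {np.exp(-a.sum()):.6f}")
```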

Now, with \(a_{i,n} = \mathbb {P}(M^{i}_{0,s_{n}-p_{n}}>x_{n}, M^{i}_{s_{n}-p_{n},s_{n}}\leq x_{n}) + \mathbb {P}(M^{i}_{s_{n}-p_{n}, s_{n}} > x_{n}) \), we have \(a_{i,n} \leq s_{n}\bar {F}(x_{n})\) for all i, so that \(\max \limits _{1\leq i \leq t_{n}}a_{i,n} \leq s_{n}\bar {F}(x_{n}) \to 0\) since sn = o(n) and \(\bar {F}(x_{n}) = \tau n^{-1} + o(n^{-1})\). Also, \({\sum }_{i=1}^{t_{n}}a_{i,n} \leq t_{n}\max \limits _{1\leq i \leq t_{n}}a_{i,n} \leq s_{n}(s_{n}+q_{n})^{-1}n\bar {F}(x_{n})\), which is bounded above as \(n\to \infty \). Thus we may apply Lemma 4 to Eq. 55 to get

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{n} \leq x_{n}) &=& \exp\bigg\{-\sum\limits_{i=1}^{t_{n}} \bigg(\mathbb{P}(M^{i}_{0,s_{n}-p_{n}}>x_{n}, M^{i}_{s_{n}-p_{n},s_{n}}\leq x_{n}) + \mathbb{P}(M^{i}_{s_{n}-p_{n}, s_{n}} > x_{n}) \bigg)\bigg\} + o(1) \\ &=& \exp\bigg\{-\bigg(\sum\limits_{i=1}^{t_{n}}\mathbb{P}(M^{i}_{0,s_{n}-p_{n}}>x_{n}, M^{i}_{s_{n}-p_{n},s_{n}}\leq x_{n})\bigg)\bigg(1 + o(1) \bigg) \bigg\} + o(1) \end{array} $$
(56)

with Eq. 56 following from Lemma 6. Now Lemma 5 reduces (56) to

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{n} \leq x_{n}) &=& \exp\bigg\{-\sum\limits_{i=1}^{t_{n}}\mathbb{P}(M^{i}_{0,s_{n}-p_{n}}>x_{n}, M^{i}_{s_{n}-p_{n},s_{n}}\leq x_{n}) \bigg\} + o(1) \end{array} $$
(57)
$$ \begin{array}{@{}rcl@{}} & \geq & \exp\bigg\{-\sum\limits_{i=1}^{t_{n}}\sum\limits_{j=1}^{s_{n}}\mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) \bigg\} + o(1) \end{array} $$
(58)

with Eq. 58 following from Eq. 57 by the inclusions \(\{ M^{i}_{0, s_{n}-p_{n}} > x_{n}, M^{i}_{s_{n}-p_{n},s_{n}} \leq x_{n}\} \subseteq \bigcup _{j=1}^{s_{n}-p_{n}}\{{X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n} \} \subseteq \bigcup _{j=1}^{s_{n}}\{{X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n} \}\) and the union bound. An argument similar to the one used to show Eq. 52 gives

$$ \sum\limits_{j=1}^{n} \mathbb{P}(X_{j} > x_{n}, M_{j,j+p_{n}} \leq x_{n}) = \sum\limits_{i=1}^{t_{n}}\sum\limits_{j=1}^{s_{n}}\mathbb{P}({X^{i}_{j}} > x_{n}, M^{i}_{j,j+p_{n}} \leq x_{n}) + o(1) $$
(59)

so that Eq. 58 becomes

$$ \mathbb{P}(M_{n} \leq x_{n}) \geq \exp\bigg\{-\sum\limits_{j=1}^{n} \mathbb{P}(X_{j} > x_{n}, M_{j,j+p_{n}} \leq x_{n}) \bigg\} + o(1) $$
(60)

and so Eqs. 53 and 60 together prove (11). Also, since

$$ \begin{array}{@{}rcl@{}} \exp\bigg\{-\sum\limits_{j=1}^{n} \mathbb{P}(X_{j} > x_{n}, M_{j,j+p_{n}} \leq x_{n}) \bigg\} = \bigg[\exp\big\{-n\bar{F}(x_{n}) \big\}\bigg]^{\gamma_{n}} \end{array} $$

with \(\gamma _{n} = n^{-1}{\sum }_{j=1}^{n}\mathbb {P}(M_{j,j+p_{n}} \leq x_{n} \mid X_{j} > x_{n})\), this also gives (12).
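To see the approximation in (12) in action, the following Monte Carlo sketch (illustrative only; the process and all tuning choices are hypothetical) estimates γn for a 3-term moving maximum of iid U(0,1) variables, for which \(F(x) = x^{3}\) and θj = 1/3, and compares \(F(x_{n})^{n\gamma _{n}}\) with the empirical \(\mathbb {P}(M_{n} \leq x_{n})\); the residual gap shrinks as n grows with \(p_{n}\bar {F}(x_{n}) \to 0\).

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative Monte Carlo estimate of gamma_n in Eq. 12 for the
# hypothetical 3-term moving maximum X_t = max(Z_t, Z_{t-1}, Z_{t-2})
# of iid U(0,1), where F(x) = x**3 and theta_j = 1/3. The choices of
# n, p and x below are for display only.
n, p, x, reps = 400, 5, 0.995, 20_000
z = rng.uniform(size=(reps, n + p + 2))
X = np.maximum(np.maximum(z[:, 2:], z[:, 1:-1]), z[:, :-2])  # length n + p
num = den = 0
for j in range(n):
    exc = X[:, j] > x                       # exceedances at position j
    den += exc.sum()
    num += (X[exc, j + 1 : j + 1 + p].max(axis=1) <= x).sum()
gamma_n = num / den                         # ~ n^{-1} sum_j theta_{j,n}
print(f"gamma_n ~= {gamma_n:.3f} (theta = 1/3 in the limit)")
print(f"F(x)^(n*gamma_n) = {(x**3) ** (n * gamma_n):.3f}, "
      f"P(M_n <= x) ~= {np.mean(X[:, :n].max(axis=1) <= x):.3f}")
```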

5.5 Proof of Theorem 2

Throughout we let \(\theta _{i,n} = \mathbb {P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i} > x_{n})\). From Corollary 1 we know that \(\mathbb {P}(M_{n} \leq x_{n}) \to e^{-\tau \gamma ^{\prime }}\) with \(\gamma ^{\prime } = \lim _{n\to \infty }n^{-1}{\sum }_{i=1}^{n}\theta _{i,n}\), and this limit is easily seen to coincide with \(\lim _{n\to \infty }n^{-1}{\sum }_{i=1}^{n}\theta _{i}\) since

$$ \begin{array}{@{}rcl@{}} | n^{-1}\sum\limits_{i=1}^{n}\theta_{i} - n^{-1}\sum\limits_{i=1}^{n}\theta_{i,n} | & \leq& n^{-1} \sum\limits_{i=1}^{n}|\theta_{i,n} - \theta_{i} | \\ & \leq& \max_{1\leq i \leq n} |\theta_{i,n} - \theta_{i}| \to 0. \end{array} $$
(61)

This establishes the first part of the theorem.

To show that \(\mathbb {P}(M_{n} \leq y_{n}) \to e^{-\tau ^{\prime }\gamma }\), define \(n^{\prime } = \lfloor { (\tau ^{\prime }/\tau )n}\rfloor \) so that \(n^{\prime }\bar {F}(x_{n}) \to \tau ^{\prime }\), or equivalently \(F(x_{n})^{n^{\prime }} \to e^{-\tau ^{\prime }}\). Then,

$$ \begin{array}{@{}rcl@{}} | \mathbb{P}(M_{n^{\prime}} \leq x_{n}) - \mathbb{P}(M_{n^{\prime}} \leq y_{n^{\prime}}) | & \leq& n^{\prime} | F(x_{n}) - F(y_{n^{\prime}}) | \\ & =& n^{\prime} | \bar{F}(x_{n}) - \bar{F}(y_{n^{\prime}}) | \to 0. \end{array} $$
(62)

Since \(n^{\prime } \leq n\) we have by Theorem 1 that \(\mathbb {P}(M_{n^{\prime }} \leq x_{n}) = F(x_{n})^{n^{\prime }\gamma _{n}^{\prime }} + o(1)\), where \(\gamma _{n}^{\prime } = (n^{\prime })^{-1}{\sum }_{i=1}^{n^{\prime }} \theta _{i,n}\); moreover, \(\gamma _{n}^{\prime } \) has limiting value \(\lim _{n\to \infty }n^{-1}{\sum }_{i=1}^{n}\theta _{i} \) since \(| (n^{\prime })^{-1}{\sum }_{i=1}^{n^{\prime }} \theta _{i,n} - (n^{\prime })^{-1}{\sum }_{i=1}^{n^{\prime }} \theta _{i} | \to 0\) as in Eq. 61. Then, since \(F(x_{n})^{n^{\prime }} \to e^{-\tau ^{\prime }}\), we have \(\mathbb {P}(M_{n^{\prime }} \leq x_{n}) \to e^{-\tau ^{\prime }\gamma }\), and so from Eq. 62, \(\mathbb {P}(M_{n} \leq y_{n}) \to e^{-\tau ^{\prime }\gamma }\) as well, with γ as in Eq. 17.
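The thinning step above is elementary but easy to visualize. Assuming (hypothetically) the exact tail \(\bar {F}(x_{n}) = \tau /n\), with τ = 2 and τ′ = 1/2 one can watch \(n^{\prime }\bar {F}(x_{n}) \to \tau ^{\prime }\) and \(F(x_{n})^{n^{\prime }} \to e^{-\tau ^{\prime }}\).

```python
import numpy as np

# Illustration of the thinning construction in the proof of Theorem 2,
# assuming (hypothetically) the exact tail Fbar(x_n) = tau / n.
tau, tau_p = 2.0, 0.5
for n in [10**2, 10**4, 10**6]:
    n_p = int((tau_p / tau) * n)          # n' = floor((tau'/tau) n)
    Fbar = tau / n
    print(f"n = {n:>8}: n'*Fbar = {n_p * Fbar:.4f}, "
          f"F(x_n)^n' = {(1 - Fbar) ** n_p:.4f}, "
          f"exp(-tau') = {np.exp(-tau_p):.4f}")
```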

5.6 Proof of Theorem 3

The first step in the proof is to show that we have, for each integer k ≥ 1,

$$ \mathbb{P}(M_{n} \leq x_{n}) - \mathbb{P}^{k}(M_{n^{\prime}} \leq x_{n}) \to 0 $$
(63)

where \(n^{\prime } = \lfloor {n/k}\rfloor \). To do this we define intervals Ii and \(I_{i}^{*}\), 1 ≤ ik, of alternating large and small lengths as follows,

$$ I_{i} = \{(i-1)n^{\prime} + 1, \ldots, in^{\prime} - q_{n}\}, \quad I_{i}^{*} = \{in^{\prime}-q_{n}+1, \ldots, in^{\prime}\}. $$
(64)

We show that

$$ \begin{array}{@{}rcl@{}} && \quad \big| \mathbb{P}(M_{n} \leq x_{n}) - \mathbb{P}(M(\cup_{i=1}^{k}I_{i}) \leq x_{n})\big| \to 0, \end{array} $$
(65)
$$ \begin{array}{@{}rcl@{}} && \quad \big| \mathbb{P}(M(\cup_{i=1}^{k}I_{i}) \leq x_{n}) - \prod\limits_{i=1}^{k}\mathbb{P}(M(I_{i})\leq x_{n})\big| \to 0, \end{array} $$
(66)
$$ \begin{array}{@{}rcl@{}} && \quad \big| \prod\limits_{i=1}^{k}\mathbb{P}(M(I_{i})\leq x_{n}) - \prod\limits_{i=1}^{k}\mathbb{P}(M(I_{i}\cup I_{i}^{*})\leq x_{n}) \big| \to 0, \end{array} $$
(67)

and

$$ \begin{array}{@{}rcl@{}} \big| \prod\limits_{i=1}^{k}\mathbb{P}(M(I_{i}\cup I_{i}^{*})\leq x_{n}) - \mathbb{P}^{k}(M_{n^{\prime}} \leq x_{n})\big| \to 0 , \end{array} $$
(68)

from which Eq. 63 follows by the triangle inequality.

Equation 65 follows from the inclusion \(\{M_{n} \leq x_{n}\} \subseteq \{M(\cup _{i=1}^{k}I_{i}) \leq x_{n}\}\), which gives \(\mathbb {P}(M(\cup _{i=1}^{k}I_{i}) \leq x_{n}) - \mathbb {P}(M_{n} \leq x_{n}) \leq \mathbb {P}(\cup _{i=1}^{k}\{M(I_{i}^{*}) > x_{n}\}) \leq kq_{n}\bar {F}(x_{n}) \to 0\) since qn = o(n) and \(\bar {F}(x_{n}) = \tau /n + o(n^{-1}).\)

Equation 66 follows immediately from Lemma 1, as \(I_{j}\), 1 ≤ j ≤ k, are disjoint subintervals of {1,…,n}, and \(I_{i}\) and \(I_{i+1}\) are separated by qn.

Equation 67 follows from \(\{M(I_{i} \cup I_{i}^{*}) \leq x_{n} \} \subseteq \{M(I_{i}) \leq x_{n}\}\) and \(0\leq \mathbb {P}(M(I_{i}) \leq x_{n}) - \mathbb {P}(M(I_{i} \cup I_{i}^{*}) \leq x_{n}) \leq q_{n}\bar {F}(x_{n}) \to 0\) so that \( \mathbb {P}(M(I_{i}) \leq x_{n}) = \mathbb {P}(M(I_{i} \cup I_{i}^{*}) \leq x_{n}) + o(1)\) for 1 ≤ ik.

For Eq. 68, we first note that \(\mathbb {P}(M(I_{1}\cup I_{1}^{*})\leq x_{n}) = \mathbb {P}(M_{n^{\prime }} \leq x_{n}).\) Since \(I_{i}\cup I_{i}^{*}\) is an interval of length \(n^{\prime }\), 1 ≤ i ≤ k, we have by periodicity that \(M(I_{i}\cup I_{i}^{*}) \overset {D}{=} M_{r,r+n^{\prime }}\) for some r ∈{0,1,…,d − 1}. Then for 2 ≤ i ≤ k, we have

$$ \begin{array}{@{}rcl@{}} | \mathbb{P}(M(I_{i}\cup I_{i}^{*}) \leq x_{n}) - \mathbb{P}(M(I_{1}\cup I_{1}^{*}) \leq x_{n}) | &=& | \mathbb{P}(M_{r,r+n^{\prime}} \leq x_{n}) - \mathbb{P}(M_{n^{\prime}} \leq x_{n}) | \\ & \leq& | \mathbb{P}(M_{r+n^{\prime}} \leq x_{n}) - \mathbb{P}(M_{n^{\prime}} \leq x_{n}) | + \\ & &\phantom{==} | \mathbb{P}(M_{r,r+n^{\prime}} \leq x_{n}) - \mathbb{P}(M_{r+n^{\prime}} \leq x_{n}) | \\ & \leq& r\bar{F}(x_{n}) + r\bar{F}(x_{n}) \\ & \leq& 2d\bar{F}(x_{n}) \to 0. \end{array} $$

Hence \({\prod }_{i=1}^{k}\mathbb {P}(M(I_{i}\cup I_{i}^{*})\leq x_{n}) = {\prod }_{i=1}^{k}\mathbb {P}(M(I_{1}\cup I_{1}^{*})\leq x_{n}) + o(1) = \mathbb {P}^{k}(M_{n^{\prime }} \leq x_{n}) + o(1),\) which establishes (63).

Now we note that since \(\{X_{n}\}_{n=1}^{\infty }\) satisfies AIM(xn) with \(n\bar {F}(x_{n}) \to \tau \), it also satisfies AIM(yn) whenever \(n\bar {F}(y_{n}) \to \tau ^{\prime } \leq \tau \). This follows in exactly the same way as for the D(xn) condition; see, e.g., Lemma 3.6.2 in Leadbetter et al. (1983). This fact, together with Eq. 63, allows the proof to proceed in exactly the same manner as the proof of Theorem 3.7.1 in Leadbetter et al. (1983).
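As with the earlier lemmas, Eq. 63 can be checked by simulation. The sketch below (illustrative only; the 2-dependent moving maximum, threshold and k = 3 are hypothetical display choices) compares \(\mathbb {P}(M_{n} \leq x_{n})\) with \(\mathbb {P}^{k}(M_{n^{\prime }} \leq x_{n})\) for \(n^{\prime } = \lfloor {n/k}\rfloor \).

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative Monte Carlo check of Eq. 63 for the hypothetical
# 2-dependent moving maximum X_t = max(Z_t, Z_{t-1}, Z_{t-2}) of iid
# U(0,1): P(M_n <= x_n) versus P(M_{n'} <= x_n)**k with n' = n // k.
def p_max_below(n, x, reps):
    z = rng.uniform(size=(reps, n + 2))
    X = np.maximum(np.maximum(z[:, 2:], z[:, 1:-1]), z[:, :-2])
    return np.mean(X.max(axis=1) <= x)

n, k, x, reps = 600, 3, 0.997, 20_000
print(f"P(M_n <= x) ~= {p_max_below(n, x, reps):.4f}, "
      f"P(M_n' <= x)^k ~= {p_max_below(n // k, x, reps) ** k:.4f}")
```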

5.7 Proof of Theorem 4

For \(n, q \in \mathbb {N}\) and \(u\in \mathbb {R}\), we define the mixing coefficients αn,q(u) by

$$ \alpha_{n,q}(u) = \max \big|\mathbb{P}\big(M(I_{1} \cup I_{2}) \leq u \big) - \mathbb{P}\big(M(I_{1}) \leq u\big)\mathbb{P}\big(M(I_{2}) \leq u\big)\big| $$

where the maximum is taken over intervals I1 and I2 that are separated by q, with \(\min \limits \{|I_{1}|, |I_{2}|\} \geq q\), \(\min \limits \{\min \limits (I_{1}), \min \limits (I_{2})\} \geq 1\) and \(\max \limits \{\max \limits (I_{1}), \max \limits (I_{2})\} \leq n\).

We first note that since both qn = o(n) and \(0 \leq \alpha _{n} = \alpha _{n,q_{n}}(x_{n}) \leq \alpha ^{*}_{n,q_{n}}(x_{n}) \to 0\), we can find a sequence of positive integers pn = o(n) such that nαn = o(pn) and qn = o(pn) so that the conditions of Theorem 1 are satisfied.

Let t > 0 and write \(k_{n} = \lfloor {t/\bar {F}(x_{n})}\rfloor \sim tn/\tau \) so that for sufficiently large n, kn > pn + qn. Now, fix \(i\in \mathbb {N}\). For sufficiently large n we have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{i+p_{n},i+p_{n}+q_{n}} > x_{n} \mid X_{i} > x_{n}) \leq q_{n}\bar{F}(x_{n}) + \alpha^{*}_{n,q_{n}}(x_{n}) \rightarrow 0 \end{array} $$

so that

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{i,i+k_{n}} \leq x_{n} \mid X_{i} > x_{n}) = \mathbb{P}(M_{i,i+p_{n}} \leq x_{n}, M_{i+p_{n}+q_{n},i+k_{n}} \leq x_{n} \mid X_{i} > x_{n}) + o(1). \end{array} $$

In a similar way, since \(\{ M_{i+k_{n}} \leq x_{n} \} \subseteq \{ M_{i+p_{n}+q_{n}, i + k_{n}} \leq x_{n} \},\) we have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(M_{i+p_{n}+q_{n}, i + k_{n}} \leq x_{n} ) - \mathbb{P}(M_{i+k_{n}} \leq x_{n} ) &=& \mathbb{P}(M_{i+p_{n}+q_{n}} > x_{n}, M_{i+p_{n}+q_{n},i+k_{n}} \leq x_{n}) \\ & \leq& (i+p_{n}+q_{n})\bar{F}(x_{n}) \rightarrow 0, \end{array} $$

so that \(\mathbb {P}(M_{i+p_{n}+q_{n},i+k_{n}} \leq x_{n}) = \mathbb {P}(M_{i+k_{n}} \leq x_{n}) + o(1).\) Now we can derive the limiting distribution of \(\bar {F}(x_{n})T_{i}(x_{n})\). We have

$$ \begin{array}{@{}rcl@{}} \mathbb{P}(\bar{F}(x_{n})T_{i}(x_{n}) > t) &=& \mathbb{P}(T_{i}(x_{n}) > k_{n}) = \mathbb{P}(X_{i+1} \leq x_{n},\ldots, X_{i+k_{n}} \leq x_{n} \mid X_{i} > x_{n}) \\ & =&\mathbb{P}(M_{i,i+k_{n}} \leq x_{n} \mid X_{i} > x_{n}) \\ & =& \mathbb{P}(M_{i,i+p_{n}} \leq x_{n}, M_{i+p_{n}+q_{n},i+k_{n}} \leq x_{n} \mid X_{i} > x_{n}) + o(1) \\ & =& \mathbb{P}(M_{i,i+p_{n}} \leq x_{n} \mid X_{i}> x_{n}) \, \mathbb{P}(M_{i+p_{n}+q_{n},i+k_{n}} \leq x_{n} \mid X_{i} > x_{n}, M_{i,i+p_{n}} \leq x_{n}) + o(1) \\ & =& \big\{\theta_{i} + o(1) \big\}\big\{\mathbb{P}(M_{i+k_{n}} \leq x_{n}) + o(1) \big\} + o(1). \end{array} $$
(69)

Now we focus on the term \(\mathbb {P}(M_{i+k_{n}} \leq x_{n})\) appearing in Eq. 69. Since \(\mathbb {P}(M_{k_{n}} \leq x_{n}) - \mathbb {P}(M_{i+k_{n}} \leq x_{n}) \leq i\bar {F}(x_{n})\) we have \(\mathbb {P}(M_{i+k_{n}} \leq x_{n}) = \mathbb {P}(M_{k_{n}} \leq x_{n})+ o(1).\) Since kn = O(n) we have \(\alpha ^{*}_{k_{n},q_{n}}(x_{n}) \to 0\) and so \(\alpha _{k_{n},q_{n}}(x_{n}) \to 0\) also. Thus we may find a sequence \(p_{n}^{\prime } = o(n)\) such that \(k_{n}\alpha _{k_{n},q_{n}}(x_{n}) = o(p_{n}^{\prime })\) and \(q_{n} = o(p_{n}^{\prime })\); for example, we may take \(p_{n}^{\prime } = \lfloor {(k_{n} \max \limits \{q_{n}, k_{n}\alpha _{k_{n},q_{n}}(x_{n})\})^{1/2}}\rfloor .\) Then applying Theorem 1 to the first kn terms we get \(\mathbb {P}(M_{k_{n}} \leq x_{n}) = F(x_{n})^{k_{n}\gamma _{n}^{\prime }} + o(1)\) where \(\gamma _{n}^{\prime } = k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n}^{\prime }\) with \(\theta _{j,n}^{\prime } = \mathbb {P}(M_{j,j+p_{n}^{\prime }} \leq x_{n} \mid X_{j} > x_{n})\).

We now verify that \(\gamma _{n}^{\prime }\) has limiting value \(\gamma = \lim _{n\to \infty }n^{-1}{\sum }_{j=1}^{n}\theta _{j}\). Define sequences an, bn and \(k_{n}^{\prime }\) by \(a_{n} = \max \limits \{p_{n}, p_{n}^{\prime }\}, b_{n} = \min \limits \{p_{n}, p_{n}^{\prime }\}\) and \(k_{n}^{\prime } = k_{n} + a_{n}\), and note that \(k_{n}^{\prime } = O(n)\). Then, with the usual notation \(\theta _{j,n} = \mathbb {P}(M_{j,j+p_{n}} \leq x_{n} \mid X_{j} > x_{n})\), we have for all 1 ≤ j ≤ kn that, for sufficiently large n, \(|\theta _{j,n}^{\prime } - \theta _{j,n}| \leq \mathbb {P}(M_{j+b_{n}, j+a_{n}} > x_{n} \mid X_{j} > x_{n}) \leq |p_{n} - p_{n}^{\prime }| \bar {F}(x_{n}) + \alpha ^{*}_{k_{n}^{\prime },q_{n}}(x_{n})\), where we have used the fact that bn > qn for sufficiently large n and that \(\alpha ^{*}_{n,q}(u)\) is non-decreasing in n for fixed q and u. Therefore, \(|k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n} - k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n}^{\prime }| \leq |p_{n} - p_{n}^{\prime }| \bar {F}(x_{n}) + \alpha ^{*}_{k_{n}^{\prime },q_{n}}(x_{n})\to 0\), and so \(k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n} \) and \(k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n}^{\prime }\) have the same limit, which equals γ since \(|k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j,n} - k_{n}^{-1}{\sum }_{j=1}^{k_{n}}\theta _{j}| \leq \max \limits _{1\leq j\leq k_{n}}|\theta _{j} - \theta _{j,n}| \to 0.\)

Finally, we have \(\mathbb {P}(M_{i+k_{n}} \leq x_{n}) = \mathbb {P}(M_{k_{n}} \leq x_{n})+ o(1) = F(x_{n})^{k_{n}(\gamma + o(1))} = [e^{-t} + o(1)]^{\gamma + o(1)}\) since \(n\bar {F}(x_{n})\to \tau \) implies that \(k_{n}\bar {F}(x_{n})\to t\) which in turn implies that \(F(x_{n})^{k_{n}}\to e^{-t}\). Substituting \(\mathbb {P}(M_{i+k_{n}} \leq x_{n}) = e^{-\gamma t} + o(1)\) in Eq. 69 then gives the result.
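The limit in Theorem 4 also lends itself to a direct simulation check. The sketch below (illustrative only; it requires NumPy ≥ 1.20 for sliding_window_view, and the stationary 3-term moving maximum is a hypothetical example with θi = θ = γ = 1/3 and \(\bar {F}(x) = 1 - x^{3}\)) compares the empirical tail of the normalized inter-exceedance times \(\bar {F}(x_{n})T_{i}(x_{n})\) with the limit \(\theta e^{-\gamma t}\).

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative simulation of Theorem 4 for the hypothetical stationary
# moving maximum X_t = max(Z_t, Z_{t-1}, Z_{t-2}), Z_t iid U(0,1), for
# which theta_i = theta = gamma = 1/3 and Fbar(x) = 1 - x**3.
m, x, N = 3, 0.9995, 4_000_000
z = rng.uniform(size=N + m - 1)
X = np.max(np.lib.stride_tricks.sliding_window_view(z, m), axis=1)
exc = np.flatnonzero(X > x)                  # exceedance times of level x
T = np.diff(exc) * (1 - x**m)                # normalised gaps Fbar(x)*T_i(x)
theta = 1 / m
for t in [0.5, 1.0, 2.0]:
    print(f"t = {t}: empirical = {np.mean(T > t):.3f}, "
          f"theta*exp(-gamma*t) = {theta * np.exp(-theta * t):.3f}")
```

Exceedances of x occur in clusters of limiting mean size 3, so roughly a third of the normalized gaps are of order one while the rest are nearly zero, matching the atom of mass 1 − θ at the origin implied by the limit.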