1 Introduction

In the infinite multinomial occupancy scheme balls are thrown independently into a series of boxes, so that each ball hits box \(k=1,2,\ldots \) with probability \(p_k\), where \(p_k>0\) and \(\sum _{k\in \mathbb {N}}p_k=1\). This classical model is sometimes named after Karlin due to his seminal contribution [32]. Features of the occupancy pattern emerging after the first n balls are thrown have been studied intensely; see [6, 20, 28] for surveys and references and [7, 13, 14, 16] for recent advances. The statistics in the focus of most of the previous work, which are also relevant to the subject of this paper, are not sensitive to the labelling of boxes; rather, they depend only on the integer partition of n formed by the nonzero occupancy numbers.

In the infinite occupancy scheme in a random environment the (hitting) probabilities of boxes are positive random variables \((P_k)_{k\in \mathbb {N}}\) with an arbitrary joint distribution satisfying \(\sum _{k\in \mathbb {N}}P_k=1\) almost surely (a.s.). Conditionally on \((P_k)_{k\in \mathbb {N}}\), balls are thrown independently, with probability \(P_k\) of hitting box k. Instances of this general setup have received considerable attention within the circle of questions around exchangeable partitions, discrete random measures and their applications to population genetics, Bayesian statistics and computer science. In the most studied and analytically most tractable case the probabilities of boxes are representable by the residual allocation (or stick-breaking) model

$$\begin{aligned} P_k=U_1U_2\ldots U_{k-1}(1-U_k),\quad k\in \mathbb {N}, \end{aligned}$$
(1)

where the \(U_i\)’s are independent with the beta\((\theta ,1)\) distribution on (0, 1) and \(\theta >0\). In this case the distribution of the sequence \((P_k)_{k\in \mathbb {N}}\) is known as the Griffiths–Engen–McCloskey (\({\mathrm{GEM}}\)) distribution with parameter \(\theta \). The sequence of the \(P_k\)’s arranged in decreasing order has the Poisson–Dirichlet (\({\mathrm{PD}}\)) distribution with parameter \(\theta \), and the induced exchangeable partition of the set of n balls follows the celebrated Ewens sampling formula [3, 35, 37, 38]. Generalisations have been proposed in various directions. The two-parameter extension due to Pitman and Yor [35] involves probabilities of the form (1) with independent but not identically distributed \(U_i\)’s, where the distribution of \(U_i\) is \(\hbox {beta}(\theta +\alpha i, 1-\alpha )\) (with \(0<\alpha <1\) and \(\theta >-\alpha \)). Residual allocation models in which the \(U_i\)’s have other beta distributions are found in [30, 39]. Much effort has been devoted to the occupancy scheme, known as the Bernoulli sieve, which is based on a homogeneous residual allocation model (1), that is, with independent and identically distributed (iid) factors \(U_i\) having an arbitrary distribution on (0, 1), see [2, 15, 22, 28, 29, 36]. The homogeneous model has a multiplicative regenerative property, which is also inherited by the partition of the set of balls.
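
For concreteness, the model (1) is straightforward to simulate. The following minimal sketch (ours, not taken from the cited works) samples a truncated \({\mathrm{GEM}}(\theta)\) sequence; the truncation depth K is an artefact of the illustration, since the true sequence is infinite.

import numpy as np

def gem_probabilities(theta: float, K: int, rng=np.random.default_rng()):
    """Sample P_1, ..., P_K from (1) with independent beta(theta, 1) factors."""
    u = rng.beta(theta, 1.0, size=K)                      # U_1, ..., U_K
    prefix = np.concatenate(([1.0], np.cumprod(u[:-1])))  # U_1 * ... * U_{k-1}
    return prefix * (1.0 - u)                             # P_k = U_1...U_{k-1}(1 - U_k)

p = gem_probabilities(theta=1.0, K=1000)
print(p.sum())   # close to 1; the missing mass U_1*...*U_K is the truncation error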

In more sophisticated constructions of random environments the probabilities \((P_k)_{k\in \mathbb {N}}\) are identified with the masses of a purely atomic random probability measure, arranged in sequence in some way. A widely explored possibility is to define a random cumulative distribution function F by transforming the path of an increasing drift-free Lévy process (subordinator) \((X(t))_{t\ge 0}\). In particular, in the regenerative model F is defined by \(F(t)=1-e^{-X(t)}\) for \(t\ge 0\), see [5, 21, 24, 25] and also Sect. 5. Such an F is called a neutral-to-the-right prior in the statistical literature [18]. In the Poisson–Kingman model F is given by \(F(t)= X(t)/X(1)\) for \(t\in [0,1]\), see [18, 35] and also Sect. 6.

Following [8, 12, 31] we shall study a nested infinite occupancy scheme in a random environment. In this context we regard \((P_k)_{k\in \mathbb {N}}\) as a random fragmentation law (with \(P_k>0\) and \(\sum _{k\in \mathbb {N}}P_k=1\) a.s.). To introduce a hierarchy of boxes, for each \(j\in \mathbb {N}_0\) let \(\mathcal {I}_j\) be the set of words of length j over \(\mathbb {N}\), where \(\mathcal {I}_0:=\{\varnothing \}\). The set \(\mathcal {I}=\bigcup _{j\in \mathbb {N}_0} \mathcal {I}_j\) of all finite words has the natural structure of an infinite tree with root \(\varnothing \) and \(\infty \)-ary branching at every node, where \(v1, v2,\ldots \in \mathcal {I}_{j+1}\) are the immediate followers of \(v\in \mathcal {I}_j\). Let \(\{(P_k^{(v)})_{k\in \mathbb {N}}\), \(v\in \mathcal {I}\}\) be a family of independent copies of \((P_k)_{k\in \mathbb {N}}\). With each \(v\in \mathcal {I}\) we associate a box divided into sub-boxes \(v1, v2,\dots \) of the next level. The probabilities of boxes are defined recursively by

$$\begin{aligned} P(\varnothing )=1, ~~~P(vk)=P(v)P_k^{(v)}~~~\mathrm{for~}v\in \mathcal {I}, k\in \mathbb {N}\end{aligned}$$
(2)

(note that the factors P(v) and \(P_k^{(v)}\) are independent). Given \((P(v))_{v\in \mathcal {I}}\), balls are thrown independently, with probability P(v) of hitting box v. Since \(\sum _{v\in \mathcal {I}_j}P(v)=1\), the allocation of balls in the boxes of level j occurs according to Karlin’s ordinary occupancy scheme.
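
The recursive construction (2) can be explored by letting each ball descend the tree on its own, sampling the fragmentation law of a node lazily on first visit. The sketch below is an illustration only; it reuses the hypothetical helper gem_probabilities from the previous snippet and inherits its truncation at K terms.

import numpy as np

rng = np.random.default_rng(7)
K = 1000       # truncation of every copy of the fragmentation law
laws = {}      # node word v  ->  sampled (P_k^{(v)})_k, created on demand

def law_at(node):
    if node not in laws:
        p = gem_probabilities(theta=1.0, K=K, rng=rng)
        laws[node] = p / p.sum()              # renormalise the truncated law
    return laws[node]

def drop_ball(levels: int):
    """Return the ball's box in I_1, I_2, ..., I_levels as tuples (words)."""
    node, path = (), []
    for _ in range(levels):
        k = rng.choice(K, p=law_at(node))     # child k with probability P_k^{(node)}
        node = node + (k,)
        path.append(node)
    return path

paths = [drop_ball(3) for _ in range(100)]    # 100 balls through 3 levels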

Recursion (2) defines a discrete-time mass-fragmentation process, in which a generic mass splits in proportions sampled from the same fragmentation law, independently of the history and of the masses of the co-existing fragments. The nested occupancy scheme can be seen as a combinatorial version of this fragmentation process. Initially all balls are placed in box \(\varnothing \), and at each consecutive step \(j+1\) each ball in box \(v\in \mathcal {I}_j\) is placed in sub-box vk with probability \(P_k^{(v)}\). The inclusion relation on the hierarchy of boxes induces a combinatorial structure on the (labelled) set of balls called a total partition, that is, a sequence of refinements from the trivial one-block partition down to the partition into singletons. The paper [17] highlights the role of exchangeability and gives the general de Finetti-style connection between mass-fragmentations and total partitions.

We consider the random probabilities of the hierarchy of boxes and the outcome of throwing infinitely many balls, all defined on the same underlying probability space. For \(j,r\in \mathbb {N}\), denote by \(K_{n,j,r}\) the number of boxes \(v\in \mathcal {I}_j\) at the jth level that contain exactly r of the first n balls, and let

$$\begin{aligned} K_{n,j}(s):=\sum _{r=\lceil n^{1-s} \rceil }^n K_{n,j,r},\quad s\in [0,1], \end{aligned}$$
(3)

be a cumulative count of occupied boxes, where \(\lceil \,\cdot \,\rceil \) is the integer ceiling function. With probability one the random function \(s\mapsto K_{n,j}(s)\) is nondecreasing and right-continuous, hence belongs to the Skorokhod space D[0, 1]. Also observe that \(K_{n,j}(0)=K_{n,j,n}\) is zero unless all balls fall in the same box, and that \(K_{n,j}(1)\) is the number of occupied boxes at the jth level. In [8] a central limit theorem with random centering was proved for \(K_{n,j}(1)\) with j growing with n at a certain rate. Our focus is different. We are interested in the joint weak convergence of \((K_{n,j}(s))_{j\in \mathbb {N}, s\in [0,1]}\), properly normalised and centered, as the number of balls n tends to \(\infty \). As far as we know, this question has not been addressed so far. We prove a multivariate functional limit theorem (Theorem 2.1) applicable to the fragmentation laws representable by homogeneous residual allocation models (including the \({\mathrm{GEM}}/{\mathrm{PD}}\) distribution) and some other models in which the sequence of \(P_k\)’s arranged in decreasing order approaches zero sufficiently fast. A univariate functional limit for \((K_{n,1}(s))_{s\in [0,1]}\) in the case of the Bernoulli sieve was previously obtained in [2].
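
In simulations, \(K_{n,j}(s)\) is read off directly from the level-j occupancy counts; a sketch in our own notation, where labels holds the level-j box of each of the n balls (for instance the jth entries of the paths produced above):

from collections import Counter
import math

def K_nj(labels, s: float) -> int:
    """Cumulative count (3): boxes holding at least ceil(n^(1-s)) of the n balls."""
    n = len(labels)
    threshold = math.ceil(n ** (1.0 - s))
    occupancy = Counter(labels)               # box label -> occupancy number
    return sum(1 for r in occupancy.values() if r >= threshold)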

2 Main result

For a given fragmentation law \((P_k)_{k\in \mathbb {N}}\), let \(\rho (s):=\#\{k\in \mathbb {N}{:}\,P_k\ge 1/s\}\) for \(s>0\), and \(N(t):=\rho (e^t), V(t):=\mathbb {E}N(t)\) for \(t\in \mathbb {R}\). The joint distribution of the \(K_{n,j,r}\)’s is completely determined by the probability law of the random function \(\rho (\cdot )\), which captures the fragmentation law up to a re-arrangement of the \(P_k\)’s. Therefore, for our purposes we need not distinguish between fragmentation laws with the same \(\rho (\cdot )\).

Similarly, using the probabilities of boxes at level \(j\in \mathbb {N}\), define \(\rho _j(s):=\#\{v\in \mathcal {I}_j{:}\,P(v)\ge 1/s\}\) for \(s>0\), and \(N_j(t):=\rho _j(e^t), V_j(t):=\mathbb {E}N_j(t)\) for \(t\in \mathbb {R}\). Note that \(N_j(t)=0\) for \(t<0\). Since \(\sum _{v\in {\mathcal {I}}_j}P(v)=1\) a.s. we have \(\rho _j(s)\le s\), whence \(N_j(t)\le e^t\) a.s. and \(V_j(t)\le e^t\).
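
On a sampled (truncated) law these counting functions are one-liners; a sketch under the same conventions as the previous snippets:

import numpy as np

def rho(p, s: float) -> int:
    """rho(s) = #{k : P_k >= 1/s}; with level-j masses this is rho_j(s)."""
    return int(np.count_nonzero(np.asarray(p) >= 1.0 / s))

def N(p, t: float) -> int:
    """N(t) = rho(e^t); it vanishes for t < 0 automatically since all P_k <= 1."""
    return rho(p, float(np.exp(t)))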

Let \(T_k:=-\log P_k\) for \(k\in \mathbb {N}\). Here is a basic decomposition of principal importance for what follows:

$$\begin{aligned} N_j(t)=\sum _{k\in \mathbb {N}} N_{j-1}^{(k)}(t-T_k),\quad t\in \mathbb {R}, \end{aligned}$$
(4)

where \((N_{j-1}^{(k)}(t))_{t\ge 0}\) for \(k\in \mathbb {N}\) are independent copies of \((N_{j-1}(t))_{t\ge 0}\) which are also independent of \(T_1\), \(T_2,\ldots \). A consequence of (4) is a recursion for the expectations

$$\begin{aligned} V_j(t)=\int _{[0,\,t]}V_{j-1}(t-y)\mathrm{d}V(y),\quad t\ge 0,~j\ge 2, \end{aligned}$$
(5)

which shows that \(V_j(\cdot )\) is the jth convolution power of \(V(\cdot )\).

The assumptions on the fragmentation law and the functional limit will involve a centered Gaussian process \(W:=(W(s))_{s\ge 0}\) which is a.s. locally Hölder continuous with exponent \(\beta >0\) and satisfies \(W(0)=0\). In particular, for any \(T>0\)

$$\begin{aligned} |W(x)-W(y)|\le M_T|x-y|^\beta ,\quad 0\le x,y\le T \end{aligned}$$
(6)

for some a.s. finite random variable \(M_T\). For each \(u>0\), we further set

$$\begin{aligned} R^{(u)}_1(s):=W(s),\quad R^{(u)}_j(s):=\int _{[0,\,s]}(s-y)^{u(j-1)}\mathrm{d}W(y),\quad s\ge 0,~ j\ge 2. \end{aligned}$$

For \(j\ge 2\), the process \(R^{(u)}_j\) is understood as the result of integration by parts

$$\begin{aligned} R_j^{(u)}(s)=u(j-1)\int _0^s (s-y)^{u(j-1)-1}W(y)\mathrm{d}y,\quad s\ge 0. \end{aligned}$$

In particular, when \(u(j-1)\) is a positive integer,

$$\begin{aligned} R^{(u)}_j(s)=(u(j-1))!\int _0^{s_1}\int _0^{s_2}\ldots \int _0^{s_{u(j-1)}} W(y)\mathrm{d}y\mathrm{d}s_{u(j-1)}\ldots \mathrm{d}s_2,\quad s\ge 0,~j\ge 2, \end{aligned}$$

where \(s_1=s\), which can be seen with the help of repeated integration by parts.
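
Numerically, \(R^{(u)}_j\) is conveniently evaluated through the integration-by-parts formula above. The following sketch (ours) approximates it from a path of W sampled on a uniform grid; taking W to be a Brownian path and \(u=1\), \(j=2\) recovers integrated BM.

import numpy as np

def R_j(W, grid, u: float, j: int):
    """Approximate R_j^{(u)}(s) = u(j-1) int_0^s (s-y)^{u(j-1)-1} W(y) dy on the grid."""
    q = u * (j - 1)
    out = np.zeros_like(grid)
    for i in range(2, grid.size):
        y = grid[:i]                       # nodes strictly below s = grid[i]
        out[i] = q * np.trapz((grid[i] - y) ** (q - 1.0) * W[:i], y)
    return out

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 1001)
W = np.concatenate(([0.0], np.cumsum(rng.normal(scale=np.sqrt(np.diff(grid))))))
print(R_j(W, grid, u=1.0, j=2)[-1])        # approximates int_0^1 W(y) dy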

Throughout the paper \(D:=D[0,\infty )\) and D[0, 1] denote the standard Skorokhod spaces. Here is our main result.

Theorem 2.1

Assume the following conditions hold:

  1. (i)
    $$\begin{aligned} b_0+b_1 t^{\omega -\varepsilon _1}\le V(t)-c t^\omega \le a_0+a_1t^{\omega -\varepsilon _2} \end{aligned}$$
    (7)

    for all \(t\ge 0\) and some constants \(c,\omega , a_0, a_1>0\), \(0<\varepsilon _1, \varepsilon _2\le \omega \) and \(b_0, b_1\in \mathbb {R}\),

  2. (ii)
    $$\begin{aligned} \mathbb {E}\sup _{s\in [0,\,t]}(N(s)-V(s))^2=O(t^{2\gamma }),\quad t\rightarrow \infty \end{aligned}$$
    (8)

    for some \(\gamma \in (\omega - \min (1,\varepsilon _1, \varepsilon _2), \omega )\).

  3. (iii)
    $$\begin{aligned} \frac{N(t\,\cdot )-c(t\,\cdot )^\omega }{at^\gamma }~\Rightarrow ~ W(\cdot ),\quad t\rightarrow \infty \end{aligned}$$
    (9)

    in the \(J_1\)-topology on D for some \(a>0\) and the same \(\gamma \) as in (8).

Then

$$\begin{aligned} \bigg (\frac{K_{n,j}(\cdot )- c_j(\log n(\cdot ))^{\omega j}}{ac_{j-1}(\log n)^{\gamma +\omega (j-1)}}\bigg )_{j\in \mathbb {N}}~\Rightarrow ~(R^{(\omega )}_j(\cdot ))_{j\in \mathbb {N}},\quad n\rightarrow \infty \end{aligned}$$
(10)

in the \(J_1\)-topology on \(D[0,1]^\mathbb {N}\), where

$$\begin{aligned} c_j:=\frac{(c\Gamma (\omega +1))^j}{\Gamma (\omega j+1)},\quad j\ge 0 \end{aligned}$$
(11)

with \(\Gamma (\cdot )\) denoting the gamma function.

Remark 2.2

Observe that the limit processes in (10) are the restrictions of \(R^{(\omega )}_j\) to [0, 1]. We could have defined the processes \(R^{(u)}_j\) on [0, 1] only and assumed that (9) holds on D[0, 1] rather than on D. However, we do not think that such an assumption would be more natural than the present one.

Remark 2.3

The assumption \(0<\varepsilon _1, \varepsilon _2\le \omega \) ensures that \(\gamma >0\). Furthermore, in view of (7) and the choice of \(\gamma \) relation (9) is equivalent to

$$\begin{aligned} \frac{N(t\,\cdot )-V(t\,\cdot )}{at^\gamma }~\Rightarrow ~ W(\cdot ),\quad t\rightarrow \infty \end{aligned}$$
(12)

in the \(J_1\)-topology on D. Similarly, in view of (13) given below relation (10) is equivalent to

$$\begin{aligned} \bigg (\frac{K_{n,j}(\cdot )- V_j(\log n(\cdot ))}{ac_{j-1}(\log n)^{\gamma +\omega (j-1)}}\bigg )_{j\in \mathbb {N}}~\Rightarrow ~(R^{(\omega )}_j(\cdot ))_{j\in \mathbb {N}},\quad n\rightarrow \infty \end{aligned}$$

in the \(J_1\)-topology on \(D[0,1]^\mathbb {N}\).

3 Proof of Theorem 2.1

3.1 Auxiliary results

Lemma 3.1

  1. (a)

    Condition (7) ensures that, for \(j\in \mathbb {N}\) and \(t\ge 0\),

    $$\begin{aligned} b_{0,j}+b_{1,j}t^{\omega j-\varepsilon _1} \le V_j(t)- c_j t^{\omega j}\le a_{0,j}+a_{1, j}t^{\omega j-\varepsilon _2}, \end{aligned}$$
    (13)

    where \(c_j\) is given by (11), \(a_{0,j}, a_{1,j}>0\) and \(b_{0,j}, b_{1,j}\in \mathbb {R}\) are constants with \(a_{0,1}:=a_0\), \(a_{1,1}:=a_1\), \(b_{0,1}:=b_0\) and \(b_{1,1}:=b_1\). In particular, for \(j\in \mathbb {N}\),

    $$\begin{aligned} V_j(t)~\sim ~c_j t^{\omega j},\quad t\rightarrow \infty \end{aligned}$$
    (14)

    and, for \(j\in \mathbb {N}\) and \(u,v\ge 0\),

    $$\begin{aligned} V_j(u+v)-V_j(v)\le & {} c_j ({{\,\mathrm{\mathbb {1}}\,}}_{\{\omega j \in (0,1]\}}u^{\omega j}+{{\,\mathrm{\mathbb {1}}\,}}_{\{\omega j>1\}}\omega j(u+v)^{\omega j-1}u)\nonumber \\&+\,a_{0,j}+a_{1,j}(u+v)^{\omega j-\varepsilon _2}-b_{0,j}- b_{1,j}v^{\omega j-\varepsilon _1}.\nonumber \\ \end{aligned}$$
    (15)
  2. (b)

    Suppose (7) and (8). Then

    $$\begin{aligned} \lim _{t\rightarrow \infty }\frac{N(t)}{V(t)}=1\quad \text {a.s.} \end{aligned}$$
    (16)
  3. (c)

    Suppose (7) and (8). Then, for \(j\in \mathbb {N}\),

    $$\begin{aligned} \mathbb {E}\sup _{s\in [0,\,t]}(N_j(s)-V_j(s))^2=O(t^{2\gamma +2\omega (j-1)}),\quad t\rightarrow \infty . \end{aligned}$$
    (17)

Proof

(a)   We only prove the second inequality in (13). To this end, we first check that for any \(b>0\)

$$\begin{aligned} \int _{[0,\,t]}(t-y)^b\mathrm{d}V(y)\le a_0 t^b+ba_1\mathrm{B}(b,1+\omega -\varepsilon ) t^{\omega -\varepsilon +b}+bc\mathrm{B}(b, 1+\omega )t^{\omega +b}, \end{aligned}$$

where \(\mathrm{B}(\cdot ,\cdot )\) is the beta function, and we write \(\varepsilon \) for \(\varepsilon _2\) to ease notation. Indeed, using (7) we obtain

$$\begin{aligned} \int _{[0,\,t]}(t-y)^b\mathrm{d}V(y)= & {} b\int _0^t (V(t-y)-c(t-y)^\omega )y^{b-1}\mathrm{d}y\\&+\,bc\int _0^t (t-y)^\omega y^{b-1}\mathrm{d}y\\\le & {} ba_0\int _0^t y^{b-1}\mathrm{d}y+ba_1\int _0^t (t-y)^{\omega -\varepsilon }y^{b-1}\mathrm{d}y\\&+\,bc\int _0^t (t-y)^\omega y^{b-1}\mathrm{d}y\\= & {} a_0 t^b+ba_1\mathrm{B}(b,1+\omega -\varepsilon ) t^{\omega -\varepsilon +b}+bc\mathrm{B}(b, 1+\omega )t^{\omega +b}. \end{aligned}$$

To prove the second inequality in (13) we use induction. The case \(j=1\) is covered by (7). Assume the inequality holds for \(j=k-1\). Then, recalling (5), we obtain for \(t\ge 0\)

$$\begin{aligned} V_k(t)&=\int _{[0,\,t]}(V_{k-1}(t-y)-c_{k-1}(t-y)^{\omega (k-1)})\mathrm{d}V(y)\\&\quad +\,c_{k-1}\int _{[0,\,t]} (t-y)^{\omega (k-1)}\mathrm{d}V(y)\\&\le a_{0,k-1} V(t)+a_{1,k-1}\int _{[0,\,t]}(t-y)^{\omega (k-1)-\varepsilon }\mathrm{d}V(y)\\&\quad +\,c_{k-1}\int _{[0,\,t]}(t-y)^{\omega (k-1)}\mathrm{d}V(y)\\&\le a_{0, k-1}V(t)+a_{1, k-1}\big (a_0 t^{\omega (k-1)-\varepsilon }\\&\quad +\,(\omega (k-1)-\varepsilon ) a_1\mathrm{B}(\omega (k-1)-\varepsilon , 1+\omega -\varepsilon ) t^{\omega k-2\varepsilon }\\&\quad +\,(\omega (k-1)-\varepsilon ) c \mathrm{B}(\omega (k-1)-\varepsilon , 1+\omega )t^{\omega k-\varepsilon }\big )\\&\quad +\,c_{k-1}\big (a_0 t^{\omega (k-1)}+\omega (k-1)a_1\mathrm{B}(\omega (k-1),1+\omega -\varepsilon ) t^{\omega k-\varepsilon }\\&\quad +\,\omega (k-1)c\mathrm{B}(\omega (k-1), 1+\omega )t^{\omega k}\big )\le c_k t^{\omega k}+a_{0,k}+a_{1,k}t^{\omega k-\varepsilon } \end{aligned}$$

for appropriate positive \(a_{0,k}\) and \(a_{1,k}\), where we used

$$\begin{aligned} c_k=c_{k-1}\omega (k-1)c\mathrm{B}(\omega (k-1), 1+\omega ). \end{aligned}$$
(18)
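
As a quick numeric sanity check (ours, not part of the proof), the closed form (11) indeed satisfies the recursion (18) for arbitrary \(c>0\) and \(\omega >0\):

from math import gamma

def c_j(c: float, omega: float, j: int) -> float:
    return (c * gamma(omega + 1.0)) ** j / gamma(omega * j + 1.0)   # formula (11)

def beta_fn(a: float, b: float) -> float:
    return gamma(a) * gamma(b) / gamma(a + b)                       # B(a, b)

c, omega = 2.0, 0.6
for k in range(2, 7):
    rhs = c_j(c, omega, k - 1) * omega * (k - 1) * c * beta_fn(omega * (k - 1), 1.0 + omega)
    print(k, c_j(c, omega, k), rhs)        # the two columns agree up to rounding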

Further, (14) is an immediate consequence of (13). To prove (15), we use (13) to obtain, for \(j\in \mathbb {N}\) and \(u,v\ge 0\),

$$\begin{aligned} V_j(u+v)-V_j(v)\le c_j ((u+v)^{\omega j}-v^{\omega j})+ a_{0,j}+a_{1,j}(u+v)^{\omega j-\varepsilon _2}-b_{0,j}- b_{1,j}v^{\omega j-\varepsilon _1}. \end{aligned}$$

If \(\omega j\in (0,1]\), we have \((u+v)^{\omega j}-v^{\omega j}\le u^{\omega j}\) by subadditivity. If \(\omega j>1\), we have \((u+v)^{\omega j}-v^{\omega j}\le \omega j(u+v)^{\omega j-1} u\) by the mean value theorem and monotonicity. This completes the proof of (15).

(b)  Condition (8) ensures that \(\mathrm{Var}\,N(t)=O(t^{2\gamma })\) as \(t\rightarrow \infty \). Pick any \(\delta >0\) such that \(\delta (\omega -\gamma )>1/2\). An application of Markov’s inequality yields, for any \(\varepsilon >0\) and positive integer \(\ell \),

$$\begin{aligned} \mathbb {P}\{|N(\ell ^\delta )-V(\ell ^\delta )|>\varepsilon V(\ell ^\delta )\}\le \frac{\mathrm{Var}\,N(\ell ^\delta )}{\varepsilon ^2 (V(\ell ^\delta ))^2}=O(\ell ^{-2\delta (\omega -\gamma )}),\quad \ell \rightarrow \infty . \end{aligned}$$

This entails \(\lim _{\ell \rightarrow \infty }(N(\ell ^\delta )/V(\ell ^\delta ))=1\) a.s. by the Borel–Cantelli lemma. For any \(t>1\) there exists an integer \(\ell \ge 2\) such that \((\ell -1)^\delta <t\le \ell ^\delta \), whence, by monotonicity,

$$\begin{aligned} \frac{N((\ell -1)^\delta )}{V((\ell -1)^\delta )}\frac{V((\ell -1)^\delta )}{V(\ell ^\delta )}\le \frac{N(t)}{V(t)}\le \frac{N(\ell ^\delta )}{V(\ell ^\delta )}\frac{V(\ell ^\delta )}{V((\ell -1)^\delta )}. \end{aligned}$$

Since \(\lim _{\ell \rightarrow \infty }(V(\ell ^\delta )/V((\ell -1)^\delta ))=1\) we infer (16).

(c)  We use induction on j. When \(j=1\), relation (17) holds according to (8). Assuming that (17) holds for \(j=i-1\) we intend to show that it also holds for \(j=i\).

Recalling (4), write, for \(i\ge 2\) and \(t\ge 0\),

$$\begin{aligned} N_i(t)-V_i(t)= & {} \sum _{k\in \mathbb {N}} \big (N^{(k)}_{i-1}(t-T_k)-V_{i-1}(t-T_k)\big )\nonumber \\&+\,\bigg (\sum _{k\in \mathbb {N}} V_{i-1}(t-T_k)-V_i(t)\bigg )=:X_i(t)+Y_i(t). \end{aligned}$$
(19)

An integration by parts yields, for \(s\ge 0\),

$$\begin{aligned} |Y_i(s)|= & {} \Big |\int _{[0,\,s]}V_{i-1}(s-x)\mathrm{d}(N_1(x)-V_1(x))\Big |\\\le & {} \int _{[0,\,s]}|N_1(s-x)-V_1(s-x)|\mathrm{d}V_{i-1}(x)\\\le & {} \sup _{y\in [0,\,s]}|N_1(y)-V_1(y)| V_{i-1}(s). \end{aligned}$$

Hence,

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}Y_i(s)\right] ^2\le \mathbb {E}\left[ \sup _{y\in [0,\,t]}(N(y)-V(y))\right] ^2 (V_{i-1}(t))^2=O(t^{2\gamma +2\omega (i-1)}), \\&\quad t\rightarrow \infty \end{aligned}$$

by (8) and (14).

Passing to the analysis of \(X_i\), an application of the Cauchy–Schwarz inequality (for each s, the sum contains at most \(N_1(s)\) nonzero terms) yields, for \(t\ge 0\),

$$\begin{aligned} \left[ \sup _{s\in [0,\,t]} X_i(s)\right] ^2\le & {} \sup _{s\in [0,\,t]}\Big ( N_1(s)\sum _{k\in \mathbb {N}} \big (N^{(k)}_{i-1}(s-T_k)-V_{i-1}(s-T_k)\big )^2{{\,\mathrm{\mathbb {1}}\,}}_{\{T_k\le s\}}\Big )\\\le & {} N_1(t)\sum _{k\in \mathbb {N}}\sup _{s\in [0,\,t]} \big (N^{(k)}_{i-1}(s)-V_{i-1}(s)\big )^2{{\,\mathrm{\mathbb {1}}\,}}_{\{T_k\le t\}}. \end{aligned}$$

Therefore, \(\mathbb {E}[\sup _{s\in [0,\,t]} X_i(s)]^2\le \mathbb {E}[N(t)]^2 \mathbb {E}[\sup _{s\in [0,\,t]}(N_{i-1}(s)-V_{i-1}(s))]^2=O(t^{2\gamma +2\omega (i-1)})\) as \(t\rightarrow \infty \) by the induction assumption and the asymptotics \(\mathbb {E}[N(t)]^2=\mathrm{Var}\,N(t)+(V(t))^2\sim (V(t))^2\) as \(t\rightarrow \infty \). It remains to note that

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\in [0,\,t]}(N_i(s)-V_i(s))\right] ^2\le & {} 2\left( \mathbb {E}\left[ \sup _{s\in [0,\,t]}X_i(s)\right] ^2+\mathbb {E}\left[ \sup _{s\in [0,\,t]}Y_i(s)\right] ^2\right) \\= & {} O(t^{2\gamma +2\omega (i-1)}), \quad t\rightarrow \infty . \end{aligned}$$

\(\square \)

Our main result, Theorem 2.1, is an immediate consequence of Proposition 3.7 given in Sect. 3.2, Theorem 3.2 given next and its corollary.

Theorem 3.2

Suppose (7), (8) and (9). Then

$$\begin{aligned} \bigg (\frac{N_j(t\cdot )-V_j(t\cdot )}{ac_{j-1} t^{\gamma +\omega (j-1)}}\bigg )_{j\in \mathbb {N}}~\Rightarrow ~(R^{(\omega )}_j(\cdot ))_{j\in \mathbb {N}} \end{aligned}$$
(20)

in the \(J_1\)-topology on \(D^\mathbb {N}\).

Corollary 3.3

In the setting of Theorem 3.2, for \(j\in \mathbb {N}\) and \(h>0\),

$$\begin{aligned} t^{-\gamma -\omega (j-1)}\sup _{y\in [0,\,1]}(N_j(yt+h)-N_j(yt))~\overset{\mathrm{P}}{\rightarrow }~0,\quad t\rightarrow \infty . \end{aligned}$$
(21)

It is convenient to prove Corollary 3.3 at this early stage.

Proof

Fix any \(j\in \mathbb {N}\). Since \(R_j^{(\omega )}\) is a.s. continuous, relation (20) in combination with (13) ensures that, for any \(h>0\),

$$\begin{aligned} \left( \frac{N_j(t\cdot )-c_j(t\cdot )^{\omega j}}{ac_{j-1}t^{\gamma +\omega (j-1)}}, \frac{N_j(t\cdot +h)-c_j(t\cdot +h)^{\omega j}}{ac_{j-1}t^{\gamma +\omega (j-1)}}\right) ~\Rightarrow ~ (R_j^{(\omega )}(\cdot ), R_j^{(\omega )}(\cdot )),\quad t\rightarrow \infty \end{aligned}$$

in the \(J_1\)-topology on \(D\times D\), whence

$$\begin{aligned}&t^{-\gamma -\omega (j-1)}\sup _{y\in [0,\,1]}(N_j(yt+h)-N_j(yt)-c_j((yt+h)^{\omega j}-(yt)^{\omega j}))~\overset{\mathrm{P}}{\rightarrow }~ 0,\\&\quad t\rightarrow \infty . \end{aligned}$$

Using

$$\begin{aligned} \sup _{y\in [0,\,1]}((yt+h)^{\omega j}-(yt)^{\omega j})\le & {} {{\,\mathrm{\mathbb {1}}\,}}_{\{\omega j \in (0,1]\}}h^{\omega j}+{{\,\mathrm{\mathbb {1}}\,}}_{\{\omega j>1\}}\omega j(t+h)^{\omega j-1}h \end{aligned}$$

we conclude that the right-hand side is \(o(t^{\gamma +\omega (j-1)})\) as \(t\rightarrow \infty \) because \(\gamma >\omega -1\) by assumption and also \(\gamma >0\) as explained in Remark 2.3. \(\square \)

Theorem 3.2 follows, in its turn, from Propositions 3.4 and 3.5. Below we use the processes \(X_j\) and \(Y_j\) as defined in (19).

Proposition 3.4

Suppose (7) and (9). Then

$$\begin{aligned} \bigg (\frac{N_1(t \cdot )-V_1(t \cdot )}{at^\gamma }, \bigg ( \frac{Y_j(t\cdot )}{ac_{j-1}t^{\gamma +\omega (j-1)}}\bigg )_{j\ge 2}\bigg )~\Rightarrow ~ (R^{(\omega )}_j(\cdot ))_{j\in \mathbb {N}},\quad t\rightarrow \infty , \end{aligned}$$
(22)

in the \(J_1\)-topology on \(D^\mathbb {N}\).

Proposition 3.5

Suppose (7), (8) and (9). Then, for each integer \(j\ge 2\) and each \(T>0\),

$$\begin{aligned} t^{-(\gamma +\omega (j-1))}\sup _{y\in [0,\,T]}X_j(ty)~\overset{\mathrm{P}}{\rightarrow }~0,\quad t\rightarrow \infty . \end{aligned}$$
(23)

3.2 Connecting two ways of box-counting

We retreat for a while from our main theme to focus on Karlin’s occupancy scheme with deterministic probabilities \((p_k)_{k\in \mathbb {N}}\). By the law of large numbers a box of probability p gets occupied by about np balls, provided np is big enough. This suggests relating the count of boxes occupied by at least \(n^{1-s}\) balls to the number of boxes with probability at least \(n^{-s}\). Let \(\bar{\rho }(t):=\#\{k\in \mathbb {N}{:}\,p_k\ge 1/t\}\) for \(t>0\), and let \(\bar{K}_{n,r}\) be the number of boxes containing exactly r out of n balls. We shall estimate uniformly the difference between

$$\begin{aligned} \bar{K}_n(s):=\sum _{r=\lceil n^{1-s}\rceil }^n \bar{K}_{n,r}\,,\quad s\in [0,1], \end{aligned}$$

and \((\bar{\rho }(n^s))_{s\in [0,1]}\). The following result is very close to Proposition 4.1 in [2]. However, we did not succeed in applying the cited proposition directly and will instead combine the estimates obtained in its proof.
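
As an illustration of this heuristic, the following sketch (our own toy example with geometric \(p_k=(1-q)q^{k-1}\), not an example from [2]) compares the two counts for a few values of s:

import numpy as np

rng = np.random.default_rng(3)
q, n = 0.5, 10**5
p = (1 - q) * q ** np.arange(60)          # tail mass below 2^{-60}, negligible
counts = np.bincount(rng.choice(p.size, size=n, p=p / p.sum()), minlength=p.size)

for s in (0.25, 0.5, 0.75, 1.0):
    K_bar = np.count_nonzero(counts >= np.ceil(n ** (1 - s)))   # bar{K}_n(s)
    rho_bar = np.count_nonzero(p >= n ** (-s))                  # bar{rho}(n^s)
    print(s, K_bar, rho_bar)              # the two counts stay close for large n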

Proposition 3.6

The following universal estimate holds for each \(n\in \mathbb {N}\)

$$\begin{aligned}&\mathbb {E}\sup _{s\in [0,1]}\big |\bar{K}_n(s)-\bar{\rho }(n^s)\big |\le 4\big (\bar{\rho }(n)-\bar{\rho }(y_0 n(\log n)^{-2})\big )+2\bar{\rho }(n)(\log n)^{-1}\nonumber \\&\quad +\,\int _1^\infty x^{-2}(\bar{\rho }(nx)-\bar{\rho }(n))\mathrm{d}x+2\sup _{s\in [0,1]}(\bar{\rho }(en^s)-\bar{\rho }(e^{-1}n^s)),\nonumber \\ \end{aligned}$$
(24)

where \(y_0\in (0,1)\) is a constant which depends neither on n nor on \((p_k)_{k\in \mathbb {N}}\).

Proof

For \(k\in \mathbb {N}\), denote by \(\bar{Z}_{n,k}\) the number of balls falling in the kth box, so that

$$\begin{aligned} \bar{K}_n(s)=\sum _{k\in \mathbb {N}}{{\,\mathrm{\mathbb {1}}\,}}_{\{n^{1-s}\le \bar{Z}_{n,k}\le n\}},~s\in [0,1]. \end{aligned}$$

Then, for \(n\in \mathbb {N}\) and \(s\in [0,1]\),

$$\begin{aligned} |\bar{K}_n(1-s)-\bar{\rho }(n^{1-s})|&\le \sum _{{k\in \mathbb {N}}}{{\,\mathrm{\mathbb {1}}\,}}_{\{\bar{Z}_{n,k}\ge n^s,\, 1\le np_k\le n^s\}}+\sum _{{k\in \mathbb {N}}}{{\,\mathrm{\mathbb {1}}\,}}_{\{\bar{Z}_{n,k}\ge n^s,\, np_k<1\}}\\&\quad +\,\sum _{{k\in \mathbb {N}}}{{\,\mathrm{\mathbb {1}}\,}}_{\{\bar{Z}_{n,k}\le n^s,\, np_k\ge n^s\}}\\&=:A_n(s)+B_n(s)+C_n(s). \end{aligned}$$

In [2] it was shown that, for \(n\in \mathbb {N}\),

$$\begin{aligned} \mathbb {E}\sup _{s\in [0,1]}A_n(s)\le 2(\bar{\rho }(n)-\bar{\rho }(y_0 n(\log n)^{-2}))+\frac{\bar{\rho }(n)}{\log n}+ \sup _{s\in [0,1]}(\bar{\rho }(en^s)-\bar{\rho }(n^s)) \end{aligned}$$

(see [2], pp. 1004–1005) and

$$\begin{aligned} \mathbb {E}\sup _{s\in [0,1]}C_n(s)\le 2(\bar{\rho }(n)-\bar{\rho }(y_0 n(\log n)^{-2})) +\frac{\bar{\rho }(n)}{\log n}+\sup _{s\in [0,1]}(\bar{\rho }(n^s)-\bar{\rho }(e^{-1}n^s)) \end{aligned}$$

(see [2], p. 1006). Finally, for \(n\in \mathbb {N}\),

$$\begin{aligned} \mathbb {E}\sup _{s\in [0,1]}B_n(s)= & {} \mathbb {E}\sum _{k\in \mathbb {N}}{{\,\mathrm{\mathbb {1}}\,}}_{\{\bar{Z}_{n,k}\ge 1,\,np_k<1\}}=\sum _{k\in \mathbb {N}} (1-(1-p_k)^n){{\,\mathrm{\mathbb {1}}\,}}_{\{np_k<1\}}\\\le & {} \sum _{k\in \mathbb {N}} np_k{{\,\mathrm{\mathbb {1}}\,}}_{\{np_k<1\}}\\= & {} \int _{(1,\infty )}\frac{1}{x}\mathrm{d}(\bar{\rho }(nx)-\bar{\rho }(n))=\int _1^\infty \frac{\bar{\rho }(nx)-\bar{\rho }(n)}{x^2} \mathrm{d}x. \end{aligned}$$

Combining the estimates we arrive at (24) because

$$\begin{aligned} \sup _{s\in [0,1]}\big |\bar{K}_n(s)-\bar{\rho }(n^s)\big |=\sup _{s\in [0,1]}\big |\bar{K}_n(1-s)-\bar{\rho }(n^{1-s})\big |. \end{aligned}$$

\(\square \)

We next apply Proposition 3.6 to the setting of Theorem 2.1. This result shows that (10) is equivalent to the analogous limit relation with \(\rho _j(n^t)=N_j(t \log n)\) replacing \(K_{n,j}(t)\).

Proposition 3.7

Suppose (7) and (9). Then, for each \(j\in \mathbb {N}\),

$$\begin{aligned} \frac{\sup _{s\in [0,1]}\big |K_{n,j}(s)- \rho _j(n^s)|}{(\log n)^{\gamma +\omega (j-1)}}~\overset{\mathrm{P}}{\rightarrow }~ 0, \quad n\rightarrow \infty . \end{aligned}$$
(25)

Proof

Fix any \(j\in \mathbb {N}\). By Proposition 3.6, for \(n\in \mathbb {N}\),

$$\begin{aligned}&\mathbb {E}\left( \sup _{s\in [0,1]}\big |K_{n,j}(s)-\rho _j(n^s)\big |\Big |(P(v))_{v\in \mathcal {I}_j}\right) \nonumber \\&\quad \le 4\big (\rho _j(n)-\rho _j(y_0 n(\log n)^{-2})\big )+2\rho _j(n)(\log n)^{-1}\nonumber \\&\qquad +\,\int _1^\infty x^{-2}(\rho _j(nx)-\rho _j(n))\mathrm{d}x\nonumber \\&\qquad +\,2\sup _{s\in [0,1]}(\rho _j(en^s)-\rho _j(e^{-1}n^s)). \end{aligned}$$
(26)

Recall the notation

$$\begin{aligned} c_j=\frac{(c\Gamma (\omega +1))^j}{\Gamma (\omega j+1)},\quad j\in \mathbb {N}\end{aligned}$$

and our assumption \(\gamma >\omega -\min (1,\varepsilon _1,\varepsilon _2)\). In view of (14),

$$\begin{aligned} \frac{\mathbb {E}\rho _j(n)}{\log n}=\frac{V_j(\log n)}{\log n}~\sim ~ c_j(\log n)^{\omega j-1}=o((\log n)^{\gamma +\omega (j-1)}), \quad n\rightarrow \infty . \end{aligned}$$
(27)

The next step is to show that

$$\begin{aligned} \mathbb {E}\int _1^\infty x^{-2}(\rho _j(nx)-\rho _j(n)) \mathrm{d}x=o\left( (\log n)^{\gamma +\omega (j-1)}\right) ,\quad n\rightarrow \infty . \end{aligned}$$
(28)

As a preparation for the proof of (28) we first note that according to (15)

$$\begin{aligned}&\mathbb {E}(\rho _j(nx)-\rho _j(n))= V_j(\log n+\log x)-V_j(\log n)\nonumber \\&\quad \le c_j ({{\,\mathrm{\mathbb {1}}\,}}_{\{\omega j\in (0,1]\}} (\log x)^{\omega j}+{{\,\mathrm{\mathbb {1}}\,}}_{\{\omega j>1\}}\omega j (\log n+\log x)^{\omega j-1}\log x)\nonumber \\&\qquad +\,a_{0,j}+a_{1,j}(\log n+\log x)^{\omega j-\varepsilon _2}-b_{0,j}+|b_{1,j}|(\log n)^{\omega j-\varepsilon _1} \end{aligned}$$

for \(n\in \mathbb {N}\) and \(x\ge 1\). Further, using the inequality \((u+v)^\alpha \le (2^{\alpha -1}\vee 1)(u^\alpha +v^\alpha )\), which holds for \(\alpha >0\) and \(u,v\ge 0\), we obtain

$$\begin{aligned} \int _1^\infty x^{-2}(\log n+\log x)^{\omega j-\varepsilon _2}\mathrm{d}x = O((\log n)^{\omega j-\varepsilon _2}),\quad n\rightarrow \infty \end{aligned}$$

and

$$\begin{aligned} \int _1^\infty x^{-2}(\log n+\log x)^{\omega j-1}\mathrm{d}x=O((\log n)^{\omega j-1}),\quad n\rightarrow \infty , \end{aligned}$$

and (28) follows.

An appeal to (13) enables us to conclude that for large enough n

$$\begin{aligned}&\mathbb {E}(\rho _j(n)-\rho _j(y_0 n (\log n)^{-2}))\\&\quad =V_j(\log n)-V_j(\log n+\log y_0-2\log \log n)\\&\quad \le c_j (\log n)^{\omega j}\Big (1-\Big (1-\frac{2\log \log n-\log y_0}{\log n}\Big )^{\omega j}\Big )\\&\qquad +\,a_{0,j}+a_{1,j}(\log n)^{\omega j-\varepsilon _2}-b_{0,j}-b_{1,j}(\log n+\log y_0-2\log \log n)^{\omega j-\varepsilon _1}\\&\quad \le 4\omega j c_j (\log n)^{\omega j-1}\log \log n+a_{0,j}+a_{1,j}(\log n)^{\omega j-\varepsilon _2}\\&\qquad -\,b_{0,j}+|b_{1,j}|(\log n+\log y_0-2\log \log n)^{\omega j-\varepsilon _1}. \end{aligned}$$

Hence,

$$\begin{aligned} \mathbb {E}(\rho _j(n)-\rho _j(y_0 n (\log n)^{-2}))=o((\log n)^{\gamma +\omega (j-1)}),\quad n\rightarrow \infty \end{aligned}$$
(29)

by the same reasoning as above. Finally,

$$\begin{aligned}&\frac{\sup _{s\in [0,1]}(\rho _j(en^s)-\rho _j(e^{-1}n^s))}{(\log n)^{\gamma +\omega (j-1)}}\nonumber \\&\quad =\frac{\sup _{s\in [0,1]}(N_j(s\log n+1)-N_j(s\log n-1))}{(\log n)^{\gamma +\omega (j-1)}}~\overset{\mathrm{P}}{\rightarrow }~ 0,\quad n\rightarrow \infty \end{aligned}$$
(30)

by Corollary 3.3. Using (27)–(30) in combination with Markov’s inequality [applied to the first three terms on the right-hand side of (26)] shows that the left-hand side of (26) divided by \((\log n)^{\gamma +\omega (j-1)}\) converges to zero in probability as \(n\rightarrow \infty \). Now (25) follows by another application of Markov’s inequality and the dominated convergence theorem. \(\square \)

3.3 Proof of Proposition 3.4

We shall use an integral representation which has already appeared in the proof of Lemma 3.1(c):

$$\begin{aligned} Y_j(t)= & {} \sum _{k\in \mathbb {N}} V_{j-1}(t-T_k)-V_j(t)=\int _{[0,\,t]}V_{j-1}(t-y)\mathrm{d}(N_1(y)-V_1(y))\nonumber \\= & {} \int _{(0,\,t]}(N_1(y)-V_1(y))\mathrm{d}_y(-V_{j-1}(t-y)) \end{aligned}$$
(31)

for \(j\ge 2\) and \(t\ge 0\). Here, the last equality is obtained with the help of integration by parts.

In view of (12) Skorokhod’s representation theorem ensures that there exist versions \(\widehat{W}^{(t)}\) and \(\widehat{W}\) of \(\frac{N_1(t\cdot )-V_1(t\cdot )}{at^\gamma }\) and W, respectively, such that

$$\begin{aligned} \lim _{t\rightarrow \infty }\sup _{y\in [0,\,T]}\Big |\widehat{W}^{(t)}(y)-\widehat{W}(y)\Big |=0\quad \text {a.s.} \end{aligned}$$
(32)

for all \(T>0\). This implies that (22) is equivalent to

$$\begin{aligned} \left( \widehat{W}(\cdot ), \left( \frac{\widehat{Z}_j (t, \cdot )}{c_{j-1}t^{\omega (j-1)}}\right) _{j\ge 2}\right) \Rightarrow (R_j^{(\omega )}(\cdot ))_{j\in \mathbb {N}},\quad t\rightarrow \infty , \end{aligned}$$
(33)

where \(\widehat{Z}_j(t,x):=\int _{(0,\,x]}\widehat{W}(y)\mathrm{d}_y(-V_{j-1}(t(x-y)))\) for \(j\ge 2\) and \(t,x\ge 0\). As far as the first coordinate is concerned the equivalence is an immediate consequence of (32). As for the other coordinates, note that, for each \(t>0\), the process \((Y_j(t\cdot ))_{j\ge 2}\) has the same distribution as \(\big (\int _{(0,\,\cdot ]}\widehat{W}^{(t)}(y)\mathrm{d}_y(-V_{j-1}(t(\cdot -y)))\big )_{j\ge 2}\); then write, for fixed \(s>0\) and \(j\ge 2\),

$$\begin{aligned} \int _{[0,\,s]}\widehat{W}^{(t)}(y)\mathrm{d}_y\frac{-V_{j-1}(t(s-y))}{c_{j-1}t^{\omega (j-1)}}= & {} \int _{(0,\,s]}\big (\widehat{W}^{(t)}(y)-\widehat{W}(y)\big )\mathrm{d}_y\frac{-V_{j-1}(t(s-y))}{c_{j-1}t^{\omega (j-1)}}\\&+\,\int _{(0,\,s]}\widehat{W}(y)\mathrm{d}_y\frac{-V_{j-1}(t(s-y))}{c_{j-1}t^{\omega (j-1)}}. \end{aligned}$$

Denoting by L(t, s) the first term on the right-hand side, we infer, for all \(T>0\),

$$\begin{aligned}&\sup _{s\in [0,\,T]}|L(t,s)|\\&\quad \le \sup _{y\in [0,\,T]} \big |\widehat{W}^{(t)}(y)-\widehat{W}(y)\big | \big (\big (c_{j-1} t^{\omega (j-1)}\big )^{-1}V_{j-1}(Tt)\big )\rightarrow 0,\quad t\rightarrow \infty ~~\mathrm{a.s.} \end{aligned}$$

in view of (14) which implies that

$$\begin{aligned} \lim _{t\rightarrow \infty } \big (c_{j-1}t^{\omega (j-1)}\big )^{-1}V_{j-1}(Tt)=T^{\omega (j-1)} \end{aligned}$$
(34)

and (32).

For \(j\ge 2\) and \(t,x\ge 0\), set \(Z_j(t,x):=\int _{(0,\,x]}W(y)\mathrm{d}_y(-V_{j-1}(t(x-y)))\) and note that (33) is equivalent to

$$\begin{aligned} \left( W(\cdot ), \left( \frac{Z_j(t,\cdot )}{c_{j-1}t^{\omega (j-1)}}\right) _{j\ge 2}\right) ~\Rightarrow ~ (R_j^{(\omega )}(\cdot ))_{j\in \mathbb {N}},\quad t\rightarrow \infty \end{aligned}$$
(35)

because the left-hand sides of (33) and (35) have the same distribution.

It remains to check two properties:

(a)  weak convergence of finite-dimensional distributions, i.e. that for all \(n\in \mathbb {N}\), all \(0\le s_1<s_2<\cdots<s_n<\infty \) and all integers \(\ell \ge 2\)

$$\begin{aligned} \bigg (W(s_i), \bigg (\frac{Z_j (t,s_i)}{c_{j-1}t^{\omega (j-1)}}\bigg )_{2\le j\le \ell }\bigg )_{1\le i\le n}~\overset{\mathrm{d}}{\rightarrow }~ (R^{(\omega )}_j(s_i))_{1\le j\le \ell ,\,1\le i\le n} \end{aligned}$$
(36)

as \(t\rightarrow \infty \);

(b)  tightness of the distributions of coordinates in (35), excluding the first one.

Proof of (36)

If \(s_1=0\), we have \(W(s_1)=Z_j(t,s_1)=R_k^{(\omega )}(s_1)=0\) a.s. for \(j\ge 2\) and \(k\in \mathbb {N}\). Hence, in what follows we consider the case \(s_1>0\). Both the limit and the converging vectors in (36) are Gaussian. In view of this it suffices to prove that

$$\begin{aligned}&\lim _{t\rightarrow \infty } t^{-\omega (k+j-2)} \mathbb {E}[Z_k(t,s)Z_j(t,u)]=c_{k-1}c_{j-1}\mathbb {E}\left[ R^{(\omega )}_k(s)R^{(\omega )}_j(u)\right] \nonumber \\&\quad = {\left\{ \begin{array}{ll} c_{k-1}c_{j-1}\int _0^s\int _0^u r(s-y, u-z)\mathrm{d}y^{\omega (k-1)}\mathrm{d}z^{\omega (j-1)}, &{} \text {if} \ k,j\ge 2, \\ c_{j-1}\int _0^u r(s, u-z)\mathrm{d}z^{\omega (j-1)}, &{} \text {if} \ k=1, j\ge 2 \end{array}\right. }\nonumber \\ \end{aligned}$$
(37)

for \(k,j\in \mathbb {N}\), \(k+j\ge 3\) and \(s,u>0\), where we set \(Z_1(t,\cdot )=W(\cdot )\) and \(r(x,y):=\mathbb {E}[W(x)W(y)]\) for \(x,y\ge 0\). We only consider the case where \(k,j\ge 2\), the complementary case being similar and simpler.

To prove (37) we need some preparation. For each \(t>0\) denote by \(\theta _{k,t}\) and \(\theta _{j,t}\) independent random variables with the distribution functions \(\mathbb {P}\{\theta _{k,t}\le y\}=V_{k-1}(ty)/V_{k-1}(ts)\) on [0, s] and \(\mathbb {P}\{\theta _{j,t}\le y\}=V_{j-1}(ty)/V_{j-1}(tu)\) on [0, u], respectively. Further, let \(\theta _k\) and \(\theta _j\) denote independent random variables with the distribution functions \(\mathbb {P}\{\theta _k\le y\}=(y/s)^{\omega (k-1)}\) on [0, s] and \(\mathbb {P}\{\theta _j\le y\}=(y/u)^{\omega (j-1)}\) on [0, u], respectively. According to (14), \((\theta _{k,t}, \theta _{j,t})\overset{\mathrm{d}}{\rightarrow }(\theta _k, \theta _j)\) as \(t\rightarrow \infty \). Now observe that the function \(r(x,y)=\mathbb {E}[W(x)W(y)]\) is continuous, hence bounded, on \([0,T]\times [0,T]\) for every \(T>0\). This follows from the assumed a.s. continuity of W and the dominated convergence theorem in combination with \(\mathbb {E}[\sup _{z\in [0,\,T]}W(z)]^2<\infty \) for every \(T>0\) (for the latter, see Theorem 3.2 on p. 63 in [1]). As a result, \(r(s-\theta _{k,t}, u-\theta _{j,t})\overset{\mathrm{d}}{\rightarrow }r(s-\theta _k, u-\theta _j)\) as \(t\rightarrow \infty \) and thereupon

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathbb {E}r(s-\theta _{k,t}, u-\theta _{j,t})=\mathbb {E}r(s-\theta _k, u- \theta _j) \end{aligned}$$

by the dominated convergence theorem.

This together with (34) leads to formula (37):

$$\begin{aligned}&\mathbb {E}[t^{-\omega (k+j-2)}Z_k(t,s)Z_j(t,u)]\\&\quad =\frac{V_{k-1}(ts)}{t^{\omega (k-1)}}\frac{V_{j-1}(tu)}{t^{\omega (j-1)}} \int _0^s\int _0^u r(s-y, u-z)\mathrm{d}_y\left( \frac{V_{k-1}(ty)}{V_{k-1}(ts)}\right) \mathrm{d}_z\left( \frac{V_{j-1}(tz)}{V_{j-1}(tu)}\right) \\&\quad =\frac{V_{k-1}(ts)}{t^{\omega (k-1)}}\frac{V_{j-1}(tu)}{t^{\omega (j-1)}}\mathbb {E}r(s-\theta _{k,t}, u-\theta _{j,t})~ \\&\qquad \rightarrow ~c_{k-1}s^{\omega (k-1)}c_{j-1}u^{\omega (j-1)} \mathbb {E}r(s-\theta _k, u-\theta _j)\\&\quad =c_{k-1}c_{j-1}\int _0^s\int _0^u r(s-y, u-z)\mathrm{d}y^{\omega (k-1)}\mathrm{d}z^{\omega (j-1)} \end{aligned}$$

as \(t\rightarrow \infty \). \(\square \)

Proof of Tightness

Choose \(j\ge 2\). We intend to prove tightness of \((t^{-\omega (j-1)}Z_j (t,u))_{u\ge 0}\) on D[0, T] for all \(T>0\). Since the function \(t\mapsto t^{-\omega (j-1)}\) is regularly varying at \(\infty \) it is enough to investigate the case \(T=1\) only. By Theorem 15.5 in [9] it suffices to show that for any \(\kappa _1>0\) and \(\kappa _2>0\) there exist \(t_0>0\) and \(\delta >0\) such that

$$\begin{aligned} \mathbb {P}\left\{ \sup _{0\le u,v\le 1, |u-v|\le \delta }|Z_j(t,u)-Z_j(t, v)|>\kappa _1 t^{\omega (j-1)}\right\} \le \kappa _2 \end{aligned}$$
(38)

for all \(t\ge t_0\). We only analyze the case where \(0\le v<u\le 1\), the complementary case being analogous.

Set \(W(x)=0\) for \(x<0\). The basic observation for the subsequent proof is that (6) extends to

$$\begin{aligned} |W(x)-W(y)|\le M_T|x-y|^\beta \end{aligned}$$
(39)

whenever \(-\infty <x,y\le T\) for the same positive random variable \(M_T\) as in (6). This is trivial when \(x\vee y\le 0\) and a consequence of (6) when \(x\wedge y\ge 0\). Assume that \(x\wedge y\le 0<x\vee y\). Then \(|W(x)-W(y)|=|W(x\vee y)|\le M_T (x\vee y)^\beta \le M_T|x-y|^\beta \), where the first inequality follows from (6) with \(y=0\).

Let \(0\le v<u\le 1\) and \(u-v\le \delta \) for some \(\delta \in (0,1]\). Using (39) and (14) we obtain

$$\begin{aligned} t^{-\omega (j-1)}|Z_j(t, u)-Z_j(t,v)|= & {} t^{-\omega (j-1)}\bigg |\int _{[0,\, u)} \big (W(u-y)-W(v-y)\big )\mathrm{d}V_{j-1}(ty)\bigg |\\\le & {} M_1(u-v)^\beta (t^{-\omega (j-1)}V_{j-1}(t))\le M_1\delta ^\beta \lambda \end{aligned}$$

for large enough t and a positive constant \(\lambda \). This proves (38). \(\square \)

3.4 Proof of Proposition 3.5

Relation (23) will be proved by induction in three steps.

  • Step 1 To prove (23) with \(j=2\), use (42) below with \(k=1\), which is nothing other than (9), and repeat verbatim the proof of Step 3.

  • Step 2 Assume that (23) holds for \(j=2,\ldots , k\). We claim that then

    $$\begin{aligned} \bigg (\frac{N_j(t\cdot )-V_j(t\cdot )}{ac_{j-1}t^{\gamma +\omega (j-1)}}\bigg )_{j=1,\ldots ,k}~\Rightarrow ~(R^{(\omega )}_j(\cdot ))_{j=1,\ldots , k} \end{aligned}$$
    (40)

    in the \(J_1\)-topology on \(D^k\). Indeed, in view of (19) and the induction hypothesis relation (40) is equivalent to

    $$\begin{aligned} \bigg (\frac{N_1(t \cdot )-V_1(t \cdot )}{at^\gamma }, \bigg ( \frac{Y_j(t\cdot )}{ac_{j-1}t^{\gamma +\omega (j-1)}}\bigg )_{j=2,\ldots , k}\bigg )~\Rightarrow ~ (R^{(\omega )}_j(\cdot ))_{j=1,\ldots , k},\quad t\rightarrow \infty . \end{aligned}$$
    (41)

    The latter holds by Proposition 3.4.

  • Step 3 Using

    $$\begin{aligned} \frac{N_k(t\cdot )-V_k(t\cdot )}{ac_{k-1}t^{\gamma +\omega (k-1)}}~\Rightarrow ~R^{(\omega )}_k(\cdot ),\quad t\rightarrow \infty \end{aligned}$$
    (42)

    in the \(J_1\)-topology on D which is a consequence of (40) we shall prove that (23) holds with \(j=k+1\).

In view of (42) and the fact that \(R^{(\omega )}_k\) is a.s. continuous, Skorokhod’s representation theorem ensures that there exist a version \(\widehat{R}_k^{(\omega )}\) of \(R_k^{(\omega )}\) and, for each \(t>0\), a version \(\widehat{R}_k^{(t, \omega )}\) of the process on the left-hand side of (42) for which (42) holds locally uniformly a.s. We can assume that the probability space on which these versions are defined is rich enough to accommodate

  • \(\widehat{R}_k^{(t, \omega ,1)}\), \(\widehat{R}_k^{(t, \omega ,2)},\ldots \) which are independent copies of \(\widehat{R}_k^{(t, \omega )}\) for each \(t>0\);

  • \(\widehat{R}^{(\omega , 1)}_k\), \(\widehat{R}^{(\omega , 2)}_k,\ldots \) which are independent copies of \(\widehat{R}^{(\omega )}_k\);

  • random variables \(\widehat{T}_1\), \(\widehat{T}_2,\ldots \) which are versions of \(T_1\), \(T_2,\ldots \) independent of \((\widehat{R}_k^{(t, \omega ,1)}, \widehat{R}_k^{(\omega , 1)})\), \((\widehat{R}_k^{(t, \omega ,2)},\widehat{R}_k^{(\omega , 2)}),\ldots \)

Furthermore,

$$\begin{aligned} \lim _{t\rightarrow \infty }\sup _{y\in [0,\,T]}\big |\widehat{R}^{(t,\omega ,r)}_k(y)-\widehat{R}_k^{(\omega , r)}(y)\big |=0\quad \text {a.s.} \end{aligned}$$
(43)

for all \(T>0\) and \(r\in \mathbb {N}\).

For each \(t>0\), set

$$\begin{aligned} \widehat{X}_{k+1}^{(t)}(y):=\sum _{r\in \mathbb {N}}\widehat{R}_k^{(t, \omega , r)}(y-t^{-1}\widehat{T}_r){{\,\mathrm{\mathbb {1}}\,}}_{\{\widehat{T}_r\le ty\}},\quad y\ge 0. \end{aligned}$$

The process \(\widehat{X}_{k+1}^{(t)}(\cdot )\) has the same distribution as \(X_{k+1}(t\cdot )/(ac_{k-1}t^{\gamma +\omega (k-1)})\). Therefore, (23) with \(j=k+1\) is equivalent to

$$\begin{aligned} t^{-\omega }\sup _{y\in [0,\,T]}\widehat{X}_{k+1}^{(t)}(y)~\overset{\mathrm{P}}{\rightarrow }~0,\quad t\rightarrow \infty . \end{aligned}$$
(44)

To prove this, write

$$\begin{aligned} t^{-\omega }\widehat{X}_{k+1}^{(t)}(y)= & {} t^{-\omega }\sum _{r\in \mathbb {N}}\big (\widehat{R}_k^{(t, \omega , r)}(y-t^{-1}\widehat{T}_r)-\widehat{R}_k^{(\omega , r)}(y-t^{-1}\widehat{T}_r)\big ){{\,\mathrm{\mathbb {1}}\,}}_{\{\widehat{T}_r\le ty\}}\\&+\,t^{-\omega } \sum _{r\in \mathbb {N}}\widehat{R}_k^{(\omega , r)}(y-t^{-1}\widehat{T}_r){{\,\mathrm{\mathbb {1}}\,}}_{\{\widehat{T}_r\le ty\}}=:t^{-\omega }(\widehat{Z}_1(t,y)+\widehat{Z}_2(t,y)). \end{aligned}$$

For all \(T>0\),

$$\begin{aligned} \sup _{y\in [0,\,T]}|\widehat{Z}_1(t,y)| \le \sum _{r\in \mathbb {N}}\sup _{y\in [0,\,T]}\big |\widehat{R}_k^{(t,\omega , r)}(y)-\widehat{R}_k^{(\omega , r)}(y)\big |{{\,\mathrm{\mathbb {1}}\,}}_{\{\widehat{T}_r\le tT\}}. \end{aligned}$$
(45)

For \(r\in \mathbb {N}\), the random variables \(\eta _r(t):=\sup _{y\in [0,\,T]}\big |\widehat{R}_k^{(t, \omega , r)}(y)-\widehat{R}_k^{(\omega , r)}(y)\big |\) are i.i.d. and independent of \(\widehat{T}_1\), \(\widehat{T}_2,\ldots \). Furthermore,

$$\begin{aligned} \mathbb {E}[\eta _1(t)]^2&\le 2\left( \frac{\mathbb {E}[\sup _{s\in [0,\,Tt]}(N_k(s)-V_k(s))]^2}{(ac_{k-1}t^{\gamma +\omega (k-1)})^2}+\mathbb {E}\left[ \sup _{s\in [0,\,T]}R_k^{(\omega )}(s)\right] ^2 \right) \nonumber \\&=O(1),\quad t\rightarrow \infty \end{aligned}$$
(46)

in view of (17) and the well-known fact that the supremum over [0, T] of any a.s. continuous Gaussian process has an exponential tail. Since \(\lim _{t\rightarrow \infty }\eta _1(t)=0\) a.s., inequality (46) ensures that \(\lim _{t\rightarrow \infty }\mathbb {E}\eta _1(t)=0\). The right-hand side in (45) multiplied by \(t^{-\omega }\) is dominated by

$$\begin{aligned} t^{-\omega }\sum _{r\in \mathbb {N}}(\eta _r(t)-\mathbb {E}\eta _r(t)){{\,\mathrm{\mathbb {1}}\,}}_{\{\widehat{T}_r\le tT\}}+ t^{-\omega }\widehat{N}(tT)\mathbb {E}\eta _1(t), \end{aligned}$$

where \(\widehat{N}(t):=\#\{r\in \mathbb {N}{:}\,\widehat{T}_r\le t\}\). Using the last limit relation and (16) we conclude that the second summand converges to 0 a.s., as \(t\rightarrow \infty \). The first summand converges to zero in probability, as \(t\rightarrow \infty \), by Markov’s inequality in combination with \(t^{-2\omega }\mathbb {E}\Big (\sum _{r\in \mathbb {N}}(\eta _r(t)-\mathbb {E}\eta _r(t)){{\,\mathrm{\mathbb {1}}\,}}_{\{\widehat{T}_r\le t\}}\Big )^2=t^{-2\omega }V(t) \mathrm{Var}\,\eta _1(t)=O(t^{-\omega })\). Thus, for all \(T>0\), \(t^{-\omega }\sup _{y\in [0,\,T]}|\widehat{Z}_1(t,y)|\overset{\mathrm{P}}{\rightarrow }0\), as \(t\rightarrow \infty \).

The process \((\widehat{Z}_2(t,y))\) has the same distribution as the process \((Z_2(t,y))\) in which the random variables involved do not carry the hats, and \(R_k^{(\omega , 1)}\), \(R_k^{(\omega ,2)},\ldots \) are independent copies of \(R_k^{(\omega )}\) which are independent of \(T_1\), \(T_2,\ldots \) Thus, it suffices to prove that

$$\begin{aligned} t^{-\omega }\sup _{y\in [0,\,T]}|Z_2(t,y)|\overset{\mathrm{P}}{\rightarrow }0,\quad t\rightarrow \infty . \end{aligned}$$
(47)

In what follows we write \(\mathbb {E}_{(T_r)}(\cdot )\) for \(\mathbb {E}(\cdot |(T_r))\) and \(\mathbb {P}_{(T_r)}(\cdot )\) for \(\mathbb {P}(\cdot |(T_r))\). Note that \(\mathbb {E}_{(T_r)} [Z_2(t,y)]^2=(\mathbb {E}[R_k^{(\omega )}(1)]^2)\int _{[0,\,ty]}(y-t^{-1}x)^{2(\gamma +\omega (k-1))}\mathrm{d}N_1(x)\le (\mathbb {E}[R_k^{(\omega )}(1)]^2) y^{2(\gamma +\omega (k-1))}N_1(ty)\). Using now the Cramér–Wold device and Markov’s inequality in combination with (16) we infer that, given \((T_r)\), with probability one the finite-dimensional distributions of \((t^{-\omega }Z_2(t,y))_{y\ge 0}\) converge weakly to the zero vector, as \(t\rightarrow \infty \). Thus, (47) follows if we can show that the family of \(\mathbb {P}_{(T_r)}\)-distributions of \((t^{-\omega } Z_2(t,y))_{y\ge 0}\) is tight. As a preparation, we observe that the process \(R_k^{(\omega )}\) inherits the local Hölder continuity of W. Indeed, recalling (39) we obtain, for \(x,y\in [0,T]\) and \(k\ge 2\),

$$\begin{aligned} |R_k^{(\omega )}(x)-R_k^{(\omega )}(y)|\le & {} \omega (k-1)\int _0^{x\vee y}|W(x-z)-W(y-z)|z^{\omega (k-1)-1}\mathrm{d}z\nonumber \\\le & {} M_T T^{\omega (k-1)}|x-y|^\beta ~\text {a.s.} \end{aligned}$$
(48)

It is also important that the random variable \(M_T\) has finite moments of all positive orders, see Theorem 1 in [4]. Now pick an integer \(n\ge 2\) such that \(2n\beta >1\). By Rosenthal’s inequality (Theorem 3 in [40]), for \(x,y\in [0,\,T]\) and a positive constant \(C_n\) which depends neither on \((T_r)\) nor on t,

$$\begin{aligned}&\mathbb {E}_{(T_r)} (Z_2(t,x)-Z_2(t,y))^{2n}\\&\quad \le C_n\left( \left( \sum _{r\in \mathbb {N}}\mathbb {E}_{(T_r)}(R_k^{(\omega , r)}(x-t^{-1}T_r)-R_k^{(\omega , r)}(y-t^{-1}T_r))^2 {{\,\mathrm{\mathbb {1}}\,}}_{\{T_r\le tT\}}\right) ^n\right. \\&\left. \qquad +\,\sum _{r\in \mathbb {N}}\mathbb {E}_{(T_r)}(R_k^{(\omega , r)}(x-t^{-1}T_r)-R_k^{(\omega , r)}(y-t^{-1}T_r))^{2n} {{\,\mathrm{\mathbb {1}}\,}}_{\{T_r\le tT\}}\right) \\&\quad \le 2C_n T^{2n\omega (k-1)} (\mathbb {E}[M_T]^{2n}) |x-y|^{2n\beta }(N_1(tT))^n \end{aligned}$$

having utilized (48) for the second inequality. In view of (16), this entails that a classical sufficient condition for tightness (formula (12.51) on p. 95 in [9]) holds

$$\begin{aligned} t^{-2n\omega }\mathbb {E}_{(T_r)} (Z_2(t,x)-Z_2(t,y))^{2n}\le \theta _n|x-y|^{2n\beta }\quad \text {a.s.} \end{aligned}$$

for a positive random variable \(\theta _n\) and large enough t. Thus, we have proved that (47) holds conditionally on \((T_r)\), hence, also unconditionally.

4 The case of homogeneous residual allocation model

In this section we apply Theorem 2.1 to the case of the fragmentation law given by the homogeneous residual allocation model (1). Let \(B:=(B(s))_{s\ge 0}\) be a standard Brownian motion (BM) and for \(q\ge 0\) let

$$\begin{aligned} B_q(s):=\int _{[0,\,s]}(s-y)^q \mathrm{d}B(y), ~~s\ge 0. \end{aligned}$$

The process \(B_q:=(B_q(s))_{s\ge 0}\) is a centered Gaussian process called the fractionally integrated BM or the Riemann–Liouville process. Clearly \(B=B_0\), and for \(q\in \mathbb {N}\) the process can be obtained as a repeated integral of the BM. It is known that \(B_q\) is locally Hölder continuous with any exponent \(\beta <q+1/2\) [27].
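
A minimal simulation sketch (ours) of \(B_q\), discretising the stochastic integral through the BM increments on a uniform grid:

import numpy as np

def riemann_liouville(q: float, T: float, n: int, rng=np.random.default_rng()):
    """Approximate B_q(s) = int_0^s (s-y)^q dB(y) on the grid s_i = iT/n."""
    s = np.linspace(0.0, T, n + 1)
    dB = rng.normal(scale=np.sqrt(T / n), size=n)        # increments of B
    path = np.zeros(n + 1)
    for i in range(1, n + 1):
        path[i] = np.sum((s[i] - s[:i]) ** q * dB[:i])   # left-point sums
    return s, path

s, path = riemann_liouville(q=1.0, T=1.0, n=2000)        # q = 1: integrated BM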

Theorem 4.1

Let \((P_k)_{k\in \mathbb {N}}\) be given by (1) with iid \(U_i\)’s such that

$$\begin{aligned} \mu :=\mathbb {E}|\log U_1|<\infty , ~\sigma ^2:=\mathrm{Var}( \log U_1)\in (0,\infty ) \end{aligned}$$

and \(\mathbb {E}|\log (1-U_1)|<\infty \). Then

$$\begin{aligned} \left( \frac{(j-1)!(K_{n,j}(\cdot )-(j!)^{-1}(\mu ^{-1}\log n(\cdot ))^j)}{\sqrt{\sigma ^2\mu ^{-2j-1}(\log n)^{2j-1}}}\right) _{j\in \mathbb {N}}~\Rightarrow ~ (B_{j-1}(\cdot ))_{j\in \mathbb {N}},\quad n\rightarrow \infty \end{aligned}$$

in the product \(J_1\)-topology on \(D[0,1]^\mathbb {N}\).

Proof

Let \((\xi _k, \eta _k)_{k\in \mathbb {N}}\) be independent copies of a random vector \((\xi , \eta )\) with positive arbitrarily dependent components. Denote by \((S_k)_{k\in \mathbb {N}_0}\) the zero-delayed ordinary random walk with increments \(\xi _k\), that is, \(S_0:=0\) and \(S_k:=\xi _1+\cdots +\xi _k\) for \(k\in \mathbb {N}\). Consider a perturbed random walk

$$\begin{aligned} \tilde{T}_k:=S_{k-1}+\eta _k,\quad k\in \mathbb {N}\end{aligned}$$
(49)

and then define \(\tilde{N}(t):=\#\{k\in \mathbb {N}{:}\,\tilde{T}_k\le t\}\) and \(\tilde{V}(t):=\mathbb {E}\tilde{N}(t)\) for \(t\ge 0\). It is clear that

$$\begin{aligned} \tilde{V}(t)=\mathbb {E}U((t-\eta )^+)=\int _{[0,\,t]}U(t-y)\mathrm{d}\tilde{G}(y),\quad t\ge 0 \end{aligned}$$
(50)

where, for \(t\ge 0\), \(U(t):=\sum _{k\ge 0}\mathbb {P}\{S_k\le t\}\) is the renewal function and \(\tilde{G}(t)=\mathbb {P}\{\eta \le t\}\).

For \(P_k\) written as (1), \(T_k=-\log P_k\) becomes

$$\begin{aligned} T_k=|\log U_1|+\cdots +|\log U_{k-1}|+|\log (1-U_k)|,\quad k\in \mathbb {N}\end{aligned}$$

which is a particular case of (49) with \((\xi , \eta )=(|\log U_1|, |\log (1-U_1)|)\). In view of this and Lemma 4.2 given below, the conditions of Theorem 2.1 hold with \(\omega =\varepsilon _1=\varepsilon _2=1\), \(\gamma =1/2\), \(c=\mu ^{-1}\), \(W=B\) and \(R_j=B_{j-1}\). \(\square \)
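
For illustration, the counting function \(\tilde{N}(t)\) of this particular perturbed random walk is easy to sample directly; a sketch (ours), with \(\xi =|\log U|\) and \(\eta =|\log (1-U)|\) for \(U\sim \hbox {beta}(\theta ,1)\):

import numpy as np

def perturbed_walk_count(t: float, theta: float, rng=np.random.default_rng()):
    """Sample N~(t) = #{k : S_{k-1} + eta_k <= t} for the walk (49)."""
    count, S = 0, 0.0                 # S holds S_{k-1}, starting from S_0 = 0
    while S <= t:                     # once S_{k-1} > t no further index qualifies
        u = rng.beta(theta, 1.0)
        if S - np.log1p(-u) <= t:     # eta_k = |log(1 - U_k)|
            count += 1
        S -= np.log(u)                # add xi_k = |log U_k|
    return count

print(perturbed_walk_count(t=50.0, theta=1.0))   # about t/mu = theta*t on average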

Lemma 4.2

Assume that \(\mathtt{m}:=\mathbb {E}\xi <\infty \), \(\mathtt{s}^2:=\mathrm{Var}\, \xi \in (0,\infty )\) and \(\mathbb {E}\eta <\infty \). Then

  1. (a)
    $$\begin{aligned} b_1 \le \tilde{V}(t)-\mathtt{m}^{-1}t\le a_0,\quad t\ge 0 \end{aligned}$$
    (51)

    for some constants \(b_1<0\) and \(a_0>0\). Also,

    $$\begin{aligned} \frac{\tilde{N}(t\cdot )-\mathtt{m}^{-1}(t\cdot )}{(\mathtt{s}^2 \mathtt{m}^{-3} t)^{1/2}}~\Rightarrow ~ B(\cdot ),\quad t\rightarrow \infty \end{aligned}$$

    in the \(J_1\)-topology on D.

  2. (b)

    \(\mathbb {E}[\sup _{s\in [0,\,t]}(\tilde{N}(s)-\tilde{V}(s))^2]=O(t)\) as \(t\rightarrow \infty \).

Proof

(a) A standard result of renewal theory tells us that

$$\begin{aligned} 0\le U(t)-\mathtt{m}^{-1} t\le a_0, \end{aligned}$$
(52)

where \(a_0\) is a known positive constant. The second inequality in combination with \(\tilde{V}(t)\le U(t)\) proves the second inequality in (51). Using the first inequality in (52) yields

$$\begin{aligned} \tilde{V}(t)-\mathtt{m}^{-1}t= & {} \int _{[0,\,t]}(U(t-y)-\mathtt{m}^{-1}(t-y))\mathrm{d}\tilde{G}(y)\\&-\,\mathtt{m}^{-1} \int _0^t (1-\tilde{G}(y))\mathrm{d}y\ge -\mathtt{m}^{-1} \int _0^t (1-\tilde{G}(y))\mathrm{d}y\ge -\mathtt{m}^{-1}\mathbb {E}\eta . \end{aligned}$$

For a proof of weak convergence, see Theorem 3.2 in [2].

(b) We shall use the decomposition

$$\begin{aligned} \tilde{N}(t)-\tilde{V}(t)=\sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le t\}}-\tilde{G}(t-S_r))+\int _{[0,\,t]}\tilde{G}(t-x)\mathrm{d}(\nu (x)-U(x)), \end{aligned}$$

where \(\nu (x):=\#\{r\in \mathbb {N}_0{:}\,S_r\le x\}\) for \(x\ge 0\), so that \(U(x)=\mathbb {E}\nu (x)\). It suffices to prove that

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\in [0,\,t]}\left( \sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le s\}}-\tilde{G}(s-S_r))\right) ^2\right] =O(t),\quad t\rightarrow \infty \end{aligned}$$
(53)

and

$$\begin{aligned} D(t):=\mathbb {E}\left[ \sup _{s\in [0,\,t]}\left( \int _{[0,\,s]}\tilde{G}(s-x)\mathrm{d}(\nu (x)-U(x))\right) ^2\right] =O(t),\quad t\rightarrow \infty . \end{aligned}$$
(54)

Proof of (53)

For each \(j\in \mathbb {N}\), we write

$$\begin{aligned} \sup _{s\in [j,\, j+1)} \sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le s\}}-\tilde{G}(s-S_r))\le & {} \sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le j+1\}}-\tilde{G}(j+1-S_r))\\&+\,\sum _{r\ge 0}(\tilde{G}(j+1-S_r)-\tilde{G}(j-S_r)). \end{aligned}$$

Similarly,

$$\begin{aligned} \sup _{s\in [j,\, j+1)} \sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le s\}}-\tilde{G}(s-S_r))\ge & {} \sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le j\}}-\tilde{G}(j-S_r))\\&-\,\sum _{r\ge 0}(\tilde{G}(j+1-S_r)-\tilde{G}(j-S_r)). \end{aligned}$$

Thus, (53) is a consequence of

$$\begin{aligned} \sum _{j=0}^{\lceil t\rceil +1}\mathbb {E}\left[ \sum _{r\ge 0}({{\,\mathrm{\mathbb {1}}\,}}_{\{S_r+\eta _{r+1}\le j\}}-\tilde{G}(j-S_r))\right] ^2=O(t),\quad t\rightarrow \infty \end{aligned}$$
(55)

and

$$\begin{aligned} \sum _{j=0}^{\lceil t\rceil +1}\mathbb {E}\left[ \sum _{r\ge 0}(\tilde{G}(j+1-S_r)-\tilde{G}(j-S_r))\right] ^2=O(t),\quad t\rightarrow \infty . \end{aligned}$$
(56)

The second moment in (55) is equal to \(\int _{[0,\,j]}\tilde{G}(j-x)(1-\tilde{G}(j-x))\mathrm{d}U(x)\le \int _{[0,\,j]}(1-\tilde{G}(j-x))\mathrm{d}U(x)\). In view of \(\mathbb {E}\eta <\infty \), the function \(x\mapsto 1-\tilde{G}(x)\) is directly Riemann integrable on \([0,\infty )\). According to Lemma 6.2.8 in [28] this implies that the right-hand side of the last inequality is O(1), as \(j\rightarrow \infty \), thereby proving (55).

Further, set \(K(j):=\int _{[0,\,j]}(\tilde{G}(j+1-x)-\tilde{G}(j-x))\mathrm{d}\nu (x)\) for \(j\in \mathbb {N}_0\). Then

$$\begin{aligned} \mathbb {E}\left[ \sum _{r\ge 0}(\tilde{G}(j+1-S_r)-\tilde{G}(j-S_r))\right] ^2\le & {} 2(\mathbb {E}[K(j)]^2+\mathbb {E}[\nu (j+1)-\nu (j)]^2)\\\le & {} 2(\mathbb {E}[K(j)]^2+\mathbb {E}[\nu (1)]^2), \end{aligned}$$

where the last inequality is a consequence of the distributional subadditivity of \(\nu \), that is, \(\mathbb {P}\{\nu (t+s)-\nu (s)>x\}\le \mathbb {P}\{\nu (t)>x\}\) for \(t,s,x\ge 0\). Recall that \(\nu (1)\) has finite exponential moments, so that trivially \(\mathbb {E}[\nu (1)]^2<\infty \). It remains to estimate \(\mathbb {E}[K(j)]^2\), and we infer

$$\begin{aligned} \mathbb {E}[K(j)]^2= & {} \mathbb {E}\left[ \tilde{G}(j+1)-\tilde{G}(j)+\sum _{k=0}^{j-1}\int _{[k,\,k+1)}(\tilde{G}(j+1-x)-\tilde{G}(j-x))\mathrm{d}\nu (x)\right] ^2 \\\le & {} \mathbb {E}\left[ 1 +\sum _{k=0}^{j-1}(\tilde{G}(j+1-k)-\tilde{G}(j-k))(\nu (k+1)-\nu (k))\right] ^2\\\le & {} 2\left( 1+ (\tilde{G}(j) +\tilde{G}(j+1)-\tilde{G}(1))^2\sum _{k=0}^{j-1}\frac{\tilde{G}(j+1-k)-\tilde{G}(j-k)}{\tilde{G}(j)+\tilde{G}(j+1)-\tilde{G}(1)}\,\mathbb {E}[\nu (k+1)-\nu (k)]^2\right) \\\le & {} 2(1+(\tilde{G}(j) +\tilde{G}(j+1)-\tilde{G}(1))^2\mathbb {E}[\nu (1)]^2)\le 2(1+4\mathbb {E}[\nu (1)]^2). \end{aligned}$$

Here, the second inequality is implied by convexity of \(x\mapsto x^2\) and Jensen’s inequality in the form \((\sum _{k=0}^{j-1}p_{j,\,k}x_k)^2\le \sum _{k=0}^{j-1}p_{j,\,k}x_k^2\), where \(p_{j,\,k}:=(\tilde{G}(j+1-k)-\tilde{G}(j-k))/ (\tilde{G}(j)+\tilde{G}(j+1)-\tilde{G}(1))\) and \(x_k:=\nu (k+1)-\nu (k)\). Note that the \(p_{j,\,k}\) satisfy \(\sum _{k=0}^{j-1}p_{j,\,k}\le 1\) (the numerators telescope to \(\tilde{G}(j+1)-\tilde{G}(1)\)), which suffices for the displayed form of Jensen’s inequality. Combining the obtained estimates we arrive at (56). \(\square \)

Proof of (54)

Assuming that

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\in [0,\,t]}(\nu (s)-U(s))^2\right] =O(t), \end{aligned}$$
(57)

integration by parts in (54) yields

$$\begin{aligned} D(t)= & {} \mathbb {E}\left[ \sup _{s\in [0,\,t]}\left( \int _{[0,\,s]}(\nu (s-x)-U(s-x))\mathrm{d}\tilde{G}(x)\right) ^2\right] \\\le & {} (\tilde{G}(t))^2 \mathbb {E}\left[ \sup _{s\in [0,\,t]}(\nu (s)-U(s))^2\right] =O(t) \end{aligned}$$

which proves (54).

Passing to the proof of (57) we first observe that in view of (52) relation (57) is equivalent to

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\in [0,\,t]}(\nu (s)-\mathtt{m}^{-1}s)^2\right] =O(t),\quad t\rightarrow \infty . \end{aligned}$$
(58)

Since \(s\mapsto \nu (s)-\mathtt{m}^{-1}s\) is a (random) piecewise linear function with slope \(-\mathtt{m}^{-1}\) having unit jumps at times \(S_0\), \(S_1,\ldots \) we conclude that

$$\begin{aligned} \sup _{s\in [0,\,t]}(\nu (s)-\mathtt{m}^{-1}s)^2\le & {} \max \left( \max _{0\le k\le \nu (t)}(k-\mathtt{m}^{-1}S_k)^2, \max _{0\le k\le \nu (t)-1}(k+1-\mathtt{m}^{-1}S_k)^2\right) \\\le & {} 2\left( 1+\max _{0\le k\le \nu (t)}(k-\mathtt{m}^{-1}S_k)^2\right) . \end{aligned}$$

Applying Doob’s inequality to the martingale \((S_{\nu (t)\wedge n}-\mathtt{m}(\nu (t)\wedge n))_{n\in \mathbb {N}_0}\) (this is a martingale with respect to the filtration generated by the \(\xi _k\) because \(\nu (t)\) is a stopping time with respect to the same filtration) we obtain

$$\begin{aligned} \mathbb {E}[\max _{0\le k\le \nu (t)\wedge n}(S_k-\mathtt{m}k)^2]= & {} \mathbb {E}\left[ \max _{0\le k\le n}(S_{\nu (t)\wedge k}-\mathtt{m}(\nu (t)\wedge k))^2\right] \\\le & {} 4\mathbb {E}[S_{\nu (t)\wedge n}-\mathtt{m}(\nu (t)\wedge n)]^2=4\mathtt{s}^2 \mathbb {E}[\nu (t)\wedge n] \end{aligned}$$

for each \(n\in \mathbb {N}\). Here, the last equality is nothing but Wald's identity. Letting \(n\rightarrow \infty \) and applying the monotone convergence theorem yields

$$\begin{aligned} \mathbb {E}\left[ \max _{0\le k\le \nu (t)}(S_k-\mathtt{m} k)^2\right] \le 4\mathtt{s}^2 U(t). \end{aligned}$$

In view of (52) the right-hand side is O(t), as \(t\rightarrow \infty \), and (58) follows. \(\square \)

Recall that \((P_k)_{k\in \mathbb {N}}\) follows the \({\mathrm{GEM}}\) distribution with parameter \(\theta >0\) when the \(U_i\)’s in (1) are beta distributed with parameters \(\theta \) and 1, in which case \(\mu =\mathbb {E}|\log U_1|=\theta ^{-1}, \sigma ^2=\mathrm{Var}(\log U_1)=\theta ^{-2}\) and \(\mathbb {E}|\log (1-U_1)|=\theta \sum _{n\ge 1}n^{-1}(n+\theta )^{-1}<\infty \).
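As a quick numerical sanity check (a simulation sketch, not part of the proofs), these three quantities are easy to confirm: \(|\log U_1|\) is exponentially distributed with parameter \(\theta \), which yields the first two, and the third can be matched against the series. The Python snippet below assumes only NumPy; all names in it are ours.

import numpy as np

rng = np.random.default_rng(2024)
theta = 2.0
u = rng.beta(theta, 1.0, size=10**6)            # U_1 ~ beta(theta, 1)

print(np.mean(-np.log(u)), 1.0/theta)           # mu = E|log U_1| = 1/theta
print(np.var(np.log(u)), theta**(-2))           # sigma^2 = Var(log U_1) = 1/theta^2

# E|log(1 - U_1)| against the (truncated) series theta * sum 1/(n(n + theta))
series = theta * sum(1.0/(n*(n + theta)) for n in range(1, 10**6))
print(np.mean(-np.log1p(-u)), series)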

Corollary 4.3

For \(\theta >0\) let \((P_k)_{k\in \mathbb {N}}\) be \({\mathrm{GEM}}\)-distributed with parameter \(\theta \), or any random sequence such that the sequence of \(P_k\)’s arranged in decreasing order follows the \({\mathrm{PD}}\) distribution with parameter \(\theta \). Then

$$\begin{aligned} \bigg (\frac{(j-1)!(K_{n,j}(\cdot )-(j!)^{-1}(\theta \log n(\cdot ))^j)}{\sqrt{(\theta \log n)^{2j-1}}}\bigg )_{j\in \mathbb {N}}~\Rightarrow ~ (B_{j-1}(\cdot ))_{j\in \mathbb {N}},\quad n\rightarrow \infty \end{aligned}$$
(59)

in the product \(J_1\)-topology on \(D[0,1]^\mathbb {N}\).
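For \(j=1\), evaluated at the terminal time, (59) specializes to the classical central limit theorem \((K_{n,1}-\theta \log n)/\sqrt{\theta \log n}\Rightarrow B_0(1)\) with a standard normal limit. Assuming, as the notation suggests, that \(K_{n,1}\) evaluated at 1 is the total number of occupied first-level boxes, this can be checked by simulation via the classical representation of the number of blocks of the Ewens partition as a sum of independent Bernoulli variables with success probabilities \(\theta /(\theta +i)\), \(i=0,\ldots ,n-1\). The sketch below is illustrative only; the convergence is logarithmic, so the match is rough.

import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.5, 10**5, 300

# number of occupied boxes = sum of independent Bernoulli(theta/(theta + i)), i = 0..n-1
p = theta / (theta + np.arange(n))
K = np.array([(rng.random(n) < p).sum() for _ in range(reps)])

z = (K - theta*np.log(n)) / np.sqrt(theta*np.log(n))
print(z.mean(), z.std())    # roughly 0 and 1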

5 Some regenerative models

For \((X(t))_{t\ge 0}\) a drift-free subordinator with \(X(0)=0\) and a nonzero Lévy measure \(\nu \) supported by \((0,\infty )\) let

$$\begin{aligned} \Delta X(t)=X(t)-X(t-), ~t\ge 0, \end{aligned}$$

be the associated process of jumps. The process \(\Delta X(\cdot )\) assumes nonzero values on a countable set, which is dense in case \(\nu (0,\infty )=\infty \). The transformed process (multiplicative subordinator) \(F(t)= 1-e^{-X(t)},\,t\ge 0,\) has the associated process of jumps

$$\begin{aligned} \Delta F(t)= e^{-X(t-)}(1-e^{-\Delta X(t)}), ~t\ge 0. \end{aligned}$$

In this section we identify the fragmentation law \((P_k)_{k\in \mathbb {N}}\) with the nonzero jumps \(\Delta F(\cdot )\) arranged in some order (for instance, in decreasing order). Note that multiplying the Lévy measure by a positive factor corresponds to a time-change of F and hence does not affect the derived fragmentation law.
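Before imposing assumptions on \(\nu \), here is a small finite-activity toy illustration of the jump transformation (Python; the compound Poisson choice \(\nu (\mathrm{d}x)=e^{-x}\mathrm{d}x\) is ours and falls outside the standing assumption, made next, that \(\nu \) is infinite). The jumps \(\Delta F\) telescope, so over a long horizon they sum to \(1-e^{-X(T)}\approx 1\).

import numpy as np

rng = np.random.default_rng(1)
T = 200.0                                   # time horizon

# toy subordinator: compound Poisson, nu(dx) = e^{-x} dx (unit rate, Exp(1) jumps)
n = rng.poisson(T)
dX = rng.exponential(1.0, n)                # jump sizes Delta X, in time order
X_left = np.concatenate(([0.0], np.cumsum(dX)[:-1]))   # X(t-) just before each jump

dF = np.exp(-X_left) * (1.0 - np.exp(-dX))  # Delta F at each jump
P = np.sort(dF)[::-1]                       # fragmentation law, ranked
print(P.sum(), 1.0 - np.exp(-dX.sum()))     # both equal 1 - e^{-X(T)}, close to 1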

We shall assume that the Lévy measure \(\nu \) is infinite and has the right tail \(\nu ([x,\infty ))\) satisfying

$$\begin{aligned} \beta _0+\beta _1 |\log x|^{q-r_2} \le \nu ([x,\infty ))-c_0|\log x|^q \le \alpha _0+\alpha _1 |\log x|^{q-r_1} \end{aligned}$$
(60)

for small enough \(x>0\) and some \(c_0, \alpha _0, \alpha _1>0\), \(q\ge 1\), \(1\le r_1, r_2\le q\) and \(\beta _0, \beta _1<0\).

Theorem 5.1

Assume that (60) holds and

$$\begin{aligned} m:=\mathbb {E}X(1)=\int _{[0,\infty )}x \nu (\mathrm{d}x)<\infty ,~~ s^2:=\mathrm{Var}\,X(1)=\int _{[0,\infty )}x^2\nu (\mathrm{d}x)<\infty . \end{aligned}$$

Then

$$\begin{aligned}&\bigg (\frac{K_{n,j}(\cdot )-c_j^*(\log n(\cdot ))^{(q+1)j}}{q\mathrm{B}(q,(q+1)j-q)sm^{-3/2}c_{j-1}^*(\log n)^{(q+1)j-1/2}}\bigg )_{j\in \mathbb {N}} \\&\quad \Rightarrow ~ (B_{(q+1)j-1}(\cdot ))_{j\in \mathbb {N}}, \quad n\rightarrow \infty \end{aligned}$$

in the product \(J_1\)-topology on \(D[0,1]^\mathbb {N}\), where

$$\begin{aligned} c_j^*:=\Big (\frac{c_0\Gamma (q+2)}{m(q+1)}\Big )^j \frac{1}{\Gamma ((q+1)j+1)},\quad j\in \mathbb {N}_0. \end{aligned}$$

Theorem 5.1 applies to the gamma subordinator with the Lévy measure

$$\begin{aligned} \nu (\mathrm{d}x)=\theta x^{-1}e^{-\lambda x}{{\,\mathrm{\mathbb {1}}\,}}_{(0,\infty )}(x)\mathrm{d}x \end{aligned}$$

and to the subordinator with

$$\begin{aligned} \nu (\mathrm{d}x)=\theta (1-e^{-x})^{-1}e^{-\lambda x}{{\,\mathrm{\mathbb {1}}\,}}_{(0,\infty )}(x)\mathrm{d}x, \end{aligned}$$
(61)

where \(\theta ,\lambda >0\). In both cases \(s^2<\infty \) and (60) holds with \(c_0=\theta \) and \(q=r_1=r_2=1\). Let \(X(\cdot )\) be a subordinator with Lévy measure (61). We note in passing that \(\int _0^\infty \exp (-X(t))\mathrm{d}t\) is the weak limit of the total tree length, properly normalized, of a beta \((2,\lambda )\) coalescent, see Section 5 in [33] or Table 3 in the survey [23]. Also, the image of \(\nu \) given in (61) under the transformation \(x\mapsto 1-e^{-x}\) yields a particular instance of the driving measure for a beta process, see formula (4) in [11].
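The fact that (60) holds for the gamma subordinator can also be seen numerically: \(\nu ([x,\infty ))=\theta E_1(\lambda x)\), where \(E_1\) is the exponential integral, and \(E_1(z)=-\log z-\gamma +O(z)\) as \(z\rightarrow 0+\) (with \(\gamma \) Euler's constant), so \(\nu ([x,\infty ))-\theta |\log x|\) converges to the constant \(-\theta (\log \lambda +\gamma )\), and both bounds in (60) indeed hold with \(q=r_1=r_2=1\). A short illustrative check (Python with SciPy):

import numpy as np
from scipy.special import exp1

theta, lam = 2.0, 3.0
limit = -theta * (np.log(lam) + np.euler_gamma)
for x in [1e-2, 1e-4, 1e-8, 1e-12]:
    tail = theta * exp1(lam * x)               # nu([x, inf)) for the gamma subordinator
    print(x, tail - theta * abs(np.log(x)), limit)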

Theorem 5.1 is a consequence of Theorem 2.1, the easily checked formula

$$\begin{aligned} \int _{[0,\,u]}(u-y)^\alpha \mathrm{d}B_q(y)=q\mathrm{B}(q,\alpha +1)\int _{[0,\,u]}(u-y)^{q+\alpha }\mathrm{d}B(y),\quad u\ge 0,~\alpha ,q>0 \end{aligned}$$

which we use for \(\alpha =(q+1)(j-1)\), and the next lemma.
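For the reader's convenience we indicate how the formula is checked, under the convention that \(B_q(u)=\int _{[0,\,u]}(u-y)^{q}\mathrm{d}B(y)\) (this is our reading of the definition given earlier in the paper; if the definition used there differs by a multiplicative constant, the computation adapts verbatim). Integrating by parts and then applying the stochastic Fubini theorem,

$$\begin{aligned} \int _{[0,\,u]}(u-y)^\alpha \mathrm{d}B_q(y)= & {} \alpha \int _0^u (u-y)^{\alpha -1}B_q(y)\mathrm{d}y=\int _{[0,\,u]}\Big (\alpha \int _x^u (u-y)^{\alpha -1}(y-x)^{q}\mathrm{d}y\Big )\mathrm{d}B(x)\\= & {} \alpha \mathrm{B}(q+1,\alpha )\int _{[0,\,u]}(u-x)^{q+\alpha }\mathrm{d}B(x), \end{aligned}$$

and \(\alpha \mathrm{B}(q+1,\alpha )=q\mathrm{B}(q,\alpha +1)\) by the recursion \(\Gamma (x+1)=x\Gamma (x)\).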

Lemma 5.2

Assume that (60) holds and \(s^2<\infty \). Then the following is true:

  1. (a)
    $$\begin{aligned} b_0+b_1t^{q-r_2+1}\le V(t)- c_0(m(q+1))^{-1}t^{q+1}\le a_0+a_1 t^q,\quad t>0 \end{aligned}$$
    (62)

    for some constants \(a_0\), \(a_1>0\) and \(b_0\), \(b_1\le 0\), where \(m=\mathbb {E}X(1)<\infty \);

  2. (b)
    $$\begin{aligned} \frac{N(t\cdot )-c_0(m(q+1))^{-1}(t\cdot )^{q+1}}{s m^{-3/2}t^{q+1/2}}~\Rightarrow ~ B_q(\cdot ),\quad t\rightarrow \infty \end{aligned}$$

    in the \(J_1\)-topology on D;

  3. (c)

    \(\mathbb {E}\sup _{s\in [0,\,t]}(N(s)-V(s))^2=O(t^{2q+1})\), as \(t\rightarrow \infty \).

Proof

(a) Set \(f(x):=\nu ([-\log (1-e^{-x}), \infty ))\) for \(x\ge 0\). Inequality (60) in combination with \(\lim _{x\rightarrow \infty }\nu ([x,\infty ))=0\) entails

$$\begin{aligned} \beta _0+\beta _1 x^{q-r_2} \le f(x)-c_0x^q \le \alpha _0+\alpha _1 x^{q-r_1} \end{aligned}$$
(63)

for all \(x>0\) and some constants \(\alpha _0\), \(\alpha _1\), \(\beta _0\) and \(\beta _1\) which are not necessarily the same as in (60).

Since

$$\begin{aligned} N(t)=\sum {{\,\mathrm{\mathbb {1}}\,}}_{\{X(s-)-\log (1-e^{-\Delta X(s)})\le t\}}=\sum {{\,\mathrm{\mathbb {1}}\,}}_{\{\Delta X(s)\ge -\log (1-e^{-(t-X(s-))})\}}, \end{aligned}$$

where the summation extends to all \(s>0\) with \(\Delta X(s)>0\), we conclude that \(V(x)=\mathbb {E}N(x)=\int _{[0,\,x]}f(x-y)\mathrm{d}U^*(y)\), where \(U^*(x):=\int _0^\infty \mathbb {P}\{X(t)\le x\}\mathrm{d}t=\mathbb {E}T(x)\) is the renewal function and \(T(x):=\inf \{t>0{:}\,X(t)>x\}\) for \(x\ge 0\).
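Indeed, writing \(g(x):=-\log (1-e^{-x})\), the compensation formula for the Poisson point process of jumps \((\Delta X(s))_{s>0}\) gives

$$\begin{aligned} V(t)=\mathbb {E}\sum {{\,\mathrm{\mathbb {1}}\,}}_{\{\Delta X(s)\ge g(t-X(s-)),\,X(s-)\le t\}}= & {} \mathbb {E}\int _0^\infty f(t-X(s)){{\,\mathrm{\mathbb {1}}\,}}_{\{X(s)\le t\}}\mathrm{d}s\\= & {} \int _{[0,\,t]}f(t-y)\mathrm{d}U^*(y), \end{aligned}$$

where the last equality follows from Fubini's theorem and the definition of \(U^*\).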

Similarly to (52) we have

$$\begin{aligned} 0\le U^*(t)-m^{-1} t\le a^*_0,\quad t\ge 0, \end{aligned}$$
(64)

where \(a^*_0\) is a known positive constant. Using this and (63) we infer

$$\begin{aligned}&V(t)-c_0(m(q+1))^{-1}t^{q+1} \\&\quad =\int _{[0,\,t]}(U^*(t-y)-m^{-1}(t-y))\mathrm{d}f(y)\\&\qquad +\,m^{-1}\int _0^t (f(y)-c_0y ^q) \mathrm{d}y\le a^*_0 f(t)+m^{-1}\int _0^t (\alpha _0+\alpha _1 y^{q-r_1})\mathrm{d}y\\&\quad \le a^*_0(\alpha _0+\alpha _1 t^{q-r_1}+c_0t^q)+m^{-1}(\alpha _0 t+\alpha _1(q-r_1+1)^{-1}t^{q-r_1+1}). \end{aligned}$$

This proves the second inequality in (62). Arguing analogously we obtain

$$\begin{aligned} V(t)-c_0(m(q+1))^{-1}t^{q+1}\ge & {} m^{-1}\int _0^t (f(y)-c_0y ^q) \mathrm{d}y\ge m^{-1}\int _0^t (\beta _0+\beta _1 y^{q-r_2})\mathrm{d}y\\= & {} m^{-1}(\beta _0 t+ \beta _1 (q-r_2+1)^{-1}t^{q-r_2+1}), \end{aligned}$$

thereby proving the first inequality in (62).

  1. (b)

    Write

    $$\begin{aligned} N(t)= & {} \sum \big ( {{\,\mathrm{\mathbb {1}}\,}}_{\{\Delta X(s)\ge -\log (1-e^{-(t-X(s-))})\}}-f(t-X(s-))\big ){{\,\mathrm{\mathbb {1}}\,}}_{\{X(s-)\le t\}}\nonumber \\&+\,\sum f(t-X(s-)){{\,\mathrm{\mathbb {1}}\,}}_{\{X(s-)\le t\}}=:\mathcal {N}_1(t)+\mathcal {N}_2(t). \end{aligned}$$
    (65)

    As a preparation for the proof of part (b) we intend to show that

    $$\begin{aligned} \lim _{t\rightarrow \infty }t^{-q-1/2} \mathcal {N}_1(t)=0\quad \text {a.s.} \end{aligned}$$
    (66)

Proof of (66)

To reduce technicalities to a minimum we only consider the case \(q>1\). Since \(\mathbb {E}[\mathcal {N}_1(t)]^2= V(t)\) and \(V(t)\sim c_0(m(q+1))^{-1}t^{q+1}\) as \(t\rightarrow \infty \), Markov's inequality yields \(\mathbb {P}\{|\mathcal {N}_1(\ell )|>\varepsilon \ell ^{q+1/2}\}\le \varepsilon ^{-2}\ell ^{-(2q+1)}V(\ell )=O(\ell ^{-q})\) for every \(\varepsilon >0\), and the right-hand side is summable in \(\ell \) precisely because \(q>1\). Hence we conclude that

$$\begin{aligned} \lim _{\mathbb {N}\ni \ell \rightarrow \infty }\ell ^{-(q+1/2)}\mathcal {N}_1(\ell )=0\quad \text {a.s.} \end{aligned}$$

by the Borel–Cantelli lemma. For each \(t\ge 0\), there exists \(\ell \in \mathbb {N}_0\) such that \(t\in [\ell , \ell +1)\). Now we use a.s. monotonicity of N(t) and \(\mathcal {N}_2(t)\) to obtain

$$\begin{aligned}&(\ell +1)^{-(q+1/2)}(\mathcal {N}_1(\ell )-(\mathcal {N}_2(\ell +1)-\mathcal {N}_2(\ell )))\le t^{-(q+1/2)}\mathcal {N}_1(t)\\&\quad \le \ell ^{-(q+1/2)}(\mathcal {N}_1(\ell +1)+\mathcal {N}_2(\ell +1)-\mathcal {N}_2(\ell ))\quad \text {a.s.} \end{aligned}$$

Thus, it remains to check that

$$\begin{aligned} \lim _{\ell \rightarrow \infty } \ell ^{-(q+1/2)}(\mathcal {N}_2(\ell +1)-\mathcal {N}_2(\ell ))=0\quad \text {a.s.} \end{aligned}$$

In view of (63), f satisfies a counterpart of (15), whence

$$\begin{aligned} \mathcal {N}_2(\ell +1)-\mathcal {N}_2(\ell )= & {} \int _{[0,\,\ell ]} (f(\ell +1-y)-f(\ell -y))\mathrm{d}T(y)\nonumber \\&+\,\int _{(\ell ,\, \ell +1]}f(\ell +1-y)\mathrm{d}T(y)\nonumber \\\le & {} (c_0 q(\ell +1)^{q-1}+\alpha _0+\alpha _1 (\ell +1)^{q-r_1}-\beta _0\nonumber \\&+\,|\beta _1|\ell ^{q-r_2}+f(1))T(\ell +1)\nonumber \\= & {} O(\ell ^q) \end{aligned}$$
(67)

a.s. as \(\ell \rightarrow \infty \). For the last equality we have used the strong law of large numbers for \(T\).

We are ready to prove part (b). We shall use representation (65). Relation (66) entails

$$\begin{aligned} t^{-q-1/2}\sup _{y\in [0,\,T]}\mathcal {N}_1(ty)~\overset{\mathrm{P}}{\rightarrow }~0,\quad t\rightarrow \infty \end{aligned}$$
(68)

for each \(T>0\). Thus, we are left with showing that

$$\begin{aligned} \frac{\mathcal {N}_2(t\cdot )-c_0(m(q+1))^{-1}(t\cdot )^{q+1}}{s m^{-3/2}t^{q+1/2}}~\Rightarrow ~ B_q(\cdot ),\quad t\rightarrow \infty \end{aligned}$$

in the \(J_1\)-topology on D. The proof of this is similar to that of weak convergence of the jth coordinate, \(j\ge 2\), in (22). The only difference is that, instead of (12), we use

$$\begin{aligned} \frac{T(t\cdot )-m^{-1}(t\cdot )}{s m^{-3/2}t^{1/2}}~\Rightarrow ~ B(\cdot ),\quad t\rightarrow \infty \end{aligned}$$

in the \(J_1\)-topology on D, where B is a Brownian motion, see Theorem 2a in [10].

(c) Since the proof is analogous to that of Lemma 4.2(b) we only give a sketch. In view of (65) it suffices to show that, as \(t\rightarrow \infty \),

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}\left( \sum \left( {{\,\mathrm{\mathbb {1}}\,}}_{\{\Delta X(v)\ge -\log (1-e^{-(s-X(v-))})\}}-f(s-X(v-))\right) {{\,\mathrm{\mathbb {1}}\,}}_{\{X(v-)\le s\}}\right) ^2\right] \nonumber \\&\quad =O(t^{2q+1}) \end{aligned}$$
(69)

and

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\in [0,\,t]}\left( \int _{[0,\,s]}f(s-x)\mathrm{d}(T(x)-U^*(x))\right) ^2\right] =O(t^{2q+1}). \end{aligned}$$
(70)

\(\square \)

Proof of (69)

Arguing as in the proof of Lemma 4.2(b) we conclude that (69) is a consequence of

$$\begin{aligned} \sum _{\ell =0}^{\lceil t\rceil +1}\mathbb {E}\Big [\sum \big ( {{\,\mathrm{\mathbb {1}}\,}}_{\{\Delta X(v)\ge -\log (1-e^{-(\ell -X(v-))})\}}-f(\ell -X(v-))\big ){{\,\mathrm{\mathbb {1}}\,}}_{\{X(v-)\le \ell \}}\Big ]^2=O(t^{2q+1}) \end{aligned}$$
(71)

and

$$\begin{aligned} \sum _{\ell =0}^{\lceil t \rceil +1}\mathbb {E}[\mathcal {N}_2(\ell +1)-\mathcal {N}_2(\ell )]^2=O(t^{2q+1}). \end{aligned}$$
(72)

The second moment in (71) is equal to \(V(\ell )=O(\ell ^{q+1})\). This entails that the left-hand side of (71) is \(O(t^{q+2})\), hence \(O(t^{2q+1})\) because of the assumption \(q\ge 1\). Finally, since \(r_1, r_2\ge 1\) by assumption and \(\mathbb {E}[T(\ell )]^2=O(\ell ^2)\), inequality (67) entails \(\mathbb {E}[\mathcal {N}_2(\ell +1)-\mathcal {N}_2(\ell )]^2=O(\ell ^{2q})\), and (72) follows. \(\square \)

Proof of (70)

Set \(\hat{\nu }(x):=\inf \{k\in \mathbb {N}{:}\,X(k)>x\}\) for \(x\ge 0\). Since \(T(x)\le \hat{\nu }(x)\le T(x)+1\) a.s. and, according to (57), \(\mathbb {E}[\sup _{s\in [0,\,t]}(\hat{\nu }(s)-\mathbb {E}\hat{\nu }(s))^2]=O(t)\) as \(t\rightarrow \infty \), we infer \(\mathbb {E}[\sup _{s\in [0,\,t]}(T(s)-U^*(s))^2]=O(t)\) as \(t\rightarrow \infty \). With this at hand, relation (70) readily follows. \(\square \)

6 The Poisson–Kingman model

Let \((X(t))_{t\ge 0}\) be a subordinator as in Sect. 5, with the only differences being that the parameters in (60) now satisfy \(q\in (0,2)\) and \(q/2<r_1, r_2\le q\), and that we additionally assume

$$\begin{aligned} \int _{(1,\infty )}(\log x)^s\nu (\mathrm{d}x)<\infty , \end{aligned}$$
(73)

where \(s=2q\) when \(q\in (0,3/2)\) and \(s=\varepsilon +q/(2-q)\) for some \(\varepsilon >0\) when \(q\in [3/2, 2)\).

The ranked sequence of jumps of the process \((X(t)/X(1))_{t\in [0,1]}\) can be represented as \(P_j:=L_j/L>0\), where \(L_1\ge L_2\ge \cdots \) is the sequence of atoms of a non-homogeneous Poisson random measure with mean measure \(\nu \), and \(L:=\sum _{j\ge 1} L_j{\mathop {=}\limits ^{\mathrm{d}}}X(1)\). This is the Poisson–Kingman construction [34, Section 3] of the probabilities \((P_j)_{j\in \mathbb {N}}\), which we regard as the fragmentation law underlying a nested occupancy scheme.
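A simulation sketch of this construction may be helpful (Python with SciPy; the choice of the gamma Lévy measure and the \(\varepsilon \)-truncation of small atoms are our illustrative assumptions, the truncated atoms carrying negligible total mass):

import numpy as np
from scipy.special import exp1

rng = np.random.default_rng(6)
theta, lam, eps = 1.0, 1.0, 1e-10     # gamma Lévy measure; atoms below eps are dropped

tail = lambda x: theta * exp1(lam * x)            # nu([x, inf))
n = rng.poisson(tail(eps))                        # number of atoms of size >= eps

# atom sizes by (approximate) inversion of the conditional distribution on a log-grid
grid = np.logspace(np.log10(eps), 3.0, 5000)
cdf = 1.0 - tail(grid) / tail(eps)                # CDF of an atom, given its size >= eps
atoms = np.interp(rng.uniform(size=n), cdf, grid)

L_sorted = np.sort(atoms)[::-1]                   # L_1 >= L_2 >= ...
L = L_sorted.sum()                                # approximately X(1) ~ gamma(theta, lam)
P = L_sorted / L                                  # Poisson-Kingman probabilities

t0 = 12.0
print(np.sum(P >= np.exp(-t0)), theta * t0)       # one sample of N(t0) against c_0*t0**q, q = 1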

Theorem 6.1

Assume that the function \(x\mapsto \nu ((x,\infty ))\) is strictly decreasing and continuous on \((0,\infty )\). For the fragmentation law described above, limit relation (10) holds with \(\omega =q\), \(\gamma =q/2\), \(c=c_0\), \(a=c_0^{1/2}\) and \(W(s):=B(s^q)\) for \(s\ge 0\), a time-changed Brownian motion.

Theorem 6.1 is a consequence of Theorem 2.1 and Lemma 6.2 given next.

Lemma 6.2

Under the assumptions of Theorem 6.1 the following is true:

  1. (a)
    $$\begin{aligned} \beta _2+\beta _3 t^{q-r_4}\le V(t)- c_0 t^q \le \alpha _2+\alpha _3 t^{q-r_3},\quad t>0 \end{aligned}$$
    (74)

    for some constants \(\alpha _2,\alpha _3>0\), \(q\in (0,2)\), \(q/2<r_3, r_4\le q\) and \(\beta _2\), \(\beta _3<0\).

  2. (b)
    $$\begin{aligned} \mathbb {E}\sup _{s\in [0,\,t]}(N(s)-V(s))^2=O(t^q),\quad t\rightarrow \infty . \end{aligned}$$
  3. (c)
    $$\begin{aligned} \frac{N(t\cdot )-c_0(t\cdot )^q}{(c_0 t^q)^{1/2}}~\Rightarrow ~ W(\cdot ),\quad t\rightarrow \infty \end{aligned}$$

    in the \(J_1\)-topology on D, where \(W(s)=B(s^q)\) for \(s\ge 0\).

Proof

For \(t\in \mathbb {R}\), set \(\widehat{N} (t):=\#\{k\in \mathbb {N}{:}\,L_k\ge e^{-t}\}\) so that \(N(t)=\#\{k\in \mathbb {N}{:}\,L_k/L\ge e^{-t}\}=\widehat{N}(t-\log L)\). Note that \(N(t)=0\) for \(t<0\). Further, put \(m(t):=\nu ((e^{-t},\infty ))\) for \(t\in \mathbb {R}\) and note that m is a strictly increasing and continuous function with \(m(-\infty )=0\). In view of (60)

$$\begin{aligned} \beta _0+\beta _1 t^{q-r_2} \le m(t)-c_0 t^q \le \alpha _0+\alpha _1 t^{q-r_1} \end{aligned}$$
(75)

for \(t\ge 0\), where \(\alpha _0,\alpha _1>0\), \(q\in (0,2)\), \(q/2<r_1, r_2\le q\) and \(\beta _0, \beta _1<0\). Later, we shall need the following consequences of (75):

$$\begin{aligned} m(t)~\sim ~ c_0 t^q,\quad t\rightarrow \infty \end{aligned}$$
(76)

and

$$\begin{aligned} \lim _{t\rightarrow \infty }\sup _{s\in [0,\,s_0]}\Big |\frac{m(ts)}{c_0t^q}-s^q\Big |=0 \end{aligned}$$
(77)

for all \(s_0>0\). For the latter we have also used Dini’s theorem.

The random process \((\widehat{N} (t))_{t\in \mathbb {R}}\) is non-homogeneous Poisson. In particular, \(\widehat{N} (t)\) has a Poisson distribution of mean m(t). Let \(\mathcal {P}:=(\mathcal {P}(t))_{t\ge 0}\) denote a homogeneous Poisson process of unit intensity. Throughout the proof we use the representation \((\widehat{N}(t))_{t\in \mathbb {R}}=(\mathcal {P}(m(t)))_{t\in \mathbb {R}}\), which gives us a transition from \(\mathcal {P}\) to \(\widehat{N}\). The converse transition, namely that the arrival times of \(\mathcal {P}\) are \(m(-\log L_1)\), \(m(-\log L_2),\ldots \), is secured by our assumption that m is strictly increasing and continuous (this assumption is not needed to guarantee the direct transition).
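The two transitions are easy to realize in simulation; the sketch below (Python; the concrete choice \(m(t)=c_0t^q\) on \([0,\infty )\) is a toy stand-in for the true \(m\)) recovers the atoms from the arrival times of \(\mathcal {P}\) and confirms \(\widehat{N}(t)=\mathcal {P}(m(t))\).

import numpy as np

rng = np.random.default_rng(9)
c0, q = 2.0, 1.3
m = lambda t: c0 * t**q                            # toy mean function, increasing from 0
m_inv = lambda y: (y / c0) ** (1.0 / q)

arrivals = np.cumsum(rng.exponential(1.0, 2000))   # arrival times of unit-rate P
neg_log_L = m_inv(arrivals)                        # m(-log L_k) = k-th arrival time
L = np.exp(-neg_log_L)                             # atom sizes L_1 > L_2 > ...

t0 = 5.0
print(np.sum(neg_log_L <= t0), m(t0))              # N_hat(t0) = P(m(t0)): Poisson, mean m(t0)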

  1. (a)

    Write

    $$\begin{aligned} N(t)-\widehat{N}(t)&=(\widehat{N}(t-\log L)-\widehat{N}(t)){{\,\mathrm{\mathbb {1}}\,}}_{\{L\le 1\}}- (\widehat{N}(t)-\widehat{N}(t-\log L)){{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\\&=:\mathcal {N}_1(t)-\mathcal {N}_2(t) \end{aligned}$$

and observe that

$$\begin{aligned} \mathcal {N}_1(t)\le & {} (\widehat{N}(t-\log L_1)-\widehat{N}(t)){{\,\mathrm{\mathbb {1}}\,}}_{\{L_1\le 1\}}\le (1+\mathcal {P}^*(m(t-\log L_1)-m(-\log L_1))\nonumber \\&-\,\mathcal {P}^*(m(t)-m(-\log L_1))){{\,\mathrm{\mathbb {1}}\,}}_{\{L_1\le 1\}}, \end{aligned}$$
(78)

where \(\mathcal {P}^*:=(\mathcal {P}^*(t))_{t\ge 0}\) is a homogeneous Poisson process of unit intensity which is independent of \(L_1\). More precisely, the arrival times of \(\mathcal {P}^*\) are \(m(-\log L_2)-m(-\log L_1)\), \(m(-\log L_3)-m(-\log L_1),\ldots \). For later use we note that

$$\begin{aligned}&((\mathcal {P}^*(m(t-\log L_1)-m(-\log L_1))-\mathcal {P}^*(m(t)-m(-\log L_1))){{\,\mathrm{\mathbb {1}}\,}}_{\{e^{-t}\le L_1\le 1\}})_{t\ge 0}\nonumber \\&\quad \overset{\mathrm{d}}{=}~((\mathcal {P}^*(m(t-\log L_1))-\mathcal {P}^*(m(t))){{\,\mathrm{\mathbb {1}}\,}}_{\{e^{-t}\le L_1\le 1\}})_{t\ge 0}, \end{aligned}$$
(79)

where \(\overset{\mathrm{d}}{=}\) means that the distributions of the processes are the same. Inequality (78) entails

$$\begin{aligned} \mathbb {E}[\mathcal {N}_1(t)]\le & {} \mathbb {E}(1+m(t-\log L_1)-m(t\vee (-\log L_1))){{\,\mathrm{\mathbb {1}}\,}}_{\{L_1\le 1\}}\\\le & {} \mathbb {E}(1+m(t-\log L_1)-m(t)){{\,\mathrm{\mathbb {1}}\,}}_{\{L_1\le 1\}}. \end{aligned}$$

In view of (75) for \(t,x\ge 0\)

$$\begin{aligned} m(t+x)-m(t)\le & {} c_0 ((t+x)^q-t^q)+\alpha _0+\alpha _1(x+t)^{q-r_1}+|\beta _0|+|\beta _1|t^{q-r_2}\nonumber \\\le & {} c_0(x^q {{\,\mathrm{\mathbb {1}}\,}}_{\{q\in (0,1]\}}+q(x^{q-1}+t^{q-1})x{{\,\mathrm{\mathbb {1}}\,}}_{\{q\in (1,2)\}})\nonumber \\&+\,\alpha _0+\alpha _1(x^{q-r_1}+t^{q-r_1})+|\beta _0|+|\beta _1|t^{q-r_2}. \end{aligned}$$
(80)

We have used subadditivity of \(x\mapsto x^\kappa \) on \(\mathbb {R}_+:=[0,\infty )\) when \(\kappa \in (0,1]\) and the mean value theorem for differentiable functions to obtain \((t+x)^\kappa -t^\kappa \le \kappa x(t+x)^{\kappa -1}\) when \(\kappa >1\). We infer

$$\begin{aligned} \mathbb {E}[\log _-L_1]^\alpha <\infty \quad \text {for any}~\alpha >0 \end{aligned}$$
(81)

as a consequence of \(\int _1^\infty y^{\alpha -1}\mathbb {P}\{-\log L_1>y\}\mathrm{d}y=\int _1^\infty y^{\alpha -1}e^{-m(y)}\mathrm{d}y<\infty \), where the finiteness is justified by (75). Here, as usual, \(\log _-x=(-\log x)\vee 0\) for \(x\ge 0\). Hence,

$$\begin{aligned} \mathbb {E}[\mathcal {N}_1(t)]\le & {} 1+c_0\big (\mathbb {E}[\log _-L_1]^q {{\,\mathrm{\mathbb {1}}\,}}_{\{q\in (0,1]\}}+q(\mathbb {E}[\log _-L_1]^q +t^{q-1}\mathbb {E}[\log _-L_1]){{\,\mathrm{\mathbb {1}}\,}}_{\{q\in (1,2)\}}\big )\\&+\,\alpha _0+\alpha _1(\mathbb {E}[\log _-L_1]^{q-r_1}+t^{q-r_1})+|\beta _0|+|\beta _1|t^{q-r_2}. \end{aligned}$$

Thus, the right-hand inequality in part (a) holds with \(r_3=r_1\wedge r_2\) when \(q\in (0,1]\) and \(r_3=r_1\wedge r_2\wedge 1\) when \(q\in (1,2)\).

To analyse \(\mathcal {N}_2(t)\), set \(\theta :=q\) if \(q\in (0,1]\) and \(\theta :=q/(2-q)\) if \(q\in (1,2)\) and then pick \(\varepsilon >0\) such that \(\theta +\varepsilon \le 2q\) when \(q\in (0,3/2)\) and take the same \(\varepsilon \) as in (73) when \(q\in [3/2, 2)\). Further, choose \(\delta \in (0, 1-(q\vee 1)/2)\) and \(\varrho _1>1\) sufficiently close to one to ensure that \(r_5:=(\theta +\varepsilon )\delta /\varrho _1>q/2\). Put \(\varrho _2:=\varrho _1/(\varrho _1-1)\). It holds that

$$\begin{aligned} \mathcal {N}_2(t)= & {} (\widehat{N}(t)-\widehat{N}(t-\log L)){{\,\mathrm{\mathbb {1}}\,}}_{\{1<L\le \exp (t^\delta )\}}+(\widehat{N}(t)-\widehat{N}(t-\log L)){{\,\mathrm{\mathbb {1}}\,}}_{\{L>\exp (t^\delta )\}}\nonumber \\\le & {} (\widehat{N}(t)-\widehat{N}(t-t^\delta ))+\widehat{N}(t){{\,\mathrm{\mathbb {1}}\,}}_{\{L>\exp (t^\delta )\}}. \end{aligned}$$
(82)

Condition (73) ensures that \(\mathbb {E}[\log _+ L]^{\theta +\varepsilon }<\infty \) by Theorem 25.3 in [41]. Here, \(\log _+x=(\log x)\vee 0\) for \(x\ge 0\). A combination of Hölder’s and Markov’s inequalities yields

$$\begin{aligned} \mathbb {E}[\widehat{N}(t){{\,\mathrm{\mathbb {1}}\,}}_{\{L>\exp (t^\delta )\}}]\le & {} (\mathbb {E}[\widehat{N}(t)]^{\varrho _2})^{1/\varrho _2}(\mathbb {P}\{\log L>t^\delta \})^{1/\varrho _1}\\\le & {} (\mathbb {E}[\widehat{N}(t)]^{\varrho _2})^{1/\varrho _2}(\mathbb {E}[\log _+ L]^{\theta +\varepsilon })^{1/\varrho _1} t^{-(\theta +\varepsilon )\delta /\varrho _1}. \end{aligned}$$

Since \(\widehat{N}(t)\) has a Poisson distribution of mean m(t), and m(t) satisfies (76), the right-hand side does not exceed \(\alpha _5+\alpha _4 t^{q-r_5}\) for \(t\ge 0\) and some \(\alpha _4, \alpha _5>0\).

Further, using (75) we obtain for \(t\ge 0\)

$$\begin{aligned} \mathbb {E}[\widehat{N}(t)-\widehat{N}(t-t^\delta )]= & {} m(t)-m(t-t^\delta )\le c_0 (t^{\delta q}{{\,\mathrm{\mathbb {1}}\,}}_{\{q\in (0,1]\}} +qt^{q-1+\delta }{{\,\mathrm{\mathbb {1}}\,}}_{\{q\in (1,2)\}})\nonumber \\&+\,\alpha _0+\alpha _1 t^{q-r_1} +|\beta _0|+|\beta _1|t^{q-r_2}\le \alpha _7+ \alpha _6 t^{q-r_6}. \end{aligned}$$
(83)

Note that \(r_6\) satisfies \(r_6>q/2\) because \(\delta <1-(q\vee 1)/2\). We have proved the left-hand inequality in part (a) with \(r_4:=r_5\wedge r_6\).

  1. (b)

    Having written

    $$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}(N(s)-V(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\right] \\&\quad \le 3\left( \mathbb {E}\left[ \sup _{s\in [0,\,t]}(\mathcal {P}(m(s-\log L))-m(s-\log L))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\right] \right. \\&\left. \quad +\,\mathbb {E}\left[ \sup _{s\in [0,\,t]}(m(s-\log L)-m(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\right] +\sup _{s\in [0,\,t]}(m(s)-V(s))^2\right) , \end{aligned}$$

we intend to show that each of the three terms on the right-hand side is \(O(t^q)\).

1st summand. Recall that \((\mathcal {P}(t)-t)_{t\ge 0}\) is a martingale with respect to the natural filtration. Using

$$\begin{aligned}&\sup _{s\in [0,\,t]}(\mathcal {P}(m(s-\log L))-m(s-\log L))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\le \sup _{s\in (-\infty ,\, t]}(\mathcal {P}(m(s))-m(s))^2\\&\quad \le \sup _{s\in [0,\,m(t)]}(\mathcal {P}(s)-s)^2 \end{aligned}$$

and then invoking Doob’s inequality we obtain

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}(\mathcal {P}(m(s-\log L))-m(s-\log L))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\right] \\&\quad \le \mathbb {E}\left[ \sup _{s\in [0,\,m(t)]}(\mathcal {P}(s)-s)^2\right] \\&\quad \le 4\mathbb {E}[\mathcal {P}(m(t))-m(t)]^2= 4m(t)=O(t^q). \end{aligned}$$

2nd summand. The following inequalities hold:

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}(m(s)-m(s-\log L))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L>1\}}\right] \nonumber \\&\quad \le (m(t)-m(0))^2\mathbb {P}\{\log L>t\}+\mathbb {E}\left[ \sup _{s\in [0,\,t-\log L]}(m(s+\log L)-m(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{0<\log L\le t\}}\right] \nonumber \\&\quad \le (m(t)-m(0))^2\mathbb {P}\{\log L>t\}+\mathbb {E}\left[ \sup _{s\in [0,\,t]}(m(s+\log L)-m(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{\log L>0\}}\right] . \end{aligned}$$
(84)

Note that (73) entails \(\mathbb {E}[\log _+L]^{2q}<\infty \). Thus, the first summand on the right-hand side of (84) is O(1) by (76) and Markov’s inequality. Using (80) in combination with \(\mathbb {E}[\log _+L]^{2q}<\infty \) we conclude that the second summand on the right-hand side of (84) is \(O(t^q)\).

3rd summand. Appealing to (74) and (75) yields \(\sup _{s\in [0,\,t]}(m(s)-V(s))^2\le \sup _{s\in [0,\,t]}(C_1+C_2s^{q-r})^2=O(t^{2q-2r})\) for appropriate constants \(C_1\), \(C_2\) and some \(r\) with \(q/2<r\le q\). Since \(2q-2r<q\), the latter ensures that \(\sup _{s\in [0,\,t]}(m(s)-V(s))^2=O(t^q)\).

To deal with the expectation in question on the event \(\{L\le 1\}\) we write

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}(N(s)-V(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L\le 1\}}\right] \\&\quad \le 3\left( \mathbb {E}\left[ \sup _{s\in [0,\,t]}(\mathcal {P}(m(s-\log L))-\mathcal {P}(m(s)))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L\le 1\}}\right] \right. \\&\left. \qquad +\,\mathbb {E}\left[ \sup _{s\in [0,\,t]}(\mathcal {P}(m(s))-m(s))^2\right] +\sup _{s\in [0,\,t]}(m(s)-V(s))^2\right) . \end{aligned}$$

We already know from the previous part of the proof that the second and the third summands on the right-hand side are \(O(t^q)\). As for the first summand, we use (78) and (79) to obtain

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [0,\,t]}(\mathcal {P}(m(s-\log L))-\mathcal {P}(m(s)))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{L\le 1\}}\right] \\&\quad \le \mathbb {E}[ (1+\mathcal {P}^*(m(t-\log L_1)-m(-\log L_1)))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{-\log L_1>t\}}]\\&\qquad +\,\mathbb {E}\left[ \sup _{s\in [-\log L_1,\,t]}(1+\mathcal {P}^*(m(s-\log L_1))-\mathcal {P}(m(s)))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{0\le -\log L_1\le t\}}\right] . \end{aligned}$$

The principal asymptotic term of the first summand is \(\mathbb {E}[(m(t-\log L_1)-m(t))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{-\log L_1>t\}}]\). Invoking (80) and (81) we infer that the last expression is o(1). To estimate the second summand we write

$$\begin{aligned}&\mathbb {E}\left[ \sup _{s\in [-\log L_1,\,t]}(\mathcal {P}^*(m(s-\log L_1))-\mathcal {P}(m(s)))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{0\le -\log L_1\le t\}}\right] \\&\quad \le 3\left( \mathbb {E}\left[ \sup _{s\in [-\log L_1,\,t]}(\mathcal {P}^*(m(s-\log L_1))-m(s-\log L_1))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{0\le -\log L_1\le t\}}\right] \right. \\&\left. \qquad +\,\mathbb {E}\left[ \sup _{s\in [0,\,t]}(\mathcal {P}^*(m(s))-m(s))^2\right] \right. \\&\left. \qquad +\,\mathbb {E}\left[ \sup _{s\in [0,\,t]}(m(s-\log L_1)-m(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{-\log L_1\ge 0\}}\right] \right) \\&\quad \le 3\left( 2\mathbb {E}\left[ \sup _{s\in [0,\,2t]}(\mathcal {P}^*(m(s))-m(s))^2\right] \right. \\&\left. \qquad +\,\mathbb {E}\left[ \sup _{s\in [0,\,t]}(m(s-\log L_1)-m(s))^2{{\,\mathrm{\mathbb {1}}\,}}_{\{-\log L_1\ge 0\}}\right] \right) . \end{aligned}$$

The last expression is \(O(t^q)\) which can be seen by mimicking the arguments used in the previous part of the proof.

(c) A specialization of the functional limit theorem for renewal processes with finite variance (see, for instance, Theorem 3.1 on p. 162 in [26]) yields

$$\begin{aligned} \frac{\mathcal {P}(t\cdot )-(t\cdot )}{t^{1/2}}~\Rightarrow ~ B(\cdot ),\quad t\rightarrow \infty \end{aligned}$$
(85)

in the \(J_1\)-topology on D.

It is well-known (see, for instance, Lemma 2.3 on p. 159 in [26]) that the composition mapping \((x, \varphi )\mapsto (x\circ \varphi )\) is continuous at continuous functions \(x{:}\,\mathbb {R}_+ \rightarrow \mathbb {R}\) and nondecreasing continuous functions \(\varphi {:}\,\mathbb {R}_+\rightarrow \mathbb {R}_+\). This observation taken together with (85) and (77) enables us to conclude that

$$\begin{aligned} \frac{\widehat{N}(t\cdot )-m(t\cdot )}{(c_0 t^q)^{1/2}}~\Rightarrow ~ W(\cdot ),\quad t\rightarrow \infty \end{aligned}$$
(86)

in the \(J_1\)-topology on D. Noting that, for all \(s_0>0\), \(\sup _{s\in [0,\,s_0]}|s-t^{-1}\log L-s|=t^{-1}|\log L|\rightarrow 0\) a.s. as \(t\rightarrow \infty \) and applying the aforementioned result on compositions to (86) we infer

$$\begin{aligned} \frac{N(t\cdot )-m(t\cdot -\log L)}{(c_0 t^q)^{1/2}}~\Rightarrow ~ W(\cdot ),\quad t\rightarrow \infty \end{aligned}$$
(87)

in the \(J_1\)-topology on D.

It remains to prove that in (87) we can replace \(m(t\cdot -\log L)\) by \(c_0(t\cdot )^q\). To this end, it is enough to show that, for all \(s_0>0\),

$$\begin{aligned} t^{-q/2}\sup _{s\in [0,\,s_0]}|m(ts-\log L)-c_0(ts)^q|\overset{\mathrm{P}}{\rightarrow }0,\quad t\rightarrow \infty . \end{aligned}$$

This can be done as follows. Use (75) to obtain

$$\begin{aligned} \sup _{s\in [0,\, s_0]}|m(ts)-c_0 (ts)^q|\le \max (\alpha _0+\alpha _1 (ts_0)^{q-r_1}, |\beta _0|+|\beta _1|(ts_0)^{q-r_2}) \end{aligned}$$

whence

$$\begin{aligned} \lim _{t\rightarrow \infty } t^{-q/2}\sup _{s\in [0,\, s_0]}|m(ts)-c_0 (ts)^q|=0, \end{aligned}$$

where the assumption \(r_1,r_2>q/2\) has to be recalled. The analysis of \(\sup _{s\in [0,\,s_0]}|m(ts-\log L)-m(ts)|\) is very similar to (but simpler than) the arguments given in the proof of part (a). Appealing to (80) we conclude that, as \(t\rightarrow \infty \),

$$\begin{aligned}&t^{-q/2}\sup _{s\in [0,\,s_0]}|m(ts-\log L)-m(ts)|{{\,\mathrm{\mathbb {1}}\,}}_{\{\log L\le 0\}}\\&\quad =t^{-q/2}\sup _{s\in [0,\,s_0]}(m(ts-\log L)-m(ts)){{\,\mathrm{\mathbb {1}}\,}}_{\{\log L\le 0\}}\overset{\mathrm{P}}{\rightarrow }0. \end{aligned}$$

Fix any \(\delta \in (0,\,1-(q\vee 1)/2)\). Further, we argue as for (82):

$$\begin{aligned}&\sup _{s\in [0,\,s_0]}(m(ts)-m(ts-\log L)){{\,\mathrm{\mathbb {1}}\,}}_{\{\log L>0\}}\\&\quad \le \sup _{s\in [0,\,s_0]}(m(ts)-m(ts-\log L)){{\,\mathrm{\mathbb {1}}\,}}_{\{0<\log L\le (ts)^\delta \}}\\&\qquad +\,\sup _{s\in [0,\,s_0]}\big ((m(ts)-m(ts-\log L)){{\,\mathrm{\mathbb {1}}\,}}_{\{\log L>(ts)^\delta \}}\big )\\&\quad \le \sup _{s\in [0,\,s_0]}(m(ts)-m(ts-(ts)^\delta ))\\&\qquad +\,\sup _{s\in [0,\,s_0]}\big (m(ts){{\,\mathrm{\mathbb {1}}\,}}_{\{\log L>(ts)^\delta \}}\big ). \end{aligned}$$

Using (83) yields \(\sup _{s\in [0,\,s_0]}(m(ts)-m(ts-(ts)^\delta ))=o(t^{q/2})\) as \(t\rightarrow \infty \). Finally,

$$\begin{aligned} \sup _{s\in [0,\,s_0]}(m(ts){{\,\mathrm{\mathbb {1}}\,}}_{\{\log L>(ts)^\delta \}})\le m((\log _+ L)^{1/\delta })\quad \text {a.s.} \end{aligned}$$

The right-hand side is an a.s. finite random variable which does not depend on \(t\), whence the left-hand side is \(o(t^{q/2})\) a.s., as \(t\rightarrow \infty \). This completes the proof of part (c).

\(\square \)