1 Introduction

Motivation and model Random graphs have generated a large amount of literature. This is even the case for one single model: the Erdős–Rényi graph \(G(n, p)\) (the graph on n vertices in which each pair of vertices is connected independently with probability \(p\in [0, 1]\)). Since its introduction by Erdős and Rényi [21] more than fifty years ago, and the discovery of a phase transition at which a “giant connected component” emerges, the pursuit of a deeper understanding of its structure has never stopped. Many landmark results by Bollobás [14], Łuczak [32], Janson, Knuth, Łuczak and Pittel [29] have shaped our grasp of this phase transition. From the point of view of precise asymptotics, one of the most important papers is certainly the contribution of Aldous [3], who introduced a stochastic process point of view and paved the way towards the study of scaling limits of critical random graphs. In that paper, he obtained the asymptotics for the sequence of sizes of the connected components of \(G(n, p)\) in the so-called critical window where the phase transition actually occurs. His work led to the construction by Addario-Berry, Goldschmidt and B. [2] of the scaling limits of these connected components, seen as metric spaces, which also confirmed the fractal (Brownian) nature of the limit.

Following [2], the question of identifying the scaling limits has been investigated for more general models of random graphs. Particular attention has been paid to so-called inhomogeneous random graphs, which exhibit heterogeneity in the node degrees and whose behaviour often differs markedly from that of the Erdős–Rényi graph (see Fig. 1 for an illustration of this difference). Besides being theoretical objects with intriguing properties, these graphs are also commonly believed to offer more realistic models of complex real-world networks (see, e.g. [33]).

Fig. 1

Left: a picture of a large connected component of \(G(n, p)\). Right: a picture of a large connected component of \({\varvec{\mathcal {G}}}_\mathtt {w}\). Observe the presence of “hubs” (nodes of high degree) in the latter

In the present work, we consider such an inhomogeneous random graph model that is defined as follows. Let \(\mathtt {w}= (w_1, w_2, \dots , w_n)\) be a sequence of n positive real numbers sorted in nonincreasing order. Interpreting \(w_i\) as the propensity of vertex i to form edges, we define a random graph \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) as follows: the set of its vertices is \(\{1, 2, \dots , n\}\), the events \(\big \{ \{ i,j\} \, \text {is an edge of }{\varvec{\mathcal {G}}}_{ \mathtt {w}} \big \}\), \(1 \le i < j \le n\), are independent and

$$\begin{aligned}&{\mathbf {P}}\big ( \{ i,j\} \, \text {is an edge of }{\varvec{\mathcal {G}}}_{\mathtt {w}}\big )=1-\exp \big ( - w_i w_j/\sigma _1(\mathtt {w})\big ), \quad \text { where } \quad \sigma _1(\mathtt {w})=w_1+ \cdots + w_n. \nonumber \\ \end{aligned}$$
(1)

The graph \({\varvec{\mathcal {G}}}_\mathtt {w}\) extends the classical Erdős–Rényi random graph by allowing edges to be drawn with non-uniform probabilities, while keeping the independence among edges.
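To fix ideas, here is a minimal computational sketch of the model: a Python routine that samples the edge set of \({\varvec{\mathcal {G}}}_\mathtt {w}\) directly from (1). The function name and the toy weight sequence are ours, chosen purely for illustration.

```python
import math
import random

def sample_multiplicative_graph(w, rng=random):
    """Sample the edge set of the w-multiplicative graph defined by (1).

    w : list of positive weights (w_1, ..., w_n), sorted in nonincreasing order.
    Returns the list of edges {i, j} with 1 <= i < j <= n.
    """
    n = len(w)
    sigma1 = sum(w)                      # sigma_1(w) = w_1 + ... + w_n
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # P({i, j} is an edge) = 1 - exp(- w_i w_j / sigma_1(w)), independently
            if rng.random() < 1.0 - math.exp(-w[i] * w[j] / sigma1):
                edges.append((i + 1, j + 1))
    return edges

# toy example: two "hubs" followed by many small weights
print(len(sample_multiplicative_graph([10.0, 8.0] + [1.0] * 50)), "edges")
```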

The graph \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) has appeared under different names in the literature, for instance, the Poisson random graph in [10, 34], the Norros–Reittu graph in [10] or the rank-1 model in [9, 11, 15, 38, 39]. Here, we will refer to it as the multiplicative graph to emphasise its close connection with the multiplicative coalescent, as pointed out by Aldous in [3]. This connection is the starting point of the work [4] of Aldous & Limic, who identify the entrance boundary of the multiplicative coalescent by looking at the asymptotic distributions of the sizes of the connected components of \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\). The asymptotic regime and the limit processes found in Aldous & Limic [4] lie at the heart of this paper. Namely, we extend this result to the geometry of the connected components of \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) by proving the weak convergence of these connected components, as was done by Addario-Berry, Goldschmidt and B. [2] for critical Erdős–Rényi graphs. Our approach relies on the results of a companion paper [17], where we provide a specific coding of \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) and an embedding of \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) into a Galton–Watson forest, and where we construct the continuous multiplicative graphs that are proven here to be the scaling limits of the discrete models.

More precisely, we equip \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) with the graph distance \(d_{\text {gr}}\) and we introduce the weight measure \({\mathbf {m}}_\mathtt {w}= \sum _{1\le i\le n} w_i \delta _{i}\) on \({\varvec{\mathcal {G}}}_\mathtt {w}\). The goal of our article can be roughly rephrased as follows: we construct a class of (pointed and measured) compact random metric spaces \(({\mathbf {G}}, d, {\mathbf {m}})\) such that the graphs \(({\varvec{\mathcal {G}}}_{\mathtt {w}_n}, \varepsilon _n d_{\text {gr}}, \varepsilon ^\prime _n {\mathbf {m}}_{\mathtt {w}_n})\) weakly converge to \( ({\mathbf {G}}, d, {\mathbf {m}})\) along suitable subsequences \((\mathtt {w}_n, \varepsilon _n,\varepsilon _n^\prime )\). We also prove a similar result where \( {\mathbf {m}}_{\mathtt {w}_n}\) is replaced by the counting measure, the limit \({\mathbf {G}}\) being the same. Of course, here the scaling parameters \(\varepsilon _n\) and \( \varepsilon ^\prime _n\) go to 0, so that \({\mathbf {G}}\) is not discrete. The limits we consider hold in the sense of weak convergence with respect to the Gromov–Hausdorff–Prokhorov topology on the space of (isometry classes of) compact metric spaces equipped with finite measures. To achieve the construction of the possible limit graphs and to prove the convergence of rescaled multiplicative graphs, we rely on two main new ideas: (1) we encode multiplicative graphs by processes derived from a LIFO-queue; (2) we embed multiplicative graphs into Galton–Watson trees whose scaling limits are well understood. Before discussing further the connections with previous works, and in order to explain the advantages of our approach, let us give a brief but precise overview of our results and of the two above-mentioned ideas.

Overview of the results  Our approach relies first on a specific coding of \(\mathtt {w}\)-multiplicative graphs \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) via a LIFO-queue and a related stochastic process; the queue actually yields an exploration of \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) and a spanning tree that encompasses almost all the metric structure of the graph. The LIFO-queue is defined as follows:

  • A single server is visited by n clients labelled by  \(1, \ldots , n\);

  • Client j arrives at time \(E_j\) and she/he requests an amount of service time \(w_j\);

  • The \(E_j\) are independent exponentially distributed random variables such that \({\mathbf {E}}[E_j] = \sigma _1 (\mathtt {w})/ w_j\);

  • A LIFO (last in first out) policy applies: whenever a new client arrives, the server interrupts the service of the current client (if any) and serves the newcomer; when the latter leaves the queue, the server resumes the previous service.

As mentioned above, the LIFO-queue yields a tree \({\varvec{\mathcal {T}}}_{\mathtt {w}}\) whose vertices are the clients: namely, the server is the root (Client 0) and Client j is a child of Client i in \({\varvec{\mathcal {T}}}_{ \mathtt {w}}\) if and only if Client j interrupts the service of Client i (or arrives when the server is idle if \(i = 0\)). We introduce

$$\begin{aligned} Y^{\mathtt {w}}_t= - t + \sum _{1\le i\le n} w_i \mathbf{1}_{\{ E_i \le t\}}, J^\mathtt {w}_t = \inf _{s\in [0, t]} Y^\mathtt {w}_s \quad \text {and} \quad {\mathcal {H}}^\mathtt {w}_t = \# \Big \{ s\in [0, t] : \inf _{r\in [s, t]} Y^\mathtt {w}_r > Y^\mathtt {w}_{s-} \Big \}.\nonumber \\ \end{aligned}$$
(2)

The quantity \(Y^\mathtt {w}_t - J^\mathtt {w}_t\) is the load of the server, i.e. the amount of service due at time t. We sometimes call \(Y^\mathtt {w}_t\) the algebraic load of the server. Note that the LIFO-queue is encoded by \(Y^\mathtt {w}\). Then, observe that \({\mathcal {H}}^\mathtt {w}_t\) is the number of clients waiting in the queue at time t. We easily see that \({\mathcal {H}}^\mathtt {w}\) is the contour (or the depth-first exploration) of \({\varvec{\mathcal {T}}}_{ \mathtt {w}}\); this entails that the graph-metric of \({\varvec{\mathcal {T}}}_{ \mathtt {w}}\) is entirely encoded by \({\mathcal {H}}^\mathtt {w}\): namely, the distance between the vertices/clients served at times s and t in \({\varvec{\mathcal {T}}}_{\mathtt {w}}\) is \({\mathcal {H}}^\mathtt {w}_t+{\mathcal {H}}^\mathtt {w}_s - 2 \min _{r \in [s \wedge t , s\vee t] } {\mathcal {H}}^\mathtt {w}_r \).
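For concreteness, the following Python sketch simulates the LIFO queue described above and extracts, at each arrival, the parent relation of \({\varvec{\mathcal {T}}}_{\mathtt {w}}\) together with the values of \(Y^\mathtt {w}\) and \({\mathcal {H}}^\mathtt {w}\). It exploits the observation from (2) that the clients waiting at time t are exactly those arrival times s with \(\inf _{r\in [s,t]} Y^\mathtt {w}_r > Y^\mathtt {w}_{s-}\), so they can be maintained as a stack whose entries are popped as soon as the load drops back to their arrival level. The function and variable names are ours; this is an illustrative sketch, not part of the proofs.

```python
import random

def lifo_exploration(w, rng=random):
    """Simulate the LIFO queue of the overview: return the parent relation of the
    tree T_w (0 stands for the server/root) and the values of (Y^w, H^w) observed
    right after each arrival.

    w : list of positive weights (w_1, ..., w_n).
    """
    n = len(w)
    sigma1 = sum(w)
    # client j arrives at an exponential time E_j with mean sigma_1(w) / w_j
    arrivals = sorted((rng.expovariate(w[j] / sigma1), j) for j in range(n))

    parent = {}        # parent[j] = i: client j interrupts the service of client i
    stack = []         # (arrival level Y_{E_j -}, client): clients still in the queue
    Y, H = {}, {}      # Y^w and H^w evaluated at the arrival times
    load, last_time = 0.0, 0.0
    for t, j in arrivals:
        load -= t - last_time            # Y^w decreases at unit rate between arrivals
        last_time = t
        # a stacked client has left the queue once the load has dropped back to its level
        while stack and stack[-1][0] >= load:
            stack.pop()
        parent[j + 1] = stack[-1][1] if stack else 0
        stack.append((load, j + 1))      # push the newcomer with level Y^w_{E_j -}
        load += w[j]                     # jump of Y^w at time E_j
        Y[j + 1], H[j + 1] = load, len(stack)
    return parent, Y, H

parent, Y, H = lifo_exploration([3.0, 2.0] + [1.0] * 20)
```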

To go from the tree \({\varvec{\mathcal {T}}}_{ \mathtt {w}}\) to the graph, we need to include some surplus edges, which are sampled from a Poisson point measure. More precisely, conditionally on \(Y^\mathtt {w}\), let

$$\begin{aligned} {\mathcal {P}}_\mathtt {w}= \sum _{\; 1\le p\le {\mathbf {p}}_\mathtt {w}} \delta _{(t_p, y_p) } \; \text {be a Poisson pt. meas. on }[0, \infty )^2\text { with intensity }\frac{{_1}}{{^{\sigma _1 (\mathtt {w})}}} \mathbf{1}_{\{ 0< y <Y^\mathtt {w}_t -J^\mathtt {w}_t \}} \, dt\, dy.\nonumber \\ \end{aligned}$$
(3)

Note that \({\mathbf {p}}_\mathtt {w}< \infty \) a.s., since \(Y^\mathtt {w}- J^\mathtt {w}\) is constant and zero after a random time. We set

$$\begin{aligned} {\varvec{\Pi }}_\mathtt {w}= \big ( (s_p, t_p)\big )_{1\le p \le {\mathbf {p}}_\mathtt {w}} \quad \text {where} \quad s_p = \inf \big \{ s \in [0, t_p] : \inf _{u\in [s, t_p]} Y^\mathtt {w}_u - J^\mathtt {w}_{u} > y_p \big \},\quad 1 \le p \le {\mathbf {p}}_\mathtt {w}\; .\nonumber \\ \end{aligned}$$
(4)

Next, we define the set of additional edges \({\mathcal {S}}_\mathtt {w}\) as the set of the edges connecting the clients served at times \(s_p\) and \(t_p\), for all \(1 \le p \le {\mathbf {p}}_\mathtt {w}\), and we define the graph \({\varvec{\mathcal {G}}}_{\mathtt {w}}\) by

$$\begin{aligned} {\varvec{\mathcal {G}}}_{\mathtt {w}} := ({\varvec{\mathcal {T}}}_{ \mathtt {w}} \backslash \{ 0\}) \cup {\mathcal {S}}_\mathtt {w}\; .\end{aligned}$$

Namely, \({\varvec{\mathcal {G}}}_{\mathtt {w}}\) is the graph obtained by removing the root 0 from \({\varvec{\mathcal {T}}}_{\mathtt {w}}\) and adding the edges in \({\mathcal {S}}_{\mathtt {w}}\). The following is proved in the companion paper [17]:

Theorem 1.1

(Theorem 2.1 in [17]) \({\varvec{\mathcal {G}}}_{\mathtt {w}}\) is distributed as a \(\mathtt {w}\)-multiplicative random graph as specified in (1).

From this representation of the discrete graphs, one expects that if \(Y^\mathtt {w}\) converges, then the graph should also converge, at least in a weak sense. However, since \(Y^\mathtt {w}\) is not Markovian, it is difficult to obtain a limit for the local-time functional \({\mathcal {H}}^\mathtt {w}\), which is the function that encodes the metric. To circumvent this technical difficulty, we embed the non-Markovian LIFO-queue governed by \(Y^\mathtt {w}\) into a Markovian one that is defined as follows:

  • A single server successively receives an infinite number of clients;

  • A LIFO policy applies;

  • Clients arrive at unit rate;

  • Each client has a type that is an integer ranging in \(\{ 1, \ldots , n\}\); the amount of service required by a client of type j is \(w_j\); types are i.i.d. with law \(\nu _\mathtt {w}= \frac{1}{\sigma _1 (\mathtt {w})}\sum _{1 \le j\le n} w_j \delta _{j} \).

Namely, let \(\tau _k\) be the arrival-time of the k-th client and let \({\mathtt {J}}_k\) be the type of the k-th client; then the Markovian LIFO queueing system is entirely characterised by \(\sum _{k\ge 1} \delta _{(\tau _k , {\mathtt {J}}_k)}\) that is a Poisson point measure on \([0, \infty ) \times \{ 1, \ldots , n\}\) with intensity \(\ell \otimes \nu _\mathtt {w}\), where \(\ell \) stands for the Lebesgue measure on \([0, \infty )\). To simplify the explanation of the main ideas, we concentrate in this overview only on the (sub)critical cases where the Markovian queue is recurrent, which amounts to assuming that

$$\begin{aligned}\sigma _2 (\mathtt {w}) \le \sigma _1 (\mathtt {w})\; .\end{aligned}$$

Here, for all \(r \in (0, \infty )\), we use the notation \(\sigma _{r} (\mathtt {w}) = \sum _{1\le j\le n} w_j^r\).

The Markovian queue yields a tree \({\mathbf {T}}_{ \mathtt {w}}\) that is defined as follows: the server is the root of \({\mathbf {T}}_{ \mathtt {w}}\) and the k-th client to enter the queue is a child of the l-th one if the k-th client enters when the l-th client is being served. One easily checks that \({\mathbf {T}}_{ \mathtt {w}} \) is a sequence of i.i.d. Galton–Watson trees glued at their root and that their common offspring distribution is

$$\begin{aligned} \mu _\mathtt {w}(k) = \sum _{1\le j\le n} \frac{w_j^{k+1}}{\sigma _1 (\mathtt {w}) k!}e^{-w_j}, \quad k \in {\mathbb {N}}. \end{aligned}$$
(5)

Observe that \(\sum _{k\in {\mathbb {N}}} k\mu _\mathtt {w}(k) = \sigma _2 (\mathtt {w})/ \sigma _1 (\mathtt {w}) \le 1\), which implies that the GW-trees are finite a.s. The tree \({\mathbf {T}}_{ \mathtt {w}}\) is then encoded by its contour process \((H^\mathtt {w}_t)_{t\in [0, \infty )}\): namely, \(H^\mathtt {w}_t\) stands for the number of clients waiting in the Markovian queue at time t and it is given by

$$\begin{aligned} H^\mathtt {w}_t = \# \Big \{ s\in [0, t] : \inf _{r\in [s, t]} X^\mathtt {w}_r > X^\mathtt {w}_{s-} \Big \} \quad \text {where} \quad X^\mathtt {w}_{t} = -t + \sum _{k\ge 1} w_{{\mathtt {J}}_k}\mathbf{1}_{[0, t]} (\tau _k), \; t \in [0, \infty ).\nonumber \\ \end{aligned}$$
(6)

The quantity \(X^{\mathtt {w}}_{t}\) is called the algebraic load of the Markovian server at time t in the queueing theory literature (algebraic because it can take negative values). We will see in Sect. 2 how these definitions extend to the supercritical cases. Note that \(X^\mathtt {w}\) is a spectrally positive Lévy process with initial value 0; it is characterised by its Laplace exponent defined by \({\mathbf {E}}[ e^{-\lambda X^\mathtt {w}_t}] = e^{t\psi _\mathtt {w}(\lambda )}\), for \(t, \lambda \in [0, \infty )\), which is explicitly given by

$$\begin{aligned} \psi _\mathtt {w}(\lambda ) = \alpha _\mathtt {w}\lambda + \sum _{1\le j\le n} \frac{_{w_j} }{^{\sigma _1 (\mathtt {w})}} \big ( e^{-\lambda w_j} - 1 + \lambda w_j \big ) \quad \text {and} \quad \alpha _\mathtt {w}:= 1 - \frac{_{\sigma _2 (\mathtt {w})}}{^{\sigma _1 (\mathtt {w})}} \; . \end{aligned}$$
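As a quick numerical sanity check of these formulas (a Python sketch with an arbitrary toy weight sequence; not part of the arguments of the paper), one can verify on a finite \(\mathtt {w}\) that the offspring distribution (5) has mean \(\sigma _2(\mathtt {w})/\sigma _1(\mathtt {w})\) and evaluate \(\psi _\mathtt {w}\):

```python
import math

def mu_w(k, w):
    """Offspring distribution (5): mu_w(k) = sum_j w_j^{k+1} e^{-w_j} / (sigma_1(w) k!)."""
    sigma1 = sum(w)
    return sum(wj ** (k + 1) * math.exp(-wj) for wj in w) / (sigma1 * math.factorial(k))

def psi_w(lam, w):
    """Laplace exponent: psi_w(lam) = alpha_w lam + sum_j (w_j/sigma_1)(e^{-lam w_j} - 1 + lam w_j)."""
    sigma1, sigma2 = sum(w), sum(wj ** 2 for wj in w)
    alpha_w = 1.0 - sigma2 / sigma1
    return alpha_w * lam + sum((wj / sigma1) * (math.exp(-lam * wj) - 1.0 + lam * wj) for wj in w)

w = [2.0, 1.5] + [0.5] * 40                        # toy (sub)critical weight sequence
mean = sum(k * mu_w(k, w) for k in range(60))      # truncated sum; remaining terms are negligible here
print(mean, sum(wj ** 2 for wj in w) / sum(w))     # both values should agree: sigma_2 / sigma_1
print(psi_w(1.0, w))
```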

From this tractable model, we derive the LIFO-queue and the tree \({\varvec{\mathcal {T}}}_{ \mathtt {w}}\) governed by \(Y^\mathtt {w}\) by a time-change that “skips” some time intervals and that is defined as follows. We colour in blue or red the clients of the Markovian queue in the following recursive way:

(i): if the type \({\mathtt {J}}_k\) of the k-th client already appeared among the types of the blue clients who previously entered the queue, then the k-th client is red;

(ii): otherwise the k-th client inherits her/his colour from the colour of the client who is currently served when she/he arrives (and this colour is blue if there is no client served when she/he arrives: namely, we consider that the server is blue).

Note that a client who is the first arrival of her/his type is not necessarily coloured in blue. We easily check that exactly n clients are coloured in blue and that their types are necessarily distinct. Moreover, while a blue client is served, note that the other clients waiting in line (if any) are blue too. Actually, the sub-queue of blue clients corresponds to the previous LIFO queue governed by \(Y^\mathtt {w}\). More precisely, we set

$$\begin{aligned}&\mathtt {Blue} = \big \{t \in [0, \infty ) : \text {a blue client is served at }t \big \} \quad \text {and} \quad \theta ^{\mathtt {b}, \mathtt {w}}_t = \inf \Big \{ s \in [0, \infty ) : \int _0^s \mathbf{1}_{\mathtt {Blue}} (u) du > t \Big \}. \end{aligned}$$

We refer to (101) in Sect. 3.3 for a precise definition of \(\theta ^{\mathtt {b}, \mathtt {w}}\). Then,

$$\begin{aligned} (Y^\mathtt {w}_t, {\mathcal {H}}^\mathtt {w}_t)_{t\in [0, \infty )} = \big ( X^\mathtt {w}_{\theta ^{\mathtt {b}, \mathtt {w}}_t} , H^\mathtt {w}_{\theta ^{\mathtt {b}, \mathtt {w}}_t} \big )_{t\in [0 ,\infty )} \; . \end{aligned}$$
(7)

We refer to Proposition 3.2 and Lemma 3.4 in Sect. 3.3 for a more precise statement of this equality. This explains how to encode \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) in terms of the two tractable processes \(X^\mathtt {w}\) and \(H^\mathtt {w}\) derived from the Markovian queue.
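The colouring rules (i)–(ii) are easy to implement. The following Python sketch (the names and the finite time horizon are of our own choosing) runs the Markovian queue up to a fixed time, maintains the LIFO stack of clients still in the system, and colours each newcomer according to (i)–(ii); running it, one observes that the blue clients have pairwise distinct types and that, for a long enough horizon, exactly n of them appear, in line with the discussion above.

```python
import random

def coloured_markov_queue(w, horizon, rng=random):
    """Run the Markovian LIFO queue up to time `horizon` and colour its clients
    blue/red following rules (i)-(ii) above.

    Clients arrive at unit rate; each one independently receives a type j with
    probability w_j / sigma_1(w) (the law nu_w) and asks for a service time w_j.
    Returns the list of (arrival time, type, colour) in order of arrival.
    """
    types = list(range(1, len(w) + 1))
    clients = []
    stack = []            # (level X_{tau_k -}, colour): clients still in the system, LIFO order
    blue_types = set()    # types of the blue clients that have entered so far
    x, last_t, t = 0.0, 0.0, 0.0
    while True:
        t += rng.expovariate(1.0)                    # arrivals form a unit-rate Poisson process
        if t > horizon:
            break
        j = rng.choices(types, weights=w)[0]         # type of the newcomer, sampled from nu_w
        x -= t - last_t                              # X^w drifts at rate -1 between jumps
        last_t = t
        while stack and stack[-1][0] >= x:           # clients whose service is over have left
            stack.pop()
        if j in blue_types:                          # rule (i)
            colour = "red"
        else:                                        # rule (ii): inherit the served client's colour
            colour = stack[-1][1] if stack else "blue"
            if colour == "blue":
                blue_types.add(j)
        stack.append((x, colour))
        x += w[j - 1]                                # jump of X^w at the arrival
        clients.append((t, j, colour))
    return clients

clients = coloured_markov_queue([3.0, 2.0, 1.0, 1.0], horizon=200.0)
```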

The above embedding of LIFO queues is the starting point of our analysis. Let us also point out that it naturally translates to an “embedding” of the graph \({\varvec{\mathcal {G}}}_{\mathtt {w}_{n}}\) into a Galton–Watson forest, which bears a similar flavour to the construction in [34]. However, we have been able to extend the relationship in (7) to a more general setting. In particular, the Markovian queues that appear above and their coding processes \((X^\mathtt {w}, H^\mathtt {w})\) have analogues in the continuous time and space setting. In our context, the parameters governing such processes are those identified by Aldous & Limic [4] for the eternal multiplicative coalescent. Namely,

$$\begin{aligned} \alpha \in {\mathbb {R}}, \; \beta \in [0, \infty ), \; \kappa \in (0, \infty ) \quad \text {and} \quad {\mathbf {c}} = (c_j)_{j\ge 1} \; \text {decreasing and such that} \; \sum _{j\ge 1} c_j^3<\infty \, . \end{aligned}$$
(8)

The load of service of the continuous analogue of the Markovian queue is a spectrally positive Lévy process \((X_t)_{t\in [0, \infty )}\) starting at \(X_0 = 0\) whose Laplace exponent \(\psi \) is given by

$$\begin{aligned} \tfrac{1}{t}\log \big ( {\mathbf {E}}\big [ e^{-\lambda X_t}\big ]\big ) := \psi (\lambda ) = \alpha \lambda +\frac{_{_1}}{^{^2}} \beta \lambda ^2 + \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j} - 1 + \lambda c_j \big ), \ \text {for all }t, \lambda \in [0, \infty ). \end{aligned}$$
(9)

To simplify, we restrict our explanations to the cases where X does not drift to \(\infty \), which is equivalent to assuming that \(\alpha \in [0 , \infty )\). The tree corresponding to the clients of the continuous analogue of the Markovian queue that is driven by X is actually the Lévy tree yielded by X, which is defined through its contour process as introduced by Le Gall & Le Jan [31]. To that end, we assume that \(\psi \) (as defined in (9)) satisfies

$$\begin{aligned} \int ^\infty \frac{d\lambda }{\psi (\lambda )}<\infty , \end{aligned}$$
(10)

which implies that either \(\sum _j c_j^2=\infty \) or \(\beta \ne 0\); therefore X has infinite variation sample paths. The assumption (10) is sometimes referred to as Grey’s condition in the literature. As a side note, let us also remark that we allow the sequence \({\mathbf {c}}\) to be null; in that case, we must have \(\beta >0\) as imposed by (10). Under Assumption (10), Le Gall & Le Jan [31] (see also Le Gall & D. [19]) prove that there exists a continuous process \((H_t)_{t\in [0, \infty )}\) such that the following limit holds true for all \(t \in [0, \infty )\) in probability:

$$\begin{aligned} H_t = \lim _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } \int _0^{t} \mathbf{1}_{\{ X_s - \inf _{r\in [s, t]} X_r \le \varepsilon \}} \, ds \; . \end{aligned}$$
(11)

We explain further how to make sense of this definition in the supercritical cases. The process H is called the height process associated with X and the processes (XH) are the continuous analogues of \((X^\mathtt {w}, H^\mathtt {w})\).

We explain in Sect. 4.2 how to colour the Markovian queue driven by X: namely, we explain how to define a right-continuous increasing time-change \((\theta ^{\mathtt {b}}_t)_{t\in [0, \infty )}\) that is the analogue of the discrete one \(\theta ^{ \mathtt {b}, \mathtt {w}}\). We refer to (145) in Sect. 4.2 for a formal definition of \(\theta ^{\mathtt {b}}\). Then the càdlàg process Y is defined by

$$\begin{aligned} Y_t = X_{\theta ^{\mathtt {b}}_t}, \quad { t \in [0, \infty )}, \end{aligned}$$
(12)

and represents the load driving the analogue of the LIFO-queue (without repetitions). As we will see in (144), Sect. 4.2, for each \(t\in [0,\infty )\), \(Y_t\) can be written as

$$\begin{aligned} Y_t= -\alpha t - \frac{_1}{^2}\kappa \beta t^2 + \sqrt{\beta } B_t + \sum _{j\ge 1} c_j (\mathbf{1}_{\{ E_j \le t \}} - c_j \kappa t), \end{aligned}$$
(13)

where \((B_t)_{t\in [0, \infty )}\) is a standard linear Brownian motion starting at 0 and where the \(E_j\) are independent exponentially distributed random variables that are independent from B and such that \({\mathbf {E}}[E_j] = (\kappa c_j)^{-1}\). The sum in (13), as it is, is informal: it has to be understood in the sense of \(L^2\) semimartingales (see Sect. 4.2 for a precise explanation). The latter expression of Y can be found in Aldous & Limic [4] who proved that the lengths of the excursions of Y above its running infimum (ranked in decreasing order) are distributed as the multiplicative coalescent (Theorem 2 in [4]). We refer to Theorem 4.2 in Sect. 4.2 for a precise statement of (12).
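To get a feel for the process in (13), here is a rough Python discretisation on a time grid, truncating the sum to finitely many terms; the parameter values and the truncation level are arbitrary, and this naive scheme deliberately ignores the \(L^2\) subtleties of the full series mentioned above.

```python
import numpy as np

def simulate_Y(alpha, beta, kappa, c, T=1.0, steps=2000, rng=np.random.default_rng()):
    """Naive grid discretisation of (13), with the sum truncated to the given c_j's."""
    t = np.linspace(0.0, T, steps)
    Y = -alpha * t - 0.5 * kappa * beta * t ** 2
    # sqrt(beta) B_t, approximated by cumulated Gaussian increments
    Y += np.sqrt(beta) * np.cumsum(rng.normal(0.0, np.sqrt(T / steps), steps))
    for cj in c:
        Ej = rng.exponential(1.0 / (kappa * cj))              # E_j with mean 1 / (kappa c_j)
        Y += cj * ((t >= Ej).astype(float) - cj * kappa * t)  # c_j (1_{E_j <= t} - c_j kappa t)
    return t, Y

t, Y = simulate_Y(alpha=0.0, beta=1.0, kappa=1.0, c=[2.0 ** (-k) for k in range(1, 30)])
```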

As proved in Theorem 2.6 in [17] (recalled in Theorem 4.7, Sect. 4.2), there exists a continuous process \(({\mathcal {H}}_t)_{t\in [0, \infty )}\) that is an adapted functional of Y such that for each \(t \in [0,\infty )\),

$$\begin{aligned} {\mathcal {H}}_t = H_{\theta ^{\mathtt {b}}_t}\;. \end{aligned}$$
(14)

Here, \({\mathcal {H}}\) is a.s. a continuous process that is called the height process associated with Y and we claim that \((Y,{\mathcal {H}})\) is the continuous analogue of \((Y^\mathtt {w}, {\mathcal {H}}^\mathtt {w})\), as justified by the limit theorems stated further.

As proved in [17] (and recalled in Lemma 4.8, Sect. 4.2), the excursion intervals of \({\mathcal {H}}\) above 0 and the excursion intervals of Y above its running infimum are the same. Moreover, Proposition 14 in Aldous & Limic [4] (recalled in Proposition 4.5, Sect. 4.2) asserts that these excursions can be indexed in the decreasing order of their lengths. Namely,

$$\begin{aligned} \big \{ t \in [0, \infty ) : {\mathcal {H}}_t>0 \big \}= \Big \{ t \in [0, \infty ) : Y_t > \inf _{[0, t]} Y \Big \}= \bigcup _{k\ge 1} (l_k , r_k) \,, \end{aligned}$$
(15)

where the sequence \(\zeta _k = r_k - l_k\), \(k\ge 1\), decreases. The continuous analogue of \({\varvec{\mathcal {G}}}_{\mathtt {w}}\) is derived from \((Y, {\mathcal {H}})\) as follows: first, for all \(s, t \in [0, \infty )\), we define the usual tree pseudometric associated with \({\mathcal {H}}\): \( d_{\mathcal {H}}(s, t)= {\mathcal {H}}_s + {\mathcal {H}}_t -2\min _{u\in [s\wedge t, s\vee t]} {\mathcal {H}}_u \). Then, for each \(t \in [0, \infty )\), we set

$$\begin{aligned} J_t = \inf _{s\in [0, t]} Y_s \;, \end{aligned}$$
(16)

and given Y, let

$$\begin{aligned} {\mathcal {P}}= \sum _{\; p\ge 1} \delta _{(t_p, y_p) } \; \text {be a Poisson pt. meas. on }[0, \infty )^2\text { with intensity } \kappa \mathbf{1}_{\{ 0< y <Y_t -J_t \}} \, dt\, dy. \end{aligned}$$
(17)

Then, we set

$$\begin{aligned} {\varvec{\Pi }}= \big ( (s_p, t_p) \big )_{p\ge 1} \quad \text {where} \quad s_p = \inf \big \{ s \in [0, t_p] : \inf _{u\in [s, t_p]} Y_u - J_u > y_p \big \}, \; \, \text {for } p \ge 1. \end{aligned}$$
(18)

Here \({\varvec{\Pi }}\) plays the role of \({\varvec{\Pi }}_\mathtt {w}\). Fix \(k \ge 1\). One can prove that if \(t_p \in [l_k, r_k]\), then \(s_p \in [l_k, r_k]\). We define \({\mathbf {G}}_k\) as the set \([l_k, r_k]\) where we have identified points \(s, t\in [l_k, r_k]\) such that either \(d_{\mathcal {H}}(s,t) = 0\) or \((s,t) \in \{ (s_p, t_p) ; p \ge 1 : t_p \in [l_k, r_k] \}\). It actually yields a metric, denoted by \(\mathrm {d}_{k}\), on \({\mathbf {G}}_k\); note that \(l_k\) and \(r_k\) are identified and we denote by \(\varrho _{k}\) the corresponding point in \({\mathbf {G}}_k\); we denote by \({\mathbf {m}}_{k}\) the measure induced by the Lebesgue measure on \([l_k, r_k]\). The continuous analogue of \({\varvec{\mathcal {G}}}_{ \mathtt {w}}\) is then the sequence of pointed measured compact metric spaces

$$\begin{aligned} {\mathbf {G}} = \big ( ({\mathbf {G}}_k, \mathrm {d}_{k},\varrho _{k} ,{\mathbf {m}}_{k})\big )_{k\ge 1}\; , \end{aligned}$$
(19)

that is called the \((\alpha , \beta , \kappa , {\mathbf {c}})\)-continuous multiplicative graph. We refer to Sect. 2.3 (and more specifically (58)) for a more precise definition.
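As a toy illustration of this gluing procedure (not used anywhere in the proofs), one can discretise an excursion on a grid, identify a few pinching pairs, and approximate the resulting quotient metric by an all-pairs shortest-path computation. The Python sketch below does exactly that, with an invented excursion and an invented pinching pair.

```python
import numpy as np

def glued_metric(H, pinch_pairs):
    """Quotient metric obtained from the tree pseudometric of a discretised excursion H
    after identifying the pinching pairs (all-pairs shortest-path computation).

    H : 1-d array of nonnegative heights with H[0] = H[-1] = 0.
    pinch_pairs : list of index pairs (s, t) to be identified.
    """
    m = len(H)
    D = np.empty((m, m))
    for s in range(m):                     # tree pseudometric d_H(s, t) = H_s + H_t - 2 min_{[s,t]} H
        running_min = H[s]
        for t in range(s, m):
            running_min = min(running_min, H[t])
            D[s, t] = D[t, s] = H[s] + H[t] - 2.0 * running_min
    for s, t in pinch_pairs:               # pinching points are put at distance 0
        D[s, t] = D[t, s] = 0.0
    for k in range(m):                     # Floyd-Warshall: shortest chains through identifications
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D

H = np.sin(np.pi * np.linspace(0.0, 1.0, 120))   # invented excursion, for illustration only
D = glued_metric(H, [(20, 95)])                  # one pinching pair
```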

As already mentioned, the main goal of the paper is to prove that \({\mathbf {G}}\) is the scaling limit of sequences of rescaled discrete graphs \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\) for a suitable sequence of weights with finite support \(\mathtt {w}_n = (w^{_{(n)}}_{^j})_{j\ge 1}\) that are listed in the nonincreasing order: namely, \(w^{_{(n)}}_{^j} \ge w^{_{(n)}}_{^{j+1}}\) for all \(j\ge 1\), and \(w^{_{(n)}}_{^j} = 0\) for all sufficiently large j. Here, we first set

$$\begin{aligned} {\mathbf {j}}_n := \sup \big \{ j \ge 1 : w^{_{(n)}}_{^j} > 0 \big \} < \infty \,. \end{aligned}$$
(20)

We do not require that \({\mathbf {j}}_n\) is equal to n but we require \(\lim _{n\rightarrow \infty } {\mathbf {j}}_n = \infty \). Our main result (Theorem 2.4 in Sect. 2.2) asserts the following:

  • If the Markovian processes \((X^{\mathtt {w}_n}, H^{\mathtt {w}_n})\), properly rescaled in time and space, weakly converge to (XH), then \((Y^{\mathtt {w}_n}, {\mathcal {H}}^{\mathtt {w}_n})\) converges weakly to \((Y, {\mathcal {H}})\) with the same scaling.

More precisely, the graphs \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\), or their coding functions, are rescaled by two factors \(a_n\) and \(b_n\) tending to \(\infty \); \(a_n\) is a weight factor and \(b_n\) is an exploration-time factor. Namely, the rescaled processes to consider are \(\frac{_1}{^{a_n}}X^{_{\mathtt {w}_n}}_{^{b_n \cdot }}\) (or \(\frac{_1}{^{a_n}}Y^{_{\mathtt {w}_n}}_{^{b_n \cdot }}\)). We now discuss further natural constraints. First, it is natural to require a priori that \(b_n = O(a_n^2)\) by standard results on Lévy processes. Moreover, we assume that the largest weight "persists" in the limit, or more precisely \(w^{_{(n)}}_1 = O (a_n)\). In the limit, if two large weights persist, they cannot fuse and they tend not to be connected by an edge. Namely, if the two largest weights persist, then \(1 - \exp ( - w^{_{(n)}}_{^1}w^{_{(n)}}_{^2} / \sigma _1 (\mathtt {w}_n)) \rightarrow 0\) and since \(w^{_{(n)}}_{^1} \asymp w^{_{(n)}}_{^2} \asymp a_n\), this entails \(\lim _{n\rightarrow \infty } a_n^2/ \sigma _1 (\mathtt {w}_n) = 0\). Next, since \(b_n\) is an exploration-time factor, we require that \(b_n \asymp {\mathbf {E}}[C_n]\), where \(C_n\) stands for the number of clients who are served before the arrival of Client 1 (i.e. the client corresponding to the largest weight \(w^{_{(n)}}_{^1}\)) in the \(\mathtt {w}_n\)-LIFO queue encoding \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\). Let us denote by \(D_n\) the sum of the weights of the vertices explored before visiting Client 1. It is easy to see that \( {\mathbf {E}}[ C_n ] = \sum _{j\ge 2} w^{_{(n)}}_{^j}/ (w^{_{(n)}}_{^j}+w^{_{(n)}}_{^1}) \) and that \( {\mathbf {E}}[ D_n ] = \sum _{j\ge 2} (w^{_{(n)}}_{^j})^2/(w^{_{(n)}}_{^j}+w^{_{(n)}}_{^1}) \). So, when \(w_{^1}^{_{(n)}}\) persists, we get \( \sigma _1 (\mathtt {w}_n) \asymp a_n {\mathbf {E}}[C_n ]\) and \( \sigma _2 (\mathtt {w}_n) \asymp a_n {\mathbf {E}}[D_n ]\). Moreover, in the asymptotic regime that we consider, we require that the number of visited vertices be of the same order of magnitude as the sum of the corresponding weights: namely, \({\mathbf {E}}[C_n] \asymp {\mathbf {E}}[D_n]\), which corresponds to the criticality assumption \(\sigma _1(\mathtt {w}_n) \asymp \sigma _2 (\mathtt {w}_n)\), which in turn implies \(a_nb_n \asymp \sigma _1 (\mathtt {w}_n) \). These constraints amount to the following assumptions:

$$\begin{aligned} \lim _{n\rightarrow \infty } a_n = \lim _{n\rightarrow \infty } \frac{b_n}{a_n} = \infty , \quad \lim _{n\rightarrow \infty }\frac{b_n}{a^2_n} = : \beta _0 \in [0, \infty ), w^{_{(n)}}_1 = O (a_n) , \quad \lim _{n\rightarrow \infty }\frac{a_nb_n}{\sigma _1 (\mathtt {w}_n)}= \kappa .\nonumber \\ \end{aligned}$$
(21)
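The closed formulas for \({\mathbf {E}}[C_n]\) and \({\mathbf {E}}[D_n]\) used above follow from the fact that Client j arrives before Client 1 with probability \(w^{_{(n)}}_{^j}/(w^{_{(n)}}_{^j}+w^{_{(n)}}_{^1})\) (a competition between independent exponential clocks of rates \(w^{_{(n)}}_{^j}/\sigma _1(\mathtt {w}_n)\) and \(w^{_{(n)}}_{^1}/\sigma _1(\mathtt {w}_n)\)). A quick Monte Carlo sanity check, in Python with arbitrary toy weights of our own choosing:

```python
import random

def mc_check(w, trials=20_000, rng=random):
    """Monte Carlo estimates of E[C_n] and E[D_n] against the closed formulas.

    C_n = number of clients arriving (hence starting service) before Client 1,
    D_n = sum of their weights; E_j is exponential with mean sigma_1(w) / w_j.
    """
    sigma1 = sum(w)
    C = D = 0.0
    for _ in range(trials):
        E1 = rng.expovariate(w[0] / sigma1)
        for wj in w[1:]:
            if rng.expovariate(wj / sigma1) < E1:
                C += 1.0
                D += wj
    formula_C = sum(wj / (wj + w[0]) for wj in w[1:])
    formula_D = sum(wj ** 2 / (wj + w[0]) for wj in w[1:])
    return C / trials, formula_C, D / trials, formula_D

print(mc_check([5.0, 2.0, 1.0, 1.0, 0.5]))
```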

Note it is possible to have \(\beta _0 = 0\). Let us present here a more precise statement of Theorem 2.4: If \((a_n, b_n, \mathtt {w}_n)\) satisfies (21),

$$\begin{aligned} \textit{and if} \quad \big ( \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {w}_n}_{b_n \cdot }\, , \frac{_{_{a_n}}}{^{^{b_n}}} H^{\mathtt {w}_n}_{b_n \cdot } \big )\; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ( X, H \big ) \end{aligned}$$
(22)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}) \times {\mathbf {C}}([0, \infty ) , {\mathbb {R}})\) equipped with the product of the Skorokhod and the continuous topologies, then the joint convergence

$$\begin{aligned} \big ( \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {w}_n}_{b_n \cdot }\, , \frac{_{_{a_n}}}{^{^{b_n}}} H^{\mathtt {w}_n}_{b_n \cdot } \, , \big ( \frac{_{_1}}{^{^{b_n}}} \theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot } , \frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n \cdot } \big ), \frac{_{_{a_n}}}{^{^{b_n}}} {\mathcal {H}}^{\mathtt {w}_n}_{b_n \cdot } \big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ( X, H, (\theta ^\mathtt {b}, Y), {\mathcal {H}}\big ) \end{aligned}$$
(23)

holds weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}) \times {\mathbf {C}}([0, \infty ) , {\mathbb {R}}) \times {\mathbf {D}}([0, \infty ), {\mathbb {R}}^2) \times {\mathbf {C}}([0, \infty ) , {\mathbb {R}}) \) equipped with the product topology.

Necessary and sufficient conditions on the \((a_n, b_n, \mathtt {w}_n)\) for (22) to hold can be derived from previous results due to Le Gall & D. [19] (this is not immediate; see Proposition 2.2). Namely, suppose that \((a_n, b_n, \mathtt {w}_n)\) satisfy (21); then (22) holds if and only if the following conditions are satisfied

$$\begin{aligned} (A): \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {w}_n}_{b_n } \overset{\text {(weakly)}}{\underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow }} \! X_1 \quad \text {and} \quad (B): \quad \exists \,\delta \in \! (0, \infty ) , \quad \liminf _{n\rightarrow \infty } {\mathbf {P}}\big ( Z^{\mathtt {w}_n}_{\lfloor b_n \delta /a_n \rfloor } = 0 \big ) >0, \end{aligned}$$
(24)

where \((Z^{\mathtt {w}_n}_k)_{k\in {\mathbb {N}}}\) stands for a Galton–Watson branching process with offspring distribution \(\mu _{\mathtt {w}_n}\) given by (5) and with initial state \(Z^{\mathtt {w}_n}_0 = \lfloor a_n \rfloor \). Proposition 2.3 shows that for all \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\beta _0 \in [0, \beta ]\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}}\) such that \(\sum _{j\ge 1} c_j^3 < \infty \) and such that Grey’s condition (10) is satisfied, there indeed exists a sequence \((a_n, b_n, \mathtt {w}_n)_{n\in {\mathbb {N}}}\) satisfying (21) and (24), so that (23) holds. Proposition 2.3 also shows that in (24), (A) does not necessarily imply (B); moreover, it provides a more tractable condition that implies (B) in (24) and that is satisfied in all the examples considered previously.
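Since \(\mu _{\mathtt {w}_n}\) in (5) is a mixture of Poisson laws (pick a type j with probability \(w^{_{(n)}}_{^j}/\sigma _1(\mathtt {w}_n)\), then draw a Poisson\((w^{_{(n)}}_{^j})\) number of children), the probability appearing in (B) is easy to probe numerically. Here is a Python sketch with toy weights and illustrative scaling constants of our own choosing; it is only meant to make condition (B) concrete.

```python
import numpy as np

def sample_Z(w, generations, z0, rng=np.random.default_rng()):
    """One trajectory of the Galton-Watson process with offspring law (5):
    each individual receives a type j with probability w_j / sigma_1(w)
    and then has a Poisson(w_j) number of children."""
    w = np.asarray(w, dtype=float)
    nu = w / w.sum()                                  # the law nu_w on the types
    z = z0
    for _ in range(generations):
        if z == 0:
            return 0
        types = rng.choice(len(w), size=z, p=nu)      # types of the z individuals
        z = int(rng.poisson(w[types]).sum())          # total number of children
    return z

# toy subcritical weights and illustrative scaling constants (not taken from the paper)
w = [2.0, 1.5] + [0.5] * 40
a_n, b_n, delta = 10.0, 100.0, 0.5
m = int(b_n * delta / a_n)
est = np.mean([sample_Z(w, m, int(a_n)) == 0 for _ in range(500)])
print("estimated P(Z_m = 0):", est)
```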

By soft arguments (see Lemma 2.7), the convergence (23) of the coding functions implies that the rescaled sequence of graphs \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\) converges, as random metric spaces. As already mentioned, the convergence holds weakly on the space \({\mathbb {G}}\) of (pointed and measure preserving) isometry classes of pointed measured compact metric spaces endowed with the Gromov–Hausdorff–Prokhorov distance (whose definition is recalled in (53) in Sect. 2.3). Actually, the convergence holds jointly for the connected components of \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\): namely, equip \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\) with the weight-measure \({\mathbf {m}}^{\mathtt {w}_n} = \sum _{j\ge 1} w^{_{(n)}}_{^j} \delta _j\); let \({\mathbf {q}}_{\mathtt {w}_n}\) be the number of connected components of \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\); we index these connected components \(({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n})_{1\le k \le {\mathbf {q}}_{\mathtt {w}_n}}\) in the decreasing order of their \({\mathbf {m}}^{\mathtt {w}_n}\)-measure:

$$\begin{aligned} {\mathbf {m}}^{\mathtt {w}_n} ({\varvec{\mathcal {G}}}_{ 1}^{\mathtt {w}_n}) \ge \cdots \ge {\mathbf {m}}^{\mathtt {w}_n} ({\varvec{\mathcal {G}}}_{ {\mathbf {q}}_{\mathtt {w}_n} }^{\mathtt {w}_n}) . \end{aligned}$$
(25)

For convenience, we complete this finite sequence of connected components by point graphs with null measure to get an infinite sequence of \({\mathbb {G}}\)-valued r.v. \(\big ( ({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}, d_{k}^{\mathtt {w}_n}, \varrho _k^{\mathtt {w}_n}, {\mathbf {m}}_k^{\mathtt {w}_n})\big )_{k\ge 1}\), where \(d_{k}^{\mathtt {w}_n}\) stands for the graph-metric on \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{ k}\), \(\varrho _k^{\mathtt {w}_n}\) is the first vertex/client of \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{ k}\) who enters the queue and \({\mathbf {m}}_k^{\mathtt {w}_n}\) is the restriction of \({\mathbf {m}}_{\mathtt {w}_n}\) to \({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}\). Then, Theorem 2.8 asserts that if \((a_n, b_n, \mathtt {w}_n)\) satisfy (21) and (22), then

$$\begin{aligned} \big (\big ( {\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n} , \frac{_{_{a_n}}}{^{^{b_n}}}d_{k}^{\mathtt {w}_n} , \varrho _k^{\mathtt {w}_n}, \frac{_{_{1}}}{^{^{b_n}}}{\mathbf {m}}_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ( {\mathbf {G}}_{k} , \mathrm {d}_{k}, \varrho _{k} , {\mathbf {m}}_{k} \big ) \big )_{k\ge 1} \end{aligned}$$
(26)

holds weakly on \({\mathbb {G}}^{{\mathbb {N}}^*}\) equipped with the product topology. Moreover, Theorem 2.8 also asserts that in (26) we can replace the weight-measure \({\mathbf {m}}^{\mathtt {w}_n}\) by the counting measure \(\# = \sum _{1\le j \le {\mathbf {j}}_n} \delta _j\), where \({\mathbf {j}}_n := \sup \{ j \ge 1 : w^{_{(n)}}_{^j} > 0\}\), and that under the additional assumption \(\sqrt{{\mathbf {j}}_n}/ b_n \rightarrow 0\), the connected components can be listed in the decreasing order of their number of vertices:

$$\begin{aligned} \# ({\varvec{\mathcal {G}}}_{ 1}^{\mathtt {w}_n}) \ge \cdots \ge \# ({\varvec{\mathcal {G}}}_{ {\mathbf {q}}_{\mathtt {w}_n} }^{\mathtt {w}_n}). \end{aligned}$$
(27)

Discussion We now briefly discuss connections to other works. We refer to Sect. 2.4 for more detailed comments on related papers.

A unified and exhaustive treatment of the limit regimes: While important progress has been made on the Gromov–Hausdorff scaling limits of the multiplicative graphs, notably in Bhamidi, Sen & X. Wang, and Bhamidi, van der Hofstad & Sen [8, 9], previous works have distinguished two seemingly orthogonal cases depending on whether the inhomogeneity is mild enough to be washed away in the limit, as in Addario-Berry, B. & Goldschmidt, Bhamidi, B., Sen & X. Wang and Bhamidi, Sen & X. Wang [2, 7, 8], or strong enough to persist asymptotically, as in Bhamidi, van der Hofstad & Sen and Bhamidi, van der Hofstad & van Leeuwaarden [9, 11]: the so-called asymptotic (Brownian) homogeneous case and the power-law case. In these papers, the proof strategies for the two cases differ greatly. On the other hand, the remarkable work of Aldous and Limic [4] about the weights of large critical connected components deals with the inhomogeneity in a transparent way. We provide here such a unified approach for the geometry, which works not only for both cases but also for graphs that can be seen as a mixture of the two.

Furthermore, an easy correspondence (see (61) below) allows us to link our parameters \((\alpha , \beta ,\) \(\kappa , \mathbf{c})\) for the limit objects to the ones parametrising all the extremal eternal multiplicative coalescents, as identified by Aldous & Limic in [4]. We note that our limit theorems are valid in the Gromov–Hausdorff–Prokhorov topology, which controls the distances between all pairs of points, and not just in the Gromov–Prokhorov topology where only distances between typical points are controlled. (A general result has already been proved by Bhamidi, van der Hofstad & Sen [9] for the Gromov–Prokhorov topology in the special case when \(\beta = 0\).) In light of this, we believe our work contains an exhaustive treatment of all the possible limits related to those multiplicative coalescents. At the same time, we remove some technical conditions that had been imposed on the weight sequences in some of the previous works.

Avoiding the computation of the law of connected components: The connected components of the random graphs may be described as the result of the addition of “shortcut edges” to a tree; this picture is useful both for the discrete models and for the limit metric spaces. The work of Bhamidi, Sen & X. Wang and Bhamidi, van der Hofstad & Sen [8, 9] yields an explicit description of the law of the random tree to which one should add shortcuts in order to obtain connected components with the correct distribution. As in the case of classical random graphs treated in Addario-Berry, B. & Goldschmidt [2], this law involves a change of measure from one of the “classical” random trees, whose behaviour is in general difficult to control asymptotically. Our connected components are described as the metric induced on a subset of a Galton–Watson tree; the bias of the law of the underlying tree is somewhat transparently handled by the procedure that extracts the relevant subset.

More general models of random graphs. While we focus on the model of the multiplicative graphs, the theorems of Janson [28] on asymptotically equivalent models (see Sect. 2.4) and the expected universality of the limits confer on the results obtained here potential implications that go beyond the realm of this specific model: for instance, random graphs constructed by the celebrated configuration model, where the sequence of degrees has asymptotic properties similar to the weight sequence of the present paper, are believed to exhibit similar scaling limits; see Section 3.1 in [9] for a related discussion.

Upcoming work. The current version of the limit theorems considers the sequences of connected components in the product topology. The embedding of the graphs in a forest of Galton–Watson trees actually also yields a control on the tail of the sequence, which would allow us to strengthen the convergence to \(\ell ^p\)-like spaces as in [2] or [8]; this will be pursued elsewhere.

Organisation of the paper In Sect. 2, after introducing some necessary notation, we give the precise statements of the main results of the paper, compare them with previous results, and lay out a plan for the subsequent proof. Sections 3–8 constitute the main body of the proof. In the appendix, we collect results on Laplace transforms, Skorokhod’s topology, Lévy processes and branching processes that are used in the proofs. Although most of these results are standard, we either did not find an exact reference or we have adapted the existing versions to our needs here.

2 Main results

Notation Throughout the paper, \({\mathbb {N}}\) stands for the set of nonnegative integers and \({\mathbb {N}}^* = {\mathbb {N}}\backslash \{ 0\}\). A sequence of weights refers to an element of the set \({\ell }^{_{\, \downarrow }}_{\infty } = \big \{ (w_j)_{j\ge 1} \in [0, \infty )^{{\mathbb {N}}^*} : \, w_j \ge w_{j+1} \big \}\). For all \(r \in (0, \infty )\) and all \(\mathtt {w}= (w_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_\infty \), we set \(\sigma _r (\mathtt {w}) = \sum _{j\ge 1} w_j^r \in [0, \infty ]\). The following subsets of \({\ell }^{_{\, \downarrow }}_{\infty }\) will be of particular interest to us.

$$\begin{aligned} {\ell }^{_{\, \downarrow }}_{{r}} = \big \{ \mathtt {w} \in {\ell }^{_{\, \downarrow }}_\infty : \sigma _r (\mathtt {w}) < \infty \big \}, \quad \text {and} \quad {\ell }^{_{\, \downarrow }}_{{ f}}= \big \{ \mathtt {w} \in {\ell }^{_{\, \downarrow }}_\infty : \exists j_0 \ge 1 : w_{j_0} = 0 \big \} .\end{aligned}$$

We often abbreviate a process \((X_{t})_{t\ge 0}\) as X. Occasionally, we write X(t) for \(X_{t}\) if the notation for the time parameter becomes too heavy to stand as a subscript.

2.1 Convergence results for the Markovian queue

We fix a sequence \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_{{ f}}\), and two sequences \(a_n, b_n \in (0, \infty )\) that satisfy the a priori assumptions (21). As already mentioned, the convergence of the graphs \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\) is obtained thanks to the convergence of rescaled versions of \(Y^{\mathtt {w}_n} \) and \({\mathcal {H}}^{\mathtt {w}_n}\), and the convergence of these two processes is in turn obtained from the convergence of the Markovian processes into which they are embedded: namely, the asymptotic regimes of \((Y^{\mathtt {w}_n} ,{\mathcal {H}}^{\mathtt {w}_n})\) and of \((X^{\mathtt {w}_n}, H^{\mathtt {w}_n})\) should be the same. The purpose of this section is to state weak limit-theorems for \(X^{\mathtt {w}_n}\) and \(H^{\mathtt {w}_n}\). Many results of this section rely on standard limit-theorems on random walks, on results due to Grimvall [24] on branching processes and on results due to Le Gall & D. [19] on the height processes of Galton–Watson trees. However, the specific form of the jumps and of the offspring distribution of the trees actually requires a careful analysis, which is done in Sect. 7.

We recall the definition of \(X^{\mathtt {w}_n}\) in (6); recall that the Markovian queueing system induced by \(X^{\mathtt {w}_n}\) yields a tree that is an i.i.d. sequence of Galton–Watson trees with offspring distribution \(\mu _{\mathtt {w}_n}\) whose definition is given by (5). Denote by \((Z^{\mathtt {w}_n}_k)_{k\in {\mathbb {N}}}\) a Galton–Watson branching process with offspring distribution \(\mu _{\mathtt {w}_n}\) and with initial state \(Z^{\mathtt {w}_n}_0 = \lfloor a_n \rfloor \). The following proposition is mainly based on Theorem 3.4 in Grimvall [24] p. 1040, which proves weak convergence of Galton–Watson processes to continuous-state branching processes (CSBP for short). Grimvall’s approach relies on the close relationship between CSBP and Lévy processes. Indeed, according to Lamperti [30], a (conservative) CSBP, which is a \([0, \infty )\)-valued Markov process, can always be represented as a time-changed spectrally positive Lévy process. Thus, the law of the CSBP is completely characterised by the Lévy process, and therefore by its Laplace exponent. This Laplace exponent is usually called the branching mechanism of the CSBP. We refer to Bingham [12] for more details on CSBP (and see Appendix B.2.2 for a very brief account). We denote by \({\mathbf {D}}([0,\infty ) , {\mathbb {R}})\) the space of càdlàg functions from \([0, \infty )\) to \({\mathbb {R}}\) equipped with Skorokhod’s topology and by \({\mathbf {C}}([0,\infty ) , {\mathbb {R}})\) the space of continuous functions from \([0, \infty )\) to \({\mathbb {R}}\), equipped with the topology of uniform convergence on all compact subsets. Recall the above definitions of \(X^{\mathtt {w}_n}\) and \(Z^{\mathtt {w}_n}\).

Proposition 2.1

Let \(a_n , b_n \in (0, \infty ) \) and \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21). Let \((X_t)_{t\in [0, \infty )}\) and \((Z_t)_{t\in [0, \infty )}\) be two càdlàg processes such that \(X_0=0\) and \(Z_0=1\). Then, the following holds true.

  1. (i)

    The following convergences are equivalent:

    1. (i-a)

      There exists \(t \in (0, \infty )\) such that \(\frac{1}{a_n} X^{\mathtt {w}_n}_{b_n t } \rightarrow X_t\) weakly on \({\mathbb {R}}\);

    2. (i-b)

      \((\frac{1}{a_n} X^{\mathtt {w}_n}_{b_n t })_{t\in [0, \infty )} \longrightarrow (X_t)_{t\in [0, \infty )}\) weakly on \({\mathbf {D}}([0,\infty ) , {\mathbb {R}})\);

    3. (i-c)

      \((\frac{1}{a_n} Z^{\mathtt {w}_n}_{\lfloor b_n t/a_n \rfloor })_{t\in [0, \infty )} \longrightarrow (Z_t)_{t\in [0, \infty )}\) weakly on \({\mathbf {D}}([0,\infty ) , {\mathbb {R}})\).

    If any of the three convergences in (i) holds true, then X is a spectrally positive Lévy process and Z a conservative CSBP; moreover there exist \(\alpha \in {\mathbb {R}}\), \(\beta \in [\beta _0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\) such that the branching mechanism of Z and the Laplace exponent of X are equal to the same function \(\psi \) given by

    $$\begin{aligned} \psi (\lambda ) = \alpha \lambda +\frac{_{_1}}{^{^2}} \beta \lambda ^2 + \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j} - 1 + \lambda c_j \big ) \;, \quad \lambda \in [0, \infty ). \end{aligned}$$
    (28)
  2. (ii)

    For all \(n \in {\mathbb {N}}\), we set \(\alpha _{\mathtt {w}_n} = 1 - \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)}\). Then, (i) is equivalent to the following three conditions:

    $$\begin{aligned}&\mathbf {(C1):} \quad b_n \alpha _{\mathtt {w}_n}/ a_n \; \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \alpha \qquad \mathbf {(C2):} \quad \frac{b_n}{a^2_n}\! \cdot \! \frac{\sigma _3 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \; \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \beta + \kappa \sigma _3 ({\mathbf {c}}) \, , \end{aligned}$$
    (29)
    $$\begin{aligned}&\mathbf {(C3):} \quad \text {for each } j \in {\mathbb {N}}^*, \quad \frac{w^{(n)}_j}{a_n } \; \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; c_j \, . \end{aligned}$$
    (30)
  3. (iii)

    For all \(\lambda \in [0, \infty )\), we set

    $$\begin{aligned} \psi _{\mathtt {w}_n} (\lambda ) = \alpha _{\mathtt {w}_n} \lambda + \!\! \sum _{1\le j\le n}\! \! \frac{_{w^{(n)}_j} }{^{\sigma _1 (\mathtt {w}_{n})}} \big ( e^{-\lambda w^{(n)}_j}\! -\! 1 + \lambda w^{(n)}_j \big )\; . \end{aligned}$$
    (31)

    Any of the convergences of (i) is equivalent to \(\mathbf {(C1)}\) and the following limit for all \(\lambda \in (0, \infty )\):

    $$\begin{aligned} b_n \psi _{\mathtt {w}_n} (\lambda / a_n) \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! -\!\! \!\longrightarrow } \; \psi (\lambda ) \; , \end{aligned}$$
    (32)
  4. (iv)

    For all \(\lambda \in [0, \infty )\), set \(\psi ^{-1}(\lambda ) = \inf \{ r \in [0, \infty ): \psi (r) > \lambda \}\), \(\psi ^{-1}_{\mathtt {w}_n}(\lambda ) = \inf \{ r \in [0, \infty ): \psi _{\mathtt {w}_n} (r) > \lambda \}\), \(\varrho = \psi ^{-1} (0)\) and \(\varrho _{\mathtt {w}_n} = \psi ^{-1}_{\mathtt {w}_n} (0)\) that are the largest roots of the convex functions \(\psi \) and \(\psi _{\mathtt {w}_n}\). Then, for all \(\lambda \in [0, \infty )\),

    $$\begin{aligned} \lim _{n\rightarrow \infty } a_n \psi ^{-1}_{\mathtt {w}_n} (\lambda / b_n) = \psi ^{-1} (\lambda )\,. \end{aligned}$$
    (33)

    In particular, \(\lim _{n\rightarrow \infty } a_n \varrho _{\mathtt {w}_n} = \varrho \).

  5. (v)

    For all \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\), there are sequences \(a_n , b_n \in (0, \infty ) \), \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfying (21) with \(\beta _0 \in [0, \beta ]\), \(\mathbf {(C1)}\), \(\mathbf {(C2)}\), \(\mathbf {(C3)}\) and \(\sqrt{{\mathbf {j}}_n}/ b_n \rightarrow 0\) where we recall that \({\mathbf {j}}_n = \max \{ j \ge 1\,: w^{_{(n)}}_{^j} > 0 \}\).

Proof

See Sect. 7 (and more specifically Sect. 7.2). As already mentioned, Proposition 2.1 (i) strongly relies on Theorem 3.4 in Grimvall [24] p. 1040. However, the proof of (ii) and (v) requires arguments tailored to our case where \(\psi \) takes the particular form (28). \(\square \)

Remark 2.1

We will see in Theorem 2.8 that the condition \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\) ensures that the same scaling limit holds even if we rank the connected components with respect to the number of vertices. \(\square \)

Recall the definition of \(H^{\mathtt {w}_n}\) in (6), the height process associated with \(X^{\mathtt {w}_n}\). Note here that we also deal with supercritical cases.

Proposition 2.2

Let \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\). Let \(\psi \) in (9) satisfy (10).

Let X be a spectrally positive Lévy process with Laplace exponent \(\psi \). Let H be its height process as defined in (11). Let \(a_n , b_n \in (0, \infty ) \), \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21) with \(\beta _0 \in [0, \beta ]\), \(\mathbf {(C1)}\), \(\mathbf {(C2)}\) and \(\mathbf {(C3)}\). We also assume

$$\begin{aligned} \mathbf {(C4):} \quad \exists \, \delta \in (0, \infty ) , \qquad \liminf _{n\rightarrow \infty } {\mathbf {P}}\big ( Z^{\mathtt {w}_n}_{\lfloor b_n \delta /a_n \rfloor } = 0 \big ) >0 \; . \end{aligned}$$
(34)

Then,

$$\begin{aligned} \big ( (\frac{_{_1}}{^{^{a_n}}} X^{\mathtt {w}_n}_{b_n t })_{t\in [0, \infty )} , (\frac{_{_{a_n}}}{^{^{b_n}}} H^{\mathtt {w}_n}_{b_n t })_{t\in [0, \infty )} \big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } (X, H) \end{aligned}$$
(35)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}) \times {\mathbf {C}}([0, \infty ), {\mathbb {R}})\) equipped with the product topology. Furthermore, for all \(t \in [0, \infty )\),

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbf {P}}\big ( Z^{\mathtt {w}_n}_{\lfloor b_n t /a_n \rfloor } = 0 \big ) = e^{-v_\psi (t)} \quad \text {where} \quad \int _{v_\psi (t)}^\infty \frac{d\lambda }{\psi (\lambda )}= t . \end{aligned}$$
(36)

Proof

See Sect. 7 (and more specifically Sect. 7.2). Proposition 2.2 strongly relies on Theorem 2.3.1 in Le Gall & D. [19]. However, its proof requires more care than one might expect because \(H^{\mathtt {w}_n}\) is not exactly the height process as defined in [19] (it is actually a time-changed version of the so-called contour process as in Theorem 2.4.1 [19] p. 68). \(\square \)

The following proposition provides a practical criterion to check \(\mathbf {(C4)}\). In particular, it shows that \(\mathbf {(C4)}\) always holds when \(\beta _0 > 0\). It also shows that Proposition 2.2 is never vacuous. Recall that \({\mathbf {j}}_n = \max \{ j \ge 1: w^{_{(n)}}_{^j} > 0\}\).

Proposition 2.3

Let \(\alpha \in {\mathbb {R}}, \beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\). Let \(\psi \) in (9) satisfy (10). The following statements hold true.

  1. (i)

    Let \(a_n , b_n \in (0, \infty ) \), \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21), \(\mathbf {(C1)}\), \(\mathbf {(C2)}\) and \(\mathbf {(C3)}\). Denote by \(\psi _n\) the Laplace exponent of \((\frac{1}{a_n} X^{\mathtt {w}_n}_{b_n t })_{t\in [0, \infty )}\): namely, for all \(\lambda \in [0, \infty )\),

    $$\begin{aligned} \psi _n (\lambda ) = \frac{b_n}{a_n} \Big ( 1-\frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big ) \lambda + \frac{a_n b_n}{\sigma _1 (w_n)} \sum _{j\ge 1} \frac{w_j^{(n)}}{a_n} \Big ( e^{-\lambda w^{(n)}_j /a_n}-1 + \lambda \, w^{(n)}_j /a_n \Big ) . \end{aligned}$$
    (37)

    Then, \(\mathbf {(C4)}\) holds true if

    $$\begin{aligned} \lim _{y\rightarrow \infty } \limsup _{n\rightarrow \infty } \int _y^{a_n} \frac{d\lambda }{\psi _n (\lambda )} = 0 \; . \end{aligned}$$
    (38)

    In particular, if \(\beta _0 > 0\) in (21), then (38) is always satisfied and \(\mathbf {(C4)}\) holds true.

  2. (ii)

    There are sequences \(a_n , b_n \in (0, \infty ) \), \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfying (21) with \(\beta _0 = 0\), \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\), \(\mathbf {(C1)}\), \(\mathbf {(C2)}\) and \(\mathbf {(C3)}\) but not \(\mathbf {(C4)}\).

  3. (iii)

    There are sequences \(a_n , b_n \in (0, \infty )\), \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfying (21) with any \(\beta _0 \in [0, \beta ]\), \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\), \(\mathbf {(C1)}, \mathbf {(C2)}, \mathbf {(C3)}\) and \(\mathbf {(C4)}\).

Proof

See Sects. 7.3.1, 7.3.2 and 7.3.3. \(\square \)

2.2 Convergence of the processes encoding the multiplicative graphs

Let us recall that we generate surplus edges with the help of the sequence of points \({\varvec{\Pi }}_{\mathtt {w}_n}\) introduced in (4). To deal with limits of \(({\varvec{\Pi }}_{\mathtt {w}_n})\), it is convenient to embed \(([0, \infty )^2)^p\) into \(({\mathbb {R}}^2)^{{\mathbb {N}}^*}\) by extending any sequence \(((s_i,t_i))_{1\le i\le p} \in ([0, \infty )^2)^p\) with \((s_i, t_i) = (-1, -1)\), for all \(i > p\). Here, \((-1, -1)\) plays the role of an unspecified cemetery point. We equip \( ({\mathbb {R}}^2)^{{\mathbb {N}}^*}\) with the product topology. Recall the definition of Y in (13) and that of \({\mathcal {H}}\) in (14). Recall the notation of \({\varvec{\Pi }}\) in (18), \((Y^{\mathtt {w}_n}, {\mathcal {H}}^{\mathtt {w}_n})\) in (2), as well as \({\varvec{\Pi }}_{\mathtt {w}_n}\) in (4). Then, the main theorem of the paper is the following:

Theorem 2.4

Let \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\). Let \(\psi \) in (9) satisfy (10). Let \(a_n , b_n \in (0, \infty )\), and \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21), \(\mathbf {(C1)} - \mathbf {(C4)}\) as specified in (29), (30) and (34). Then, the joint convergence

$$\begin{aligned} \big ( \frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n \cdot } , \, \frac{_{_{a_n}}}{^{^{b_n}}} {\mathcal {H}}^{\mathtt {w}_n}_{b_n \cdot } \, , \frac{_{_1}}{^{^{b_n}}} {\varvec{\Pi }}_{ \mathtt {w}_n} \big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ( Y, {\mathcal {H}}, {\varvec{\Pi }}\big ) \end{aligned}$$
(39)

holds weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\, \times {\mathbf {C}}([0, \infty ) , {\mathbb {R}}) \times ({\mathbb {R}}^2)^{{\mathbb {N}}^*} \) equipped with the product topology.

Proof

See Sect. 5.1. Let us mention that we actually prove a joint convergence of all the involved processes such as \(X^{\mathtt {w}_n}, H^{\mathtt {w}_n}, \theta ^{\mathtt {b}, \mathtt {w}_n}\), ... to their continuous counterparts. \(\square \)

Theorem 2.4 implies the convergence of the coding processes of the connected components of \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\), because each connected component of \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\) is encoded by an excursion above 0 of \({\mathcal {H}}^{\mathtt {w}_n}\) and the corresponding pinching points. More precisely, denote by \((l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k)\), \(1 \le k \le {\mathbf {q}}_{\mathtt {w}_n}\), the excursion intervals of \({\mathcal {H}}^{\mathtt {w}_n}\) above 0, that are exactly the excursion intervals of \(Y^{\mathtt {w}_n}\) above its running infimum process \(J^{\mathtt {w}_n}_t = \inf _{s\in [0, t]} Y^{\mathtt {w}_n}_s\). Namely,

$$\begin{aligned} \bigcup _{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} [l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k) = \big \{ t \in [0, \infty ) : {\mathcal {H}}^{\mathtt {w}_n}_t>0 \big \}= \big \{ t \in [0, \infty ) : Y^{\mathtt {w}_n}_t > J^{\mathtt {w}_n}_t \big \} \, . \end{aligned}$$
(40)

Here the indexation is such that \(\zeta ^{\mathtt {w}_n}_k \ge \zeta ^{\mathtt {w}_n}_{k+1}\), where we have set \(\zeta ^{\mathtt {w}_n}_k = r^{\mathtt {w}_n}_k - l^{\mathtt {w}_n}_k\) (if \(\zeta ^{\mathtt {w}_n}_k = \zeta ^{\mathtt {w}_n}_{k+1}\), then we agree on the convention that \(l^{\mathtt {w}_n}_k < l^{\mathtt {w}_n}_{k+1}\)); the excursion processes are then defined as

$$\begin{aligned} {\varvec{\mathtt {H}}}^{\mathtt {w}_n}_k (t) = {\mathcal {H}}^{\mathtt {w}_n}_{(l^{\mathtt {w}_n}_k + t)\wedge r^{\mathtt {w}_n}_k} ,\quad {\forall k \in \{ 1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}, \, \forall t \in [0, \infty )}. \end{aligned}$$
(41)

We next define the sequences of pinching points of the excursions: to that end, recall the definition of \({\varvec{\Pi }}_{\mathtt {w}_n} = \big ( (s_p, t_p)\big )_{1\le p\le {\mathbf {p}}_{\mathtt {w}_n}} \) in (4); \({\varvec{\Pi }}_{\mathtt {w}_n}\) is the sequence of pinching times of \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\); observe that if \(t_p \in [l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k]\), then \(s_p \in [l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k]\); this allows us to define the following for all \(k \in \{ 1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}\):

$$\begin{aligned} {\varvec{\Pi }}_{k}^{\mathtt {w}_n}= & {} \big ( (s^k_p, t^k_p)\big )_{1\le p\le {\mathbf {p}}^{\mathtt {w}_n}_k} \; \text {where } (t^k_p)_{1\le p\le {\mathbf {p}}^{\mathtt {w}_n}_k} \text { increases and where} \nonumber \\&\text { the }(l^{\mathtt {w}_n}_k +s^k_p, l^{\mathtt {w}_n}_k+t^k_p)\text {'s are exactly the terms }(s_{p^\prime } , t_{p^\prime }) \hbox { of } {\varvec{\Pi }}_{\mathtt {w}_n} \text { such that }t_{p^\prime } \in [l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k].\nonumber \\ \end{aligned}$$
(42)

As already specified, we trivially extend each finite sequence \({\varvec{\Pi }}_{k}^{\mathtt {w}_n}\) as a random element of \(({\mathbb {R}}^2)^{{\mathbb {N}}^*}\). We pass to the limit for rescaled versions of \((({\varvec{\mathtt {H}}}_k^{\mathtt {w}_n} , l_k^{\mathtt {w}_n}, r_k^{\mathtt {w}_n}, {\varvec{\Pi }}_k^{\mathtt {w}_n}))_{1\le k \le {\mathbf {q}}_{\mathtt {w}_n}}\). Since \({\mathbf {q}}_{\mathtt {w}_n}\) tends to \(\infty \), it is convenient to extend this sequence by taking \({\varvec{\mathtt {H}}}_k^{\mathtt {w}_n}\) to be the null function, \(l_k^{\mathtt {w}_n} = r_k^{\mathtt {w}_n} = 0 \) and \({\varvec{\Pi }}_k^{\mathtt {w}_n}\) to be the sequence constantly equal to \((-1, -1)\), for all \(k > {\mathbf {q}}_{\mathtt {w}_n}\).
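To make the bookkeeping of (40)–(42) concrete, here is a minimal Python sketch (an illustration only, not used in the proofs): it extracts the maximal intervals on which a sampled nonnegative path is positive, orders them by decreasing length, and reattaches a list of pinching points to the excursions containing them, recentred at the left endpoints as in (42).

```python
def excursions_above_zero(h, dt):
    """h: sampled values h[i] of a nonnegative path at times i*dt.  Returns the
    maximal intervals [l, r) on which the path is positive, sorted by decreasing
    length (ties broken by left endpoint), as in (40)."""
    intervals, start = [], None
    for i, v in enumerate(h + [0.0]):              # the sentinel closes a last excursion
        if v > 0 and start is None:
            start = i * dt
        elif v <= 0 and start is not None:
            intervals.append((start, i * dt))
            start = None
    return sorted(intervals, key=lambda lr: (-(lr[1] - lr[0]), lr[0]))

def pinching_points_by_excursion(intervals, pinch):
    """pinch: pairs (s, t) with s <= t.  Each pair is attached to the excursion
    [l, r) containing t and recentred at l, as in (42)."""
    out = [[] for _ in intervals]
    for (s, t) in pinch:
        for k, (l, r) in enumerate(intervals):
            if l <= t < r:
                out[k].append((s - l, t - l))
                break
    return [sorted(ps, key=lambda st: st[1]) for ps in out]   # t-coordinates increasing
```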

Similarly, recall the definition of the excursion intervals of \({\mathcal {H}}\) above 0 in (15): \(\bigcup _{k\ge 1} (l_k, r_k) = \{ t \in [0, \infty ): {\mathcal {H}}_t > 0 \}\), where indexation is chosen in such a way that the sequence \(\zeta _k:= r_k - l_k\), \(k \ge 1\), decreases. We define the excursion processes \({\varvec{\mathtt {H}}}_{k}\), \(k\ge 1, \) by

$$\begin{aligned} {\varvec{\mathtt {H}}}_{k}(t)= {\mathcal {H}}_{(l_k + t)\wedge r_k} , \end{aligned}$$
(43)

for \(t \in [0, \infty )\). The pinching times are defined as follows: in (17) and (18) recall the definition of \({\varvec{\Pi }}= \big ( (s_p, t_p)\big )_{p\ge 1}\). If \(t_p \in [l_k, r_k]\), then note that \(s_p \in [l_k, r_k]\), by definition of \(s_p\). For all \(k \ge 1\), we set

$$\begin{aligned}&{\varvec{\Pi }}_{k} = \big ( (s^k_p, t^k_p)\big )_{1\le p\le {\mathbf {p}}_k} \; \text {where }(t^k_p)_{1\le p\le {\mathbf {p}}_k}\text { increases and where} \nonumber \\&\quad \quad \text { the }(l_k +s^k_p, l_k+t^k_p)\text {'s are exactly the terms } (s_{p^\prime } , t_{p^\prime })\text { of }{\varvec{\Pi }}\text { such that } t_{p^\prime } \in [l_k, r_k].\nonumber \\ \end{aligned}$$
(44)

Then the following theorem holds true.

Theorem 2.5

Under the same assumptions as in Theorem 2.4, the convergence

$$\begin{aligned} \big (\big ( (\frac{_{_{a_n}}}{^{^{b_n}}} {\varvec{\mathtt {H}}}_k^{\mathtt {w}_n} (b_n t))_{t\in [0, \infty )} , \, \frac{_{_{1}}}{^{^{b_n}}}l_k^{\mathtt {w}_n} , \, \frac{_{_{1}}}{^{^{b_n}}}r_k^{\mathtt {w}_n} , \, \frac{_{_{1}}}{^{^{b_n}}}{\varvec{\Pi }}_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ({\varvec{\mathtt {H}}}_k, l_k, r_k, {\varvec{\Pi }}_k\big ) \big )_{k\ge 1} \end{aligned}$$
(45)

holds weakly on \((({\mathbf {C}}([0, \infty ), {\mathbb {R}}) \times [0, \infty )^2 \times ({\mathbb {R}}^2)^{{\mathbb {N}}^*})^{{\mathbb {N}}^*}\) equipped with the product topology.

Proof

See Sect. 5.2. \(\square \)

2.3 Convergence of the multiplicative graphs

We recall here a generic procedure described in [17] which allows us to extract the \(\mathtt {w}\)-graph \({\varvec{\mathcal {G}}}_{\mathtt {w}}\) from the coding processes \((Y^{\mathtt {w}}, {\mathcal {H}}^{\mathtt {w}}, {\varvec{\Pi }}_{\mathtt {w}})\) and the continuous multiplicative graph from \((Y, {\mathcal {H}}, {\varvec{\Pi }})\). We begin with the encoding of trees by real-valued functions.

Encoding trees Let \(h : [0, \infty ) \rightarrow [0, \infty )\) be a càdlàg function such that

$$\begin{aligned} \zeta _h = \sup \{ t \in [0, \infty ) : h(t) > 0 \}< \infty \; . \end{aligned}$$
(46)

We further assume that one of the following conditions is satisfied:

$$\begin{aligned} \text {either} \quad \text {(a)} \quad h \text { takes finitely many values or } \quad \text {(b)} \quad h \text { is continuous.} \end{aligned}$$
(47)

Note that the (discrete) height process \({\mathcal {H}}^{\mathtt {w}}\) as defined in (2) verifies Condition (a), while in the continuous setting, the process \({\mathcal {H}}\) defined in (14) verifies Condition (b), as asserted by Theorem 4.7 below. For all \(s, t \in [0, \zeta _h)\), we set

$$\begin{aligned} b_h(s,t)= \inf _{\quad r\in [s\wedge t,s\vee t]} h(r) \qquad \mathrm{and} \qquad d_h(s,t)=h(s)+h(t)-2b_h(s,t). \end{aligned}$$
(48)

We readily check that \(d_{h}\) satisfies the four-point inequality: for all \(s_1, s_2, s_3, s_4 \) belonging to \([0, \zeta _h)\), \(d_h(s_1,s_2) + d_h(s_3, s_4) \le \big (d_h(s_1, s_3) + d_h(s_2, s_4)\big ) \vee \big ( d_h(s_1, s_4) + d_h(s_2, s_3) \big ) \). It follows that \(d_h\) is a pseudometric on \([0, \zeta _h)\). We denote by \(s \sim _h t\) the equivalence relation \(d_h(s,t) = 0\) and we set

$$\begin{aligned} T_h= [0, \zeta _h) / \sim _h \; . \end{aligned}$$
(49)

Then, \(d_h\) induces a true metric on the quotient set \(T_h\) that we keep denoting by \(d_h\) and we denote by \(p_h : [0, \zeta _h) \rightarrow T_h \) the canonical projection. Note that if h is continuous, then \(p_h\) is a continuous map. It follows that in that case the metric space \(T_h\) is a compact real tree, namely a compact metric space where any pair of points is joined by a unique injective path that turns out to be a geodesic (see Evans [23] for more references on this topic). If, on the other hand, h satisfies Condition (a) in (47), then \(T_{h}\) is compact but not connected. It is still tree-like, as \(d_{h}\) satisfies the four-point inequality.

We will also need some additional features of the metric space \((T_{h}, d_{h})\), which are defined as follows: a distinguished point \(\rho _h = p_h (0)\), called the root of \(T_h\), and the mass measure \(m_h\), which satisfies that for any Borel measurable function \(f : T_h \rightarrow [0, \infty ) \), we have \(\int _{T_h} f(\sigma ) \, m_h (d\sigma ) = \int _{[0, \zeta _h]} f(p_h(t)) \, dt.\)
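As a concrete illustration of (48)–(49) (a sketch only, under the simplifying assumption that h is sampled on a regular grid of mesh dt), the pseudometric \(d_h\) can be evaluated as follows; two sample times are then identified in \(T_h\) precisely when the value returned is 0.

```python
def b_h(h, dt, s, t):
    """Minimum of the sampled path over [s ∧ t, s ∨ t], cf. (48)."""
    i, j = sorted((int(round(s / dt)), int(round(t / dt))))
    return min(h[i:j + 1])

def d_h(h, dt, s, t):
    """The tree pseudometric d_h(s,t) = h(s) + h(t) - 2 b_h(s,t) of (48)."""
    i, j = int(round(s / dt)), int(round(t / dt))
    return h[i] + h[j] - 2.0 * b_h(h, dt, s, t)

# Example: a tent function on [0, 2] codes a segment of length 1;
# the times 0.5 and 1.5 are glued into a single point of T_h.
dt = 0.01
h = [min(k * dt, 2.0 - k * dt) for k in range(201)]
assert abs(d_h(h, dt, 0.5, 1.5)) < 1e-9
```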

Pinched metric spaces Let \((E, d)\) be a metric space and let \({\varvec{\Pi }}= ((x_i, y_i))_{1\le i\le p}\) where the elements \((x_i, y_i) \in E^2\), \(1 \le i \le p\), are referred to as pinching points. Let \(\varepsilon \in [0, \infty )\), which is interpreted as the length of the edges that are added to E (if \(\varepsilon = 0\), then each \(x_i\) is identified with \(y_i\)). Set \(A_E = \{ (x,y) : x,y \in E\}\) and for all \(e = (x,y) \in A_E\), set \({\underline{e}} = x\) and \({\overline{e}} = y\). A path \(\gamma \) joining x to y is a sequence \(e_1, \ldots , e_q \in A_E\) such that \({\underline{e}}_1 = x\), \({\overline{e}}_q = y\) and \({\overline{e}}_i = {\underline{e}}_{i+1}\), for all \(1 \le i < q\). For all \(e = (x, y) \in A_E\), we then define its length by \(l^\varepsilon _e = \varepsilon \wedge d(x_i, y_i)\) if \((x,y)\) or \((y,x)\) is equal to some \((x_i,y_i) \in {\varvec{\Pi }}\); otherwise we set \(l^\varepsilon _e = d(x, y)\). The length of a path \(\gamma = (e_1, \ldots , e_q)\) is given by \(l_\varepsilon (\gamma ) = \sum _{1\le i\le q} l_{e_i}^{\varepsilon }\), and we set for all \(x,y\in E\):

$$\begin{aligned} d_{{\varvec{\Pi }}, \varepsilon } (x,y)= \inf \big \{ l_\varepsilon (\gamma ) :\; \gamma \text { is a path joining }x\text { to }y \big \} \; . \end{aligned}$$
(50)

We set \(A_{{\varvec{\Pi }}}= \{ (x_i, y_i), (y_i, x_i); 1 \le i \le p \}\) and we easily check that

$$\begin{aligned} d_{{\varvec{\Pi }}, \varepsilon } (x,y)= & {} d(x,y) \wedge \min \big \{ \, l_\varepsilon (\gamma )\; : \; \gamma = (e_0, e^\prime _0, \ldots ,e_{r-1}, e^\prime _{r-1}, e_r), \nonumber \\&\text {a path joining }x\text { to }y\text { such that}\; e_0^\prime , \ldots e^\prime _{r-1} \in A_{{\varvec{\Pi }}} \; \text {and} \; r \le p \big \} . \end{aligned}$$
(51)

Clearly, \(d_{{\varvec{\Pi }}, \varepsilon }\) is a pseudometric and we denote the equivalence relation \(d_{{\varvec{\Pi }}, \varepsilon } (x,y) = 0\) by \(x \equiv _{{\varvec{\Pi }}, \varepsilon } y\); the \(({\varvec{\Pi }}, \varepsilon )\)-pinched metric space associated with \((E, d)\) is then the quotient space \(E/ \equiv _{{\varvec{\Pi }}, \varepsilon }\) equipped with \(d_{{\varvec{\Pi }}, \varepsilon }\). First note that if \((E, d)\) is compact or connected, so is the associated \(({\varvec{\Pi }}, \varepsilon )\)-pinched metric space since the canonical projection \(\varpi _{{\varvec{\Pi }}, \varepsilon } : E \rightarrow E/ \equiv _{{\varvec{\Pi }}, \varepsilon }\) is 1-Lipschitz. Of course when \(\varepsilon > 0\), \(d_{{\varvec{\Pi }}, \varepsilon }\) on E is a true metric, \(E = E/ \equiv _{{\varvec{\Pi }}, \varepsilon }\) and \(\varpi _{{\varvec{\Pi }}, \varepsilon }\) is the identity map on E.
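Formula (51) shows that \(d_{{\varvec{\Pi }}, \varepsilon }(x,y)\) only involves the points x, y and the 2p endpoints of the pinching points, so it can be computed by a finite shortest-path computation. The following Python sketch (an illustration only) does this with a Floyd–Warshall pass over these points; here d is any function evaluating the original metric.

```python
def pinched_distance(d, x, y, pinch, eps):
    """Computes d_{Π,ε}(x, y) of (50)-(51): d is the original metric on E,
    pinch is the list of pinching points (x_i, y_i), eps the added edge length."""
    pts = [x, y] + [z for pair in pinch for z in pair]
    n = len(pts)
    w = [[d(pts[a], pts[b]) for b in range(n)] for a in range(n)]
    for i in range(len(pinch)):            # shorten each pinched pair to eps ∧ d(x_i, y_i)
        a, b = 2 + 2 * i, 3 + 2 * i
        w[a][b] = w[b][a] = min(w[a][b], eps)
    for k in range(n):                     # Floyd-Warshall realises the infimum in (51)
        for a in range(n):
            for b in range(n):
                w[a][b] = min(w[a][b], w[a][k] + w[k][b])
    return w[0][1]

# Example: pinching the two ends of the segment [0, 1] with eps = 0 creates a cycle,
# and the distance between 0.1 and 0.9 drops from 0.8 to 0.2.
print(pinched_distance(lambda u, v: abs(u - v), 0.1, 0.9, [(0.0, 1.0)], 0.0))
```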

Encoding pinched trees Let \(h : [0, \infty ) \rightarrow [0, \infty )\) be a càdlàg function that satisfies (46) and (47); let \(\Pi = ((s_i, t_i))_{1\le i \le p}\) where \(0 \le s_i \le t_i < \zeta _h\), for all \(1 \le i \le p\), and let \(\varepsilon \in [0, \infty ) \). Then, the compact measured metric space encoded by h and the pinching setup \((\Pi , \varepsilon )\), denoted by \(G_{h, \Pi , \varepsilon }\), is the \(({\varvec{\Pi }}, \varepsilon )\)-pinched metric space associated with \((T_h, d_h)\) and the pinching points \({\varvec{\Pi }}= ((p_h (s_i), p_h (t_i)))_{1\le i\le p}\), where \(p_h : [0, \zeta _h ) \rightarrow T_h\) stands for the canonical projection. We shall denote by \(p_{h, \Pi , \varepsilon }\) the composition of the canonical projections \(\varpi _{{\varvec{\Pi }}, \varepsilon } \circ p_h : [0, \zeta _h) \rightarrow G_{h, \Pi , \varepsilon }\); then \(\varrho _{h, \Pi , \varepsilon } = p_{h, \Pi , \varepsilon } (0)\), and \(m_{h, \Pi , \varepsilon }\) stands for the pushforward measure of the Lebesgue measure on \([0, \zeta _h )\) via \(p_{h, \Pi , \varepsilon }\). We shall use the notation

$$\begin{aligned} G (h, \Pi , \varepsilon )= \big ( G_{h, \Pi , \varepsilon } , d_{h, \Pi , \varepsilon }, \varrho _{h, \Pi , \varepsilon } , m_{h, \Pi , \varepsilon } \big ). \end{aligned}$$
(52)

Convergence of metric spaces Let \((G_1, d_1, \rho _1, m_1)\) and \((G_2, d_2, \rho _2, m_2)\) be two pointed compact measured metric spaces. The pointed Gromov-Hausdorff-Prokhorov distance of \(G_1\) and \(G_2\) is then defined by

$$\begin{aligned} {\varvec{\delta }}_{\mathrm {GHP}} (G_1, G_2)= & {} \inf \Big \{ d_{E}^{\text {Haus}} \big (\phi _1 (G_1), \phi _2 (G_2)\big ) \nonumber \\&+ d_E (\phi _1 (\rho _1), \phi _2 (\rho _2)) + {d_{E}^{\text {Prok}}} \big (m_1 \circ \phi ^{-1}_1 , m_2 \circ \phi ^{-1}_2\big ) \Big \}. \end{aligned}$$
(53)

Here, the infimum is taken over all Polish spaces \((E, d_E)\) and all isometric embeddings \(\phi _i: G_i \hookrightarrow E\), \(i \in \{ 1, 2\}\); \(d_{E}^{\text {Haus}}\) stands for the Hausdorff distance on the space of compact subsets of E, \(d_{E}^{\text {Prok}}\) stands for the Prokhorov distance on the space of finite Borel measures on E and for all \(i \in \{ 1, 2\}\), \(m_i \circ \phi ^{-1}_i\) stands for the pushforward measure of \(m_i\) via \(\phi _i\).

We recall Theorem 2.5 in Abraham, Delmas & Hoscheit [1] which asserts the following: \({\varvec{\delta }}_{\mathrm {GHP}} \) is symmetric and it satisfies the triangle inequality; \({\varvec{\delta }}_{\mathrm {GHP}} (G_1, G_2) = 0\) if and only if \(G_1\) and \(G_2\) are isometric, namely if and only if there exists a bijective isometry \(\phi : G_1 \rightarrow G_2\) such that \(\phi (\rho _1) = \rho _2\) and such that \(m_2 = m_1 \circ \phi ^{-1}\). Denote by \({\mathbb {G}}\) the set of isometry classes of pointed compact measured metric spaces. Then, we recall the following result:

Theorem 2.6

(Theorem 2.5 in [1]) \(({\mathbb {G}}, {\varvec{\delta }}_{\mathrm {GHP}})\) is a complete and separable metric space.

In our paper, weak limits are actually proved for the coding functions; these entail \({\varvec{\delta }}_{\mathrm {GHP}}\)-limits, as asserted by the following lemma:

Lemma 2.7

Let \(h, h^\prime : [0, \infty ) \rightarrow [0, \infty )\) be two càdlàg functions such that \(\zeta _h\) and \(\zeta _{h^\prime }\) are finite and that (47) is satisfied. Let \(\Pi = ((s_i, t_i))_{1\le i \le p}\) and \(\Pi ^\prime = ((s^\prime _i, t^\prime _i))_{1\le i \le p}\) be two sequences such that \(0 \le s_i \le t_i < \zeta _h \) and \(0 \le s^\prime _i \le t^\prime _i < \zeta _{h^\prime }\), and let \(\delta \in (0, \infty )\) be such that

$$\begin{aligned} |s_i - s^\prime _i |\le \delta \quad \text {and} \quad |t_i - t^\prime _i |\le \delta , \qquad i \in \{ 1, \ldots , p\} \; . \end{aligned}$$
(54)

Let \(\varepsilon , \varepsilon ^\prime \in [0, \infty )\). Recall the pointed compact measured metric spaces \(G := G(h, \Pi , \varepsilon )\) and \(G^\prime := G(h^\prime , \Pi ^\prime , \varepsilon ^\prime )\) in (52). Then,

$$\begin{aligned} {\varvec{\delta }}_{\mathrm {GHP}} (G, G^\prime ) \le 6 (p+1) \big ( \Vert h - h^\prime \Vert _{\infty } + \omega _{\delta } (h) \big ) + 3p (\varepsilon \vee \varepsilon ^\prime ) + |\zeta _h - \zeta _{h^\prime } |\; , \end{aligned}$$
(55)

where \(\omega _\delta (h) = \max \big \{ |h(s) - h(t) |: s, t \in [0, \infty ), \, |s - t| \le \delta \big \}\) and where \( \Vert \cdot \Vert _{\infty }\) stands for the uniform norm on \([0, \infty )\).

Proof

See Appendix C. The proof is partly adapted from Theorem 2.1 in Le Gall & D. [20], Proposition 2.4 in Abraham, Delmas & Hoscheit [1] and Lemma 21 in Addario-Berry, Goldschmidt & B. [2]. \(\square \)

Limit theorems for multiplicative graphs Recall the definition of the \(\mathtt {w}_n\)-multiplicative graph \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\) in (1). We equip its vertex set with a measure \({\mathbf {m}}_{\mathtt {w}_n} = \sum _{1\le j\le {\mathbf {j}}_n} w^{_{(n)}}_{^j} \delta _{j}\). Recall from (40) that \([l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k)\), \(1 \le k \le {\mathbf {q}}_{\mathtt {w}_n}\), are the excursion intervals above 0 of \({\mathcal {H}}^{\mathtt {w}_n}\); similarly, \(\mathtt {H}^{\mathtt {w}_n}_k (\cdot )\), defined in (41), are the corresponding excursions of \({\mathcal {H}}^{\mathtt {w}_n}\). Recall as well the sets of pinching times \({\varvec{\Pi }}^{\mathtt {w}_n}_k\) in (42). We recall that each excursion \(\mathtt {H}^{\mathtt {w}_n}_k (\cdot )\) corresponds to a connected component \({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}\) of \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\) and we have \({\mathbf {m}}_{\mathtt {w}_n} \big ( {\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n} \big ) = \zeta ^{\mathtt {w}_n}_k = r^{\mathtt {w}_n}_k - l^{\mathtt {w}_n}_k\). Thus, we get

$$\begin{aligned} {\mathbf {m}}^{\mathtt {w}_n} ({\varvec{\mathcal {G}}}_{ 1}^{\mathtt {w}_n}) \ge \cdots \ge {\mathbf {m}}^{\mathtt {w}_n} ({\varvec{\mathcal {G}}}_{ {\mathbf {q}}_{\mathtt {w}_n} }^{\mathtt {w}_n}) . \end{aligned}$$
(56)

Then, \({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}\) is the pinched (measured pointed) metric space encoded by \(({\varvec{\mathtt {H}}}^{\mathtt {w}_n}_k, {\varvec{\Pi }}_k^{ \mathtt {w}_n})\) with edge-length \(\varepsilon = 1\). So

$$\begin{aligned} G ( {\varvec{\mathtt {H}}}^{\mathtt {w}_n}_k, {\varvec{\Pi }}_k^{ \mathtt {w}_n}, 1) \; \, \text {is isometric to} \; \, ({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n} , d^{\mathtt {w}_n}_k, \varrho _k^{\mathtt {w}_n}, {\mathbf {m}}^{\mathtt {w}_n}_k) \;, \end{aligned}$$
(57)

and thus, these objects define the same random element in the space \({\mathbb {G}}\) of the isometry classes of pointed compact measured metric spaces equipped with the Gromov-Hausdorff-Prokhorov distance \({\varvec{\delta }}_{\mathrm {GHP}}\) defined in (53). Here, we have denoted the graph-distance by \( d^{\mathtt {w}_n}_k\), the first vertex explored via the LIFO coding by \(\varrho _k^{\mathtt {w}_n}\) and \({\mathbf {m}}^{\mathtt {w}_n}_k\) stands for the restriction to \({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n} \) of the weight measure \({\mathbf {m}}_{\mathtt {w}_n}\). Since \({\mathbf {q}}_{\mathtt {w}_n}\) tends to \(\infty \), it is convenient to extend the sequence \(({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n})_{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}}\) by taking \({\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}\) equal to the point space equipped with the null measure for all \(k >{\mathbf {q}}_{\mathtt {w}_n}\).

Similarly, recall the excursion intervals \((l_k, r_k)\) and the corresponding excursions \(\mathtt {H}_k (\cdot )\), \(k \ge 1\), of \({\mathcal {H}}\), as well as the set of pinching times \({\varvec{\Pi }}_k\) in (44). Recall the continuous \((\alpha , \beta , {\mathbf {c}}, \kappa )\)-multiplicative graph \({\mathbf {G}} = (({\mathbf {G}}_k, d_k, \varrho _k, {\mathbf {m}}_k ))_{k\ge 1}\) as seen in (19), where for all \(k \ge 1\), \({\mathbf {G}}_k\) is the pinched (measured pointed) metric space encoded by \((\mathtt {H}_k, {\varvec{\Pi }}_k, 0)\), namely,

$$\begin{aligned} {\mathbf {G}}_{ k}:= G ( {\varvec{\mathtt {H}}}_k, {\varvec{\Pi }}_k, 0)\;. \end{aligned}$$
(58)

Then, Theorem 2.5 and Lemma 2.7 entail the following theorem:

Theorem 2.8

Under the same assumptions as in Theorem 2.4, the convergence

$$\begin{aligned} \big (\big ( {\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n} , \frac{_{_{a_n}}}{^{^{b_n}}}d_{k}^{\mathtt {w}_n} , \varrho _k^{\mathtt {w}_n}, \frac{_{_{1}}}{^{^{b_n}}}{\mathbf {m}}_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ( {\mathbf {G}}_{k} , \mathrm {d}_{k}, \varrho _{k} , {\mathbf {m}}_{k} \big ) \big )_{k\ge 1} \end{aligned}$$
(59)

holds weakly on \({\mathbb {G}}^{{\mathbb {N}}^*}\) equipped with the product topology. Denote the counting measure on \( {\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}\) by \(\varvec{\mu }^{\mathtt {w}_n}_k = \sum _{j\in {\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n}} \delta _j\). Then, the convergence

$$\begin{aligned} \big (\big ( {\varvec{\mathcal {G}}}_{ k}^{\mathtt {w}_n} , \frac{_{_{a_n}}}{^{^{b_n}}}d_{k}^{\mathtt {w}_n} , \varrho _k^{\mathtt {w}_n}, \frac{_{_{1}}}{^{^{b_n}}}\varvec{\mu }_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ( {\mathbf {G}}_{k} , \mathrm {d}_{k}, \varrho _{k} , {\mathbf {m}}_{k} \big ) \big )_{k\ge 1} \end{aligned}$$
(60)

holds weakly on \({\mathbb {G}}^{{\mathbb {N}}^*}\) equipped with the product topology.

Recall the notation \({\mathbf {j}}_n = \max \{ j \ge 1: w^{_{(n)}}_{^j} > 0\}\). If we furthermore assume that \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\), then (60) holds when the connected components are listed in the decreasing order of their numbers of vertices, namely, when \( \varvec{\mu }_1^{\mathtt {w}_n} \big ( {\varvec{\mathcal {G}}}_{ 1}^{\mathtt {w}_n}\big ) \ge \cdots \ge \varvec{\mu }_{q_{\mathtt {w}_n}}^{\mathtt {w}_n} \big ( {\varvec{\mathcal {G}}}_{ q_{\mathtt {w}_n}}^{\mathtt {w}_n}\big )\).

Proof

See Sect. 5.3. \(\square \)

Remark 2.2

The assumption \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\) may not be optimal for (60) to hold when the connected components are listed in the decreasing order of their numbers of vertices. However, for all \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\) satisfying (10), this statement is never vacuous since the examples of \((a_n, b_n, \mathtt {w}_n)\) provided in Proposition 2.3 (iii) satisfy \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\). Moreover, let us mention that all the cases that have been considered previously by other authors satisfy this assumption, as pointed out in Sect. 2.4. \(\square \)

Remark 2.3

Theorem 2.8 holds true under the assumption \(\kappa > 0\). When \(\kappa = 0\), the processes encoding the graphs may converge as in (39) for a wider class of branching mechanisms (see Theorem 7.1). In these cases, however, it turns out that the components that are explored are the exceptionally small ones and they are all trees. \(\square \)

2.4 Connections with previous results

Entrance boundary of the multiplicative coalescent The model of \(\mathtt {w}\)-multiplicative random graphs appears in the work of Aldous [3] as an extension of Erdős–Rényi random graphs that have close connections with multiplicative coalescent processes. Relying upon this connection, Aldous and Limic determine in [4] the extremal eternal versions of the multiplicative coalescent in terms of the excursion lengths of Lévy-type processes Y (up to rescaling, as explained below); to that end, they consider in Proposition 7 [4] asymptotics of the masses of the connected components of sequences of multiplicative random graphs. The asymptotic regime in Proposition 7 [4] is very close to Assumptions (21) and (\(\mathbf {C1}\)) – (\(\mathbf {C3}\)) in our Theorem 2.8.

Let us briefly recall Proposition 7 in [4] since it is used in the proof of Theorem 2.8. Aldous & Limic fix a sequence of weights \(\mathtt {x}_n \in {\ell }^{_{\, \downarrow }}_{ f}\), \(n \in {\mathbb {N}}\), and their notation for multiplicative graphs is the following: let \((\xi _{i,j})_{j>i\ge 1}\) be an array of independent and exponentially distributed r.v. with unit mean; let \(N(\mathtt {x}_n) = \max \{ j \ge 1: x^{_{(n)}}_{^j} > 0\}\); then for all \(q \in [0, \infty )\), Aldous & Limic consider the random graph \(G_n (q)\) whose set of vertices is \({\mathscr {V}}(G_n (q)) = \{ 1, \ldots , N(\mathtt {x}_n)\}\) and whose set of edges \({\mathscr {E}}(G_n (q))\) is such that \(\{ i,j\} \in {\mathscr {E}}(G_n (q))\) if and only if \(\xi _{i,j} \le q x^{_{(n)}}_{^i} x^{_{(n)}}_{^j} \); the multiplicative graph \(G_n(q)\) is equipped with the measure \(m_{n} = \sum _{j\ge 1} x^{_{(n)}}_{^j} \delta _j\); let \(\zeta _1 (\mathtt {x}_n , q) \ge \cdots \ge \zeta _k (\mathtt {x}_n, q) \ge \cdots \) stand for the (eventually null) sequence of the \(m_n\)-masses of the connected components of \(G_n (q)\). Then, it is easy to check that \({\mathbf {X}}_n : q \mapsto ( \zeta _k (\mathtt {x}_n, q))_{k\ge 1}\) is a multiplicative coalescent process with finite support. Aldous & Limic describe the limit of the processes \({\mathbf {X}}_n\) in terms of the excursion-lengths of a process \((W^{\kappa _{\text {AL}}, -\tau _{\text {AL}}, {\mathbf {c}}_{\text {AL}} }_s)_{s\in [0, \infty )}\) whose law is characterized by three parameters: \(\kappa _{\text {AL}} \in [0, \infty )\), \(\tau _{\text {AL}} \in {\mathbb {R}}\) and \( {\mathbf {c}}_{\text {AL}} \in {\ell }^{_{\, \downarrow }}_3\); this process is connected to the \((\alpha , \beta , \kappa , {\mathbf {c}})\)-process Y defined in (13) as follows: for \(s \in [0, \infty )\),

$$\begin{aligned} W^{\kappa _{\text {AL}}, -\tau _{\text {AL}}, {\mathbf {c}}_{\text {AL}} }_s = Y_{s/ \kappa }, \; \text {where} \quad \kappa _{\text {AL}} = \frac{\beta }{\kappa }, \quad \tau _{\text {AL}}= \frac{\alpha }{\kappa } \quad \text {and} \quad {\mathbf {c}}_{\text {AL}}={\mathbf {c}}. \end{aligned}$$
(61)

Proposition 7 [4] assumes that

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{\sigma _3 (\mathtt {x}_n)}{(\sigma _2 (\mathtt {x}_n))^3} = \kappa _{\text {AL}}+ \sigma _3 ({\mathbf {c}}_{\text {AL}}), \quad \lim _{n\rightarrow \infty } \frac{x^{_{(n)}}_{^j}}{\sigma _2 (\mathtt {x}_n)} = c^{\text {AL}}_j \; \text { for all } j \in {\mathbb {N}}^* \quad \text {and} \quad \lim _{n\rightarrow \infty }\sigma _2 (\mathtt {x}_n) = 0 , \end{aligned}$$
(62)

and asserts that for all \(\tau _{\text {AL}} \in {\mathbb {R}}\), \({\mathbf {X}}_n ( \sigma _2 (\mathtt {x}_n)^{-1} - \tau _{\text {AL}})\rightarrow (\zeta _k)_{k\ge 1}\), weakly in \({\ell }^{_{\, \downarrow }}_2\), where \((\zeta _k)_{k\ge 1}\) are the excursion-lengths of \( W^{\kappa _{\text {AL}}, -\tau _{\text {AL}}, {\mathbf {c}}_{\text {AL}} }\) above its running infimum, listed in the decreasing order.

The assumptions in (62) are close to (\(\mathbf {C2}\)) and (\(\mathbf {C3}\)). More precisely, let \((\alpha , \beta , \kappa , {\mathbf {c}})\) be connected with \(\kappa _{\text {AL}}\), \(\tau _{\text {AL}}\) and \({\mathbf {c}}_{\text {AL}}\) as in (61); let \(a_n , b_n \in (0, \infty )\) and \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\) satisfy (21) and (\(\mathbf {C1}\)) – (\(\mathbf {C3}\)); then, set

$$\begin{aligned}\forall j \in {\mathbb {N}}^*, \quad x^{_{(n)}}_{^j}= \frac{\kappa w^{_{(n)}}_{^j} }{{b_n}} \quad \text {and} \quad \tau ^n_{\text {AL}}= \frac{b^2_n }{\kappa ^2 \sigma _2 (\mathtt {w}_n)} \Big (1 - \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big ) \underset{n\rightarrow \infty }{-\!\!\!-\!\!\! -\!\!\! \longrightarrow }\; \frac{\alpha }{\kappa } = \tau _{\text {AL}} . \end{aligned}$$

Recall that \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\) is the \(\mathtt {w}_n\)-multiplicative graph in (1). Recall that \({\mathbf {m}}_{\mathtt {w}_n} = \sum _{j\ge 1} w^{_{(n)}}_{^j} \delta _j\) and \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{ k}\), \(k\ge 1\), stand for the connected components of \({\varvec{\mathcal {G}}}_{ \mathtt {w}_n}\) listed in the nonincreasing order of their \({\mathbf {m}}_{\mathtt {w}_n}\)-mass. Then, it is easy to check that

$$\begin{aligned} G_n \big (\sigma _2 (\mathtt {x}_n)^{-1} - \tau ^n_{\text {AL}} \big ) {\overset{\text {(law)}}{=}} {\varvec{\mathcal {G}}}_{ \mathtt {w}_n} \quad \text {and} \quad \zeta _k \big (\mathtt {x}_n\, , \, \sigma _2 (\mathtt {x}_n)^{-1} - \tau ^n_{\text {AL}} \big ) {\overset{\text {(law)}}{=}} \frac{\kappa }{b_n} {\mathbf {m}}_{\mathtt {w}_n} \big ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{ k} \big )=: \kappa \zeta _k^{n} . \end{aligned}$$
(63)
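For the reader's convenience, here is the short computation behind the first identity in (63) (it is only the 'easy check' alluded to above): by definition of \(\mathtt {x}_n\) and \(\tau ^n_{\text {AL}}\), \(\sigma _2 (\mathtt {x}_n) = \kappa ^2 \sigma _2 (\mathtt {w}_n)/b_n^2\), so that with \(q_n = \sigma _2 (\mathtt {x}_n)^{-1} - \tau ^n_{\text {AL}}\),

$$\begin{aligned} q_n \, x^{_{(n)}}_{^i} x^{_{(n)}}_{^j} = \frac{b_n^2}{\kappa ^2 \sigma _2 (\mathtt {w}_n)} \cdot \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \cdot \frac{\kappa ^2 w^{_{(n)}}_{^i} w^{_{(n)}}_{^j}}{b_n^2} = \frac{w^{_{(n)}}_{^i} w^{_{(n)}}_{^j}}{\sigma _1 (\mathtt {w}_n)} \, , \end{aligned}$$

hence \({\mathbf {P}}\big (\xi _{i,j} \le q_n x^{_{(n)}}_{^i} x^{_{(n)}}_{^j}\big ) = 1 - \exp \big ( - w^{_{(n)}}_{^i} w^{_{(n)}}_{^j} / \sigma _1 (\mathtt {w}_n)\big )\), which is exactly the edge probability (1) of \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\).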

Note that the \(\zeta ^{n}_{k}\) are the excursion-lengths of \((\frac{1}{{a_n}} Y^{\mathtt {w}_n}_{ b_n t })_{t\in [0, \infty )}\) above its running infimum. Recall the definition of \(Y^{\mathtt {w}_n}\) (resp. of Y) in (2) (resp. in (13)). Since \(\tau ^n_{\text {AL}} \rightarrow \alpha / \kappa \) and since multiplicative coalescent processes have no fixed time of discontinuity, Proposition 7 in [4] immediately entails the following proposition, which is used in Sect. 5.2 to prove Theorems 2.5 and 2.8:

Proposition 2.9

(Proposition 7 [4]) Let \(a_n , b_n \in (0, \infty )\) and \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\) satisfy (21) and \((\mathbf {C1})\)\((\mathbf {C3})\), with \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}} \in {\ell }^{_{\, \downarrow }}_3\). Let \((\zeta ^n_k)_{1\le k \le {\mathbf {q}}_{\mathtt {w}_n}}\) (resp. \((\zeta _k)_{k\ge 1}\)) be the excursion-lengths of \((\frac{1}{{a_n}} Y^{\mathtt {w}_n}_{ b_n t })_{t\in [0, \infty )}\) (resp. of Y) above its running infimum. Then,

$$\begin{aligned} \big ( \zeta ^n_k\big )_{1\le k \le {\mathbf {q}}_{\mathtt {w}_n}} \overset{\hbox { weakly in}\ {\ell }^{_{\, \downarrow }}_2}{\underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow }}\; (\zeta _k)_{k\ge 1} . \end{aligned}$$
(64)

Limits of Erdős–Rényi graphs in the critical window The first result proving metric convergence in a strong Hausdorff sense of rescaled Erdős–Rényi graphs and their inhomogeneous extensions is due to Addario-Berry, Goldschmidt & B. in [2]. In this paper, the authors study the scaling limits of the largest components of the Erdős–Rényi random graph \(G(n, p_n)\) in the critical window \(p_n = n^{-1} - \alpha n^{-4/3}\), with \(\alpha \in {\mathbb {R}}\), which corresponds to the graph \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\) where \(w^{_{(n)}}_j = \mathbf{1}_{\{ j\le n \}} n\log (\frac{1}{1 - p_n})\), \(j \ge 1\). Taking \(a_n = n^{1/3}\) and \(b_n = n^{2/3}\), we immediately see that \(a_n , b_n\) and \(\mathtt {w}_n\) satisfy (21) with \(\kappa = \beta _0 = 1\), \((\mathbf {C1})\), \((\mathbf {C2})\), \((\mathbf {C3})\) and \(\sqrt{{\mathbf {j}}_n}/b_n = n^{-1/6} \rightarrow 0\), with the parameters \(\alpha \in {\mathbb {R}}\), \(\beta = 1\) and \({\mathbf {c}} = 0\). Namely, the branching mechanism is \(\psi (\lambda ) = \alpha \lambda + \frac{_1}{^2} \lambda ^2\). Since \(\beta _0 > 0\), Proposition 2.3 (i) implies that \((\mathbf {C4})\) is automatically satisfied and Theorem 2.8 applies: in this case, Theorem 2.8 is a weaker version of Theorem 2 in Addario-Berry, Goldschmidt & B. [2], p. 369: the result in [2] actually provides precise estimates on the sizes of the metric components. Let us mention that [2] also contains tail estimates on the diameters of the small components. Such estimates seem difficult to obtain in the case of general \(\mathtt {w}_n\).

Multiplicative graphs in the same basin of attraction as Erdős–Rényi graphs Bhamidi, van der Hofstad & van Leeuwaarden in [10] prove the scaling limit of the component sizes (number of vertices) for examples of multiplicative graphs which behave asymptotically like the Erdős–Rényi graphs. Bhamidi, Sen & X. Wang in [8] and Bhamidi, Sen, X. Wang & B. in [7] investigate instead the scaling limits of these graphs seen as measured metric spaces. The conditions under which these authors prove their limit theorems slightly differ. We give here a detailed account of these conditions so as to make a connection with our results. In all the cases covered by [7, 8, 10], the scalings appear to be \(a_n = n^{1/3}\), \(b_n = n^{2/3}\) and \(\mathtt {w}_n\) is a sequence of length n having the following asymptotic behaviour:

$$\begin{aligned} \frac{w^{(n)}_1}{n^{1/3}}\rightarrow 0, \quad \exists \, \sigma , \sigma ^\prime \in (0, \infty ) : \; \sigma _i (\mathtt {w}_n) = n \sigma + o(n^{2/3}), \; i \in \{ 1, 2\} , \quad \text {and} \quad \sigma _3 (\mathtt {w}_n) = n \sigma ^\prime + o(n). \end{aligned}$$
(65)

For all \(\alpha \in {\mathbb {R}}\), set

$$\begin{aligned} \mathtt {w}_n (\alpha ) = \big (1 - \alpha n^{-\frac{1}{3}} \big ) \mathtt {w}_n = \big ( \big (1 - \alpha n^{-\frac{1}{3}} \big ) w^{_{(n)}}_j \big )_{j\ge 1} \; .\end{aligned}$$

This is a situation covered by Theorem 2.8. Indeed, (65) easily implies that \(a_n , b_n , \mathtt {w}_n(\alpha )\) satisfy (21), \((\mathbf {C1})\), \((\mathbf {C2})\), \((\mathbf {C3})\), \(\sqrt{\mathbf {j_n}}/b_n = n^{-1/6} \rightarrow 0\), with the parameters \(\alpha \in {\mathbb {R}}\), \(\beta _0 = 1\), \(\beta = \sigma ^\prime / \sigma \), \(\kappa = 1/ \sigma \) and \({\mathbf {c}} = 0\). Thus, the branching mechanism is \(\psi (\lambda ) = \alpha \lambda + \frac{1}{2} \frac{\sigma ^\prime }{\sigma } \lambda ^2\). Since \(\beta _0 = 1\), Proposition 2.3 (i) implies \((\mathbf {C4})\). Then, Theorem 2.8 applies in this setting, which allows us to extend

  • Theorem 1.1 in [10], which has been proved under the supplementary assumption (Assumption (b) there) that there exists a r.v. \(W: \Omega \rightarrow [0, \infty )\) such that

    $$\begin{aligned} \tfrac{1}{n} \sum _{i}{\mathbf {1}}_{\{w^{(n)}_{i}\le x\}} \rightarrow \mathbf{P}(W\le x) \quad \text {for all } x\ge 0, \qquad \sigma = \mathbf{E}[W]={\mathbf {E}}[W^{2}] \quad \text {and} \quad \sigma '={\mathbf {E}}[W^{3}]. \end{aligned}$$
  • Theorem 3.3 in Bhamidi, Sen & X. Wang in [8] that has been proved by quite different methods and under two additional technical assumptions (Assumptions 3.1 (c) and (d)).

Turova in [36] also proved a result similar to Theorem 1.1 of [10] for i.i.d. random weight sequences. Let us mention that the convergence under the sole assumption (65), which we prove here, was conjectured by Bhamidi, Sen and X. Wang in [8], Section 5, part (c).

Gromov–Prokhorov convergence of multiplicative graphs without Brownian component In light of the above-mentioned result of Aldous & Limic [4] on the convergence of the component masses of the multiplicative graph in the asymptotic regime (62), it is natural to expect that the graph itself should also converge in some sense. The first result in this direction is due to Bhamidi, van der Hofstad and Sen, who prove the following in [9]: denote by \({\mathscr {C}}_i(q)\) the i-th largest (in \(m_n\)-mass) connected component of \(G_n(q)\), that is, \(m_n({\mathscr {C}}_i(q))=\zeta _i(\mathtt {x}_n, q)\). Equipping each component \(\mathscr {C}_i(-\tau _{\text {AL}}+\sigma _2(\mathtt {x}_n)^{-1})\) with its graph distance rescaled by \(\sigma _2(\mathtt {x}_n)\) and with the mass measure \(m_n\), they prove that, under (62) with \(\kappa _{\mathrm {AL}}=0\), the collection of rescaled metric spaces converges in the Gromov–Prokhorov topology to a collection of measured metric spaces, which are not necessarily compact. They also give an explicit construction of the limit spaces based upon a model of continuum random tree called the ICRT. The Gromov–Prokhorov convergence is equivalent to the convergence of the mutual distances of an i.i.d. sequence of points with law \(m_n\), which is weaker than the Gromov–Hausdorff–Prokhorov convergence that we obtain in Theorem 2.8 under the compactness assumption \(\int ^\infty d\lambda / \psi (\lambda )<\infty \). Our approach via coding processes is quite distinct from that of Bhamidi, van der Hofstad & Sen in [9].

Power-law cases We extend the power-law cases investigated in Bhamidi, van der Hofstad & van Leeuwaarden [11] and Bhamidi, van der Hofstad & Sen [9]. Let \(W : \Omega \rightarrow [0, \infty )\) be a r.v. such that

$$\begin{aligned} r= {\mathbf {E}}[W] = {\mathbf {E}}[ W^2] < \infty \quad \text {and} \quad {\mathbf {P}}(W \ge x) = x^{-\rho } L(x), \end{aligned}$$
(66)

where \(\rho \in (2, 3)\) (in the notation of [9], \(\tau = \rho +1 \in (3, 4)\)) and where L is slowly varying at \(\infty \). We then set for all \(y \in [0, \infty )\),

$$\begin{aligned} G(y) = \sup \big \{ x \in [0, \infty ) : {\mathbf {P}}(W \ge x) \ge 1 \wedge y \big \} . \end{aligned}$$
(67)

Note that \(G(y) = 0\) for all \(y \in [1, \infty )\) and that \(G(y) = y^{-1/\rho } \, \ell (y)\), where \(\ell \) is slowly varying at 0. We assume that for each \(n \in {\mathbb {N}}^*\) we have

$$\begin{aligned} {\mathbf {P}}\big ( W = G(1/n) \big )= 0\; . \end{aligned}$$
(68)

Let \(\kappa , q \in (0, \infty )\) and let \(a_n, b_n , \mathtt {w}_n\) be such that

$$\begin{aligned} a_n \underset{{n\rightarrow \infty }}{\sim } q^{-1} G(1/n), \quad \forall \, j \ge 1, \; \, w_j^{(n)} = G(j/n), \quad b_n \underset{{n\rightarrow \infty }}{\sim } \kappa \sigma _1 (\mathtt {w}_n) /a_n \; . \end{aligned}$$
(69)

Then, the following lemma holds true.

Lemma 2.10

We keep the notation from above and we assume (68). Then \(a_n \sim q^{-1}n^{\frac{1}{\rho }} \ell (1/n)\), \(b_n \sim q\kappa \, n^{1-\frac{1}{^\rho }} / \ell (1/n)\) and \(a_n , b_n \) and \(\mathtt {w}_n\) satisfy (21) with \(\beta _0 = 0\) and \(\sqrt{{\mathbf {j}}_n}/b_n \sim \frac{\ell (1/n)}{q\kappa } n^{\frac{1}{^\rho }-\frac{1}{2}} \rightarrow 0\). Next, for all integers \(j \ge 1\) and for all \(\alpha \in {\mathbb {R}}\), set

$$\begin{aligned} w^{(n)}_j (\alpha ) = \big (1 - \frac{_{a_n}}{^{b_n}} (\alpha - \alpha _0) \big ) w^{(n)}_j , \; \text {where} \quad \alpha _0 = 2\kappa q^2 \Big ( \int _0^1 y \{ y^{-\rho }\} \, dy + \frac{_1}{^{\rho - 2}} \Big ), \end{aligned}$$
(70)

and where \(\{ \cdot \} \) stands for the fractional part function. Then, \(a_n , b_n , \mathtt {w}_n(\alpha )\) satisfy (21), \((\mathbf {C1})\)\((\mathbf {C4})\) and \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\), with the parameters \(\alpha \in {\mathbb {R}}\), \(\kappa \in (0, \infty )\), \(\beta = \beta _0 = 0\) and \(c_j = q \, j^{-\frac{1}{\rho }}\), for all \(j \ge 1\).

Proof

See Sect. 8. \(\square \)
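To make the setup (66)–(69) concrete, here is a small Python sketch (an illustration only, under the extra assumption that W is exactly Pareto, i.e. L is constant, with \(x_{\min } = (\rho -2)/(\rho -1)\) so that \({\mathbf {E}}[W] = {\mathbf {E}}[W^2]\); assumption (68) then clearly holds). In this special case \(w^{(n)}_j / a_n = q\, j^{-1/\rho }\) exactly, which matches the limit \(c_j\) of Lemma 2.10.

```python
def power_law_weights(n, rho, q=1.0, kappa=1.0):
    """Weights and scaling constants of (67)-(69) when P(W >= x) = (x/x_min)^(-rho)
    for x >= x_min, i.e. L is constant (a special case of (66))."""
    x_min = (rho - 2.0) / (rho - 1.0)          # makes E[W] = E[W^2] for the Pareto law
    G = lambda y: x_min * y ** (-1.0 / rho)    # tail quantile (67), used on (0, 1]
    w = [G(j / n) for j in range(1, n + 1)]    # w_j^{(n)} = G(j/n), nonincreasing
    a_n = G(1.0 / n) / q                       # a_n ~ q^{-1} G(1/n)
    b_n = kappa * sum(w) / a_n                 # b_n ~ kappa * sigma_1(w_n) / a_n
    return a_n, b_n, w

a_n, b_n, w = power_law_weights(n=10_000, rho=2.5)
print([round(w[j - 1] / a_n, 4) for j in (1, 2, 3)])   # equals q * j^{-1/rho}, cf. Lemma 2.10
print([round(j ** (-1 / 2.5), 4) for j in (1, 2, 3)])
```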

Lemma 2.10 implies that Theorem 2.8 applies to \(a_n, b_n\) and \(\mathtt {w}_n (\alpha )\) as defined above. This extends Theorem 1.1 in Bhamidi, van der Hofstad & van Leeuwaarden [11] that proves the convergence of the component sizes under the more restrictive assumption that \(L(x) = x^\rho {\mathbf {P}}(W \ge x) \rightarrow c_F \in (0, \infty )\) as \(x \rightarrow \infty \) (see (1.6) in [11]) as well as Theorem 1.2 in Bhamidi, van der Hofstad & Sen [9] (Section 1.1.3) that asserts the convergence of the components as measured metric spaces under the supplementary assumptions that \({\mathbf {P}}(W \in dx) = f(x) dx\), where f is a continuous function whose support is of the form \([\varepsilon , \infty )\) with \(\varepsilon > 0\), and such that \(x \in [\varepsilon , \infty ) \mapsto xf(x)\) is nonincreasing (see Assumption 1.1 in [9], Section 1.1.3). Again, the proofs in [11] and in [9] are quite different from ours.

Let us also mention that a solution to Conjecture 1.3 on fractal dimensions of the components of \({\mathbf {G}}\), stated right after Theorem 1.2 in [9], is given in the companion paper [17], Proposition 2.7.

General inhomogeneous Erdős–Rényi graphs that are close to being multiplicative In [28], Janson investigates strong asymptotic equivalence of general inhomogeneous Erdős–Rényi graphs that are defined as follows: denote by P the set of arrays \({\mathbf {p}} = (p_{{i,j}})_{j>i\ge 1}\) of real numbers in [0, 1] such that \(N({\mathbf {p}}) = \sup \{ j \ge 2 : \sum _{1\le i< j} p_{i,j} > 0 \} < \infty \); the \({\mathbf {p}}\)-inhomogeneous Erdős–Rényi graph \(G({\mathbf {p}})\) is the random graph whose set of vertices is \(\{ 1, \ldots , N({\mathbf {p}})\}\) and whose random set of edges \({\mathscr {E}}(G({\mathbf {p}}))\) is such that the r.v. \((\mathbf{1}_{\{ \{ i,j\} \in {\mathscr {E}}(G({\mathbf {p}}))\}})_{1 \le i < j \le N({\mathbf {p}})}\) are independent and such that \({\mathbf {P}}(\{ i,j\} \in {\mathscr {E}}(G({\mathbf {p}}))) = p_{i,j}\). The asymptotic equivalence is measured through the following function \(\rho \), defined for all \(p, q \in [0, 1]\) by \(\rho (p, q) = (\sqrt{p} - \sqrt{q})^2 + (\sqrt{1 - p} - \sqrt{1 - q})^2\). More precisely, let \({\mathbf {p}}_n, {\mathbf {q}}_n \in P\), \(n \in {\mathbb {N}}\); then Theorem 2.2 in Janson [28] implies that there are couplings of \(G({\mathbf {p}}_n)\) and \(G({\mathbf {q}}_n)\) such that \(\lim _{n\rightarrow \infty } {\mathbf {P}}(G({\mathbf {p}}_n) \ne G({\mathbf {q}}_n)) = 0\) if and only if

$$\begin{aligned} \lim _{n\rightarrow \infty } \sum _{j>i\ge 1} \rho (p^{_{(n)}}_{^{i,j}} , q^{_{(n)}}_{^{i,j}}) = 0 \; . \end{aligned}$$
(71)

We then apply this result as follows: let \(a_n , b_n \in (0, \infty )\) and \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_{ f}\), \(n \in {\mathbb {N}}\), satisfy the assumptions of Theorem 2.8. For \(j > i \ge 1\), we set

$$\begin{aligned} p^{_{(n)}}_{^{i,j}} = \frac{ w^{_{(n)}}_{^{i}} w^{_{(n)}}_{^{j}}}{\sigma _1 (\mathtt {w}_n)} \quad \text {and} \quad u^{_{(n)}}_{^{i,j}}= \left\{ \begin{array}{ll} \frac{q^{_{(n)}}_{^{i,j}} }{p^{_{(n)}}_{^{i,j}} } - 1\, , &{} \text {if} \; p^{_{(n)}}_{^{i,j}} > 0 \\ 0\, , &{} \text {if} \; p^{_{(n)}}_{^{i,j}} = 0 . \end{array}\right. \end{aligned}$$
(72)

First note that \(\max _{j>i\ge 1} p^{_{(n)}}_{^{i,j}} = O (( w^{_{(n)}}_{^{1}} / a_n)^2 a_n / b_n) \rightarrow 0\) by (21); next, as proved in Janson [28] (2.5) p. 30, if \(p \le 0.9\), then \(\rho (p, q) \asymp |p - q| \big (1 \wedge |q/p - 1| \big )\). Thus, (71) is equivalent to

$$\begin{aligned} \lim _{n\rightarrow \infty } \sum _{j>i\ge 1} p^{_{(n)}}_{^{i,j}} |u^{_{(n)}}_{^{i,j}}| \big (1 \wedge |u^{_{(n)}}_{^{i,j}}| \big ) = 0 ,\quad \text {with the convention} \; p^{_{(n)}}_{^{i,j}} |u^{_{(n)}}_{^{i,j}}| = q^{_{(n)}}_{^{i,j}} \; \text {if} \; p^{_{(n)}}_{^{i,j}}= 0.\nonumber \\ \end{aligned}$$
(73)

In particular, let \(h: [0, 1] \rightarrow [0, 1]\) be such that \(h(x) = x + O (x^2)\). If we set \(q^{_{(n)}}_{^{i,j}} = h( p^{_{(n)}}_{^{i,j}})\), then there exists \(C \in (0, \infty )\) such that \(|u^{_{(n)}}_{^{i,j}}| \le C p^{_{(n)}}_{^{i,j}}\). In this case, for all sufficiently large n,

$$\begin{aligned} \sum _{j>i\ge 1} p^{_{(n)}}_{^{i,j}} |u^{_{(n)}}_{^{i,j}}| \big (1 \wedge |u^{_{(n)}}_{^{i,j}}| \big ) \le C^2 \sum _{j>i\ge 1} (p^{_{(n)}}_{^{i,j}})^3\le C^2 \frac{\sigma _3 (\mathtt {w}_n)^2}{\sigma _1 (\mathtt {w}_n)^3} \sim C^\prime (a_n/b_n)^3 \longrightarrow 0\end{aligned}$$

by (\(\mathbf {C2}\)) and (21). Cases where \(h(x) = 1 \wedge x\) have been studied by Chung & Lu [18] and van den Esker, van der Hofstad & Hooghiemstra [37]; the case where \(h(x) = 1 - e^{-x}\) was first studied by Aldous [3] and Aldous & Limic [4], and then in [2, 7,8,9,10,11, 34]; cases where \(h(x) = x/ (1+ x)\) have been investigated by Britton, Deijfen & Martin-Löf [16]. To summarise, Janson's Theorem 2.2 [28], p. 31, combined with Theorem 2.8, implies the following result.

Theorem 2.11

(Theorem 2.2 in Janson [28]) Assume that \(a_n , b_n, \mathtt {w}_n\) satisfy the same assumptions as in Theorem 2.4 (and thus as in Theorem 2.8). We furthermore assume that \(\sqrt{{\mathbf {j}}_n}/b_n \rightarrow 0\). We define \({\mathbf {p}}_n\) by (72). Let \({\mathbf {q}}_n \in P\). We define \((u^{_{(n)}}_{^{i,j}})_{j>i\ge 1}\) by (72) and we suppose (73). Then, there exist couplings of \(G({\mathbf {q}}_n)\) and \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbf {P}}({\varvec{\mathcal {G}}}_{\mathtt {w}_n} \ne \! G({\mathbf {q}}_n)) = 0 \end{aligned}$$
(74)

and the weak limit (60) in Theorem 2.8 holds true in the same scaling for the connected components of \(G({\mathbf {q}}_n)\) that are listed in the decreasing order of their numbers of vertices and that are equipped with the graph distance and the counting measure. In particular, it holds true when \(q^{_{(n)}}_{^{i,j}} = h(p^{_{(n)}}_{^{i,j}})\), \(j > i \ge \! 1\), for all functions \(h\! : \! [0, 1] \! \rightarrow \! [0, 1]\) such that \(h(x) = x + O (x^2)\).
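Condition (73) is straightforward to evaluate numerically for a given finite weight sequence. The following Python sketch (an illustration only, not part of the statement) does so for the choice \(q^{_{(n)}}_{^{i,j}} = h(p^{_{(n)}}_{^{i,j}})\) with \(h(x) = 1 - e^{-x}\), i.e. when \(G({\mathbf {q}}_n)\) has the edge probabilities (1) of the multiplicative graph.

```python
import math

def equivalence_sum(w, h):
    """The sum in (73) for p_{ij} = w_i w_j / sigma_1(w) and q_{ij} = h(p_{ij})."""
    sigma1 = sum(w)
    total = 0.0
    for i in range(len(w)):
        for j in range(i + 1, len(w)):
            p = w[i] * w[j] / sigma1
            if p == 0.0:
                total += h(p)                 # convention of (73) when p_{ij} = 0
                continue
            u = h(p) / p - 1.0
            total += p * abs(u) * min(1.0, abs(u))
    return total

# constant weights (an Erdős–Rényi-like case): the sum is of order 1/n, hence small
print(equivalence_sum([1.0] * 500, lambda x: 1.0 - math.exp(-x)))
```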

2.5 An overview of the proof

The rest of the paper is taken up by the proof of the results announced in this section. We briefly describe how it is organised.

Section 3 collects most of the tools we need for the discrete model. In particular, we recall some results on the (discrete) red and blue processes established in the companion paper [17]. We also provide some estimates on the fluctuations of these processes (particularly Lemma 3.6), which will be a key ingredient in the proof of the limit theorems later on.

Section 4 is, on the other hand, our tool box for the continuous model. We explain the construction of the continuum graph based on the coding processes Y and \({\mathcal {H}}\) as introduced in [17] in more detail. Let us note that the graph-encoding process Y is a time change of X (see (12)); a similar relationship (14) also holds between \({\mathcal {H}}\) and H. However, the dependence between X and the time-change \(\theta ^{\mathtt {b}}\) constitutes a point of subtlety when dealing with the limit theorems and our approach will strongly rely on the properties of red and blue processes stated in that section.

The proof of the main limit theorems is given in Sects. 5–7. In Sect. 5, we show that the convergence of the graphs is a consequence of the convergence of the coding processes (Proposition 5.1). This proves our main results Theorems 2.4, 2.5 and 2.8 subject to Proposition 5.1. In Sect. 6, we explain how to derive Proposition 5.1 from the convergence of the Markovian queueing system (Propositions 2.1–2.2). The actual proof of the latter is given in Sect. 7.

Section 8 concerns the specific example of power-law weight sequence, which has attracted some attention: we show that the assumptions of our main results are verified in this case.

3 Preliminary results on the discrete model

In this section, we gather the results we need for the discrete model. In Sect. 3.1, we recall some useful processes encoding Galton–Watson trees: Lukasiewicz path, height process and contour process. In Sect. 3.2, we use the connection with the Markovian queue to prove estimates on these coding processes, which will be used in Sect. 7. In Sect. 3.3, we explain in more detail the embedding of the multiplicative graph into the Galton–Watson trees, obtained in our construction via the blue and red processes. Estimates on these processes are then proved in Sect. 3.4.

3.1 Height and contour processes of Galton–Watson trees

Let us briefly recall some basic notation about the coding of trees. We first denote by \({\mathbb {U}}= \bigcup _{n\in {\mathbb {N}}} ({\mathbb {N}}^*)^n\) the set of finite words written with positive integers; here, \(({\mathbb {N}}^*)^0\) is taken as \(\{ \varnothing \}\). The set \({\mathbb {U}}\) is totally ordered by the lexicographical order \(\le _{\mathtt {lex}}\) (the strict order is denoted by \(<_\mathtt {lex}\)).

Let \(u = [i_1, \ldots , i_n] \in {\mathbb {U}}\) be distinct from \(\varnothing \). We set \(|u| = n\), which is the length, or the height, of u, with the convention that \(|\varnothing | = 0\). We next set \(\overleftarrow{u} = [i_1, \ldots , i_{n-1}]\), which is interpreted as the parent of u (and if \(n\! = 1\), then \(\overleftarrow{u}\) is taken as \(\varnothing \)). More generally, for all \(p \in \{ 1, \ldots , n\}\), we set \(u_{| p} = [i_1, \ldots , i_p]\), with the convention: \(u_{| 0} = \varnothing \). Note that \(\overleftarrow{u} = u_{|n-1}\). We will also use the following notation: \([\![\varnothing , u]\!]= \{ \varnothing , u_{|1}, \ldots , u_{| n-1} , u\}\), \(]\!]\varnothing , u]\!]= [\![\varnothing , u]\!]\backslash \{ \varnothing \}\), \([\![\varnothing , u[\![\, = \! [\![\varnothing , u]\!]\backslash \{ u\}\) and \(]\!]\varnothing , u[\![\, = \! [\![\varnothing , u]\!]\backslash \{\varnothing , u \}\). For all \(v = [j_1, \ldots , j_m] \in {\mathbb {U}}\), we also set \(u*v = [i_1, \ldots , i_n, j_1, \ldots , j_m]\), which is the concatenation of u with v, with the convention that \(\varnothing *u = u *\varnothing = u\).
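In code, the words of \({\mathbb {U}}\) are conveniently represented as tuples of positive integers; the following tiny Python sketch (an illustration only) implements the height, the parent map, the ancestral line \([\![\varnothing , u]\!]\) and the concatenation just introduced.

```python
EMPTY = ()                                  # the empty word

def height(u):
    """|u|, the length (or height) of the word u."""
    return len(u)

def parent(u):
    """The parent of u, i.e. u with its last letter removed (u must differ from the empty word)."""
    return u[:-1]

def ancestral_line(u):
    """The ancestral line {∅, u_{|1}, ..., u_{|n-1}, u}."""
    return [u[:p] for p in range(len(u) + 1)]

def concat(u, v):
    """The concatenation u * v."""
    return u + v

u = (1, 3, 2)
print(height(u), parent(u), concat(u, (5,)))   # 3  (1, 3)  (1, 3, 2, 5)
print(ancestral_line(u))                       # [(), (1,), (1, 3), (1, 3, 2)]
```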

A rooted ordered tree can be viewed as a subset \(t\! \subset \! {\mathbb {U}}\) such that the following holds true:

(a) \(\varnothing \in t\).

(b) If \(u \in t\backslash \{ \varnothing \}\), then \(\overleftarrow{u} \in t\).

(c) For all \(u \in t\), there exists \(k_u (t) \in {\mathbb {N}}\cup \{ \infty \}\) such that \(u *[i] \in t\) if and only if \(1 \le i \le k_u (t)\).

Here, \(k_u (t)\) is interpreted as the number of children of u and if \(1 \le i \le k_u (t)\), then \(u*[i]\) is the i-th child of u; \(k_u (t)+1\) is the degree of the vertex u in the graph t when u is distinct from the root. Implicitly, if \(k_u (t) = 0\), then there is no child stemming from u and assertion (c) is trivially satisfied. Note that the subtree stemming from u that is \(\theta _u t\! = \! \{ v \in {\mathbb {U}}: u*v \in t \}\) is also a rooted ordered tree.

Let \({\mathbb {T}}\) be the set of rooted ordered trees that is equipped with the \(\sigma \)-field \({\mathscr {F}}({\mathbb {T}})\) generated by the sets \(\{ t \in {\mathbb {T}}\! : \! u \in t\}\), \(u \in {\mathbb {U}}\). Then, a Galton–Watson tree with offspring distribution \(\mu \) (a GW(\(\mu \))-tree, for short) is a \(({\mathscr {F}}, {\mathscr {F}}({\mathbb {T}}))\)-measurable r.v. \(\tau \! :\! \Omega \! \rightarrow \! {\mathbb {T}}\) that satisfies the following:

\((a^\prime )\) \(k_\varnothing (\tau )\) has law \(\mu \).

\((b^\prime )\) For all \(k\! \ge \! 1\) such that \(\mu (k)\! >\! 0\), the subtrees \(\theta _{[1]} \tau , \ldots , \theta _{[k]} \tau \) under \({\mathbf {P}}(\, \cdot \, | k_\varnothing (\tau ) = k)\) are independent with the same law as \(\tau \) under \({\mathbf {P}}\).

Assume that \(\mu (1)\! <\! 1\). Recall that \(\tau \) is a.s. finite if and only if \(\mu \) is critical or subcritical: namely, if and only if \(\sum _{k\ge 1} k\mu (k) \le 1\).

A Galton–Watson forest with offspring distribution \(\mu \) (a GW(\(\mu \))-forest, for short) is a random tree \({\mathbf {T}}\) such that \(k_\varnothing ({\mathbf {T}}) = \infty \) and such that the subtrees \((\theta _{[k]} {\mathbf {T}})_{k\ge 1}\) stemming from \(\varnothing \) are i.i.d. GW(\(\mu \))-trees. We next recall how to encode a GW(\(\mu \))-forest \({\mathbf {T}}\) thanks to three processes: its Lukasiewicz path, its height process and its contour process. We denote by \((u_l)_{l\in {\mathbb {N}}}\) the sequence of vertices of \({\mathbf {T}}\) such that \(u_0 = \varnothing \) and such that, for all l, \(u_{l+1}\) is the smallest vertex of \({\mathbf {T}}\) with respect to the lexicographical order that is larger than \(u_l\). If \(\mu \) is critical or subcritical, then \((u_l)_{l\in {\mathbb {N}}}\) exhausts all the vertices of \({\mathbf {T}}\); however, if \(\mu \) is supercritical (namely if \(\sum _{k\ge 1} k\mu (k)\! >\! 1\)), then \((u_l)_{l\in {\mathbb {N}}}\) exhausts the vertices of \({\mathbf {T}}\) that are situated before (or on) the first infinite line of descent. We first set

$$\begin{aligned} V^{{\mathbf {T}}}_0 = 0, \quad \text {and for } l\! \ge \! 0, \quad V_{l+1}^{\mathbf {T}}= V_{l}^{\mathbf {T}}+ k_{u_{l+1}} ({\mathbf {T}})\! -\! 1 \quad \text {and} \quad \mathtt {Hght}^{\mathbf {T}}_l\! =\! |u_{l+1}|\! -\! 1. \end{aligned}$$
(75)

The process \((V_l^{\mathbf {T}})_{l\in {\mathbb {N}}}\) is the Lukasiewicz path associated with \({\mathbf {T}}\) and \((\mathtt {Hght}^{\mathbf {T}}_ l)_{l\in {\mathbb {N}}}\) is the height process associated with \({\mathbf {T}}\). We recall the following results from Le Gall & Le Jan [31]:

(i) \(V^{\mathbf {T}}\) is distributed as a random walk starting from 0 and with jump-law \(\nu (k) = \mu (k+1)\), \(k \in {\mathbb {N}}\cup \{ -1\}\).

(ii) We set \({\underline{V}}^{\mathbf {T}}_{\, 0} = 0\) and for all \(l\! \ge \! 1\), \({\underline{V}}^{\mathbf {T}}_{\, l} = \inf _{0\le k \le l-1} V^{\mathbf {T}}_k -1\). Note that \(u_l \in \theta _{[p]} {\mathbf {T}}\) if and only if \((u_l)_{| 1}\! =\! p\). Then, we get

$$\begin{aligned} - {\underline{V}}^{\mathbf {T}}_{\, l} = (u_l)_{| 1} \quad \text {and} \quad V^{\mathbf {T}}_l \! -\! {\underline{V}}^{\mathbf {T}}_{\, l} = \# \big \{ v \in {\mathbf {T}}: u_{l}<_\mathtt {lex} v \; \text {and} \; \overleftarrow{v}\! \in \, ]\!]\varnothing , u_l ]\!]\big \}\,. \end{aligned}$$
(76)
(iii) The height process \(\mathtt {Hght}^{\mathbf {T}}\) is derived from \(V^{\mathbf {T}}\) by setting \(\mathtt {Hght}^{\mathbf {T}}_0 = 0\) and, for \(l \! \ge \! 1\),

$$\begin{aligned} \mathtt {Hght}^{\mathbf {T}}_l= \# \big \{ m \in \{ 0, \ldots , l\! -\! 1\}: V^{\mathbf {T}}_m = \inf _{m\le j\le l} V^{\mathbf {T}}_j \big \}. \end{aligned}$$
(77)

The contour process of \({\mathbf {T}}\) is informally defined as follows: suppose that \({\mathbf {T}}\) is embedded in the oriented half plane in such a way that edges have length one and that orientation reflects lexicographical order of visit; we think of a particle starting at time 0 from \(\varnothing \) and exploring the tree from the left to the right, backtracking as little as possible and moving continuously along the edges at unit speed. In (sub)critical cases, the particle crosses each edge twice (upwards first and then downwards). In supercritical cases, the particle only explores the edges that are situated before (or on) the first infinite line of descent in the lexicographical order: the edges on the infinite line of descent are visited once (upwards only) and the edges strictly before the infinite line of descent are visited twice (upwards first and then downwards). For all \(s \in [0, \infty )\), we define \(C^{\mathbf {T}}_s \) as the distance at time s of the particle from the root \(\varnothing \). The associated distance \(d_{C^{\mathbf {T}}}\) as defined in (48) is the graph distance of \({\mathbf {T}}\) in the (sub)critical cases. We refer to Le Gall & D. [19] (Section 2.4, Chapter 2, pp. 61–62) for a formal definition and a formula relating the contour process to the height process.
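For concreteness, here is a short Python sketch (an illustration only) that computes the Lukasiewicz path of (75) from the depth-first sequence of offspring numbers of a finite forest, recovers the height process through formula (77), and cross-checks it against a direct stack-based computation of the depths.

```python
def lukasiewicz(child_counts):
    """child_counts: offspring numbers of the vertices of a finite forest, listed
    in depth-first (lexicographical) order.  Returns V as in (75)."""
    V = [0]
    for c in child_counts:
        V.append(V[-1] + c - 1)
    return V

def heights_from_lukasiewicz(V):
    """Height process recovered from V through formula (77)."""
    heights = []
    for l in range(len(V) - 1):
        h, suffix_min = 0, V[l]
        for m in range(l - 1, -1, -1):          # m runs over {0, ..., l-1}
            suffix_min = min(suffix_min, V[m])  # = inf_{m <= j <= l} V_j
            if V[m] == suffix_min:
                h += 1
        heights.append(h)
    return heights

def heights_direct(child_counts):
    """Depths computed directly with a stack of 'children still to visit' counters."""
    depths, stack = [], []
    for c in child_counts:
        while stack and stack[-1] == 0:         # leave exhausted ancestors
            stack.pop()
        depths.append(len(stack))
        if stack:
            stack[-1] -= 1
        stack.append(c)
    return depths

# two trees: a root with two leaves, followed by a root with a single leaf
cc = [2, 0, 0, 1, 0]
assert heights_from_lukasiewicz(lukasiewicz(cc)) == heights_direct(cc) == [0, 1, 1, 0, 1]
```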

3.2 Coding processes related to the Markovian queueing system

We fix \(\mathtt {w}= (w_1, \ldots , w_n, 0, 0 , \ldots ) \in {\ell }^{_{\, \downarrow }}_f\) and we briefly recall the definition of the Markovian queue as in the introduction: a single server is visited by infinitely many clients; clients arrive according to a Poisson process with unit rate; each client has a type that is a positive integer ranging in \(\{ 1, \ldots , n\}\); the amount of time of service required by a client of type j is \(w_j\); types are i.i.d. with law

$$\begin{aligned} \nu _\mathtt {w}= \frac{1}{\sigma _1 (\mathtt {w})}\sum _{j=1}^n w_j \delta _{j} \; . \end{aligned}$$
(78)

Let \(\tau _l\) stand for the time of arrival of the l-th client in the queue and let \({\mathtt {J}}_l\) stand for her/his type; then, the queueing system is entirely characterised by the point measure

$$\begin{aligned} {\mathscr {X}}_\mathtt {w}\! = \sum _{k\ge 1} \delta _{(\tau _k , {\mathtt {J}}_k)}, \end{aligned}$$
(79)

that is distributed as a Poisson point measure on \([0, \infty ) \! \times \! \{ 1, \ldots , n\}\) with intensity \(\ell \! \otimes \! \nu _\mathtt {w}\), where \(\ell \) stands for the Lebesgue measure on \([0, \infty )\). We next introduce

$$\begin{aligned} X^\mathtt {w}_t = -t + \sum _{l\ge 1} w_{{\mathtt {J}}_l}\mathbf{1}_{[0, t]} (\tau _l) \; \quad \text {and} \quad I^\mathtt {w}_t = \inf _{s\in [0, t]} X^\mathtt {w}_s , \quad t\in [0, \infty ). \end{aligned}$$
(80)

Then, \(X^\mathtt {w}_t \! -\! I^\mathtt {w}_t\) is interpreted as the load of the Markovian queueing system at time t and \(X^{\mathtt {w}}_t\) is the algebraic load of the queue. Note that \(X^\mathtt {w}\) is a spectrally positive Lévy process whose law is determined by its Laplace exponent \(\psi _\mathtt {w}\! : \! [0, \infty ) \! \rightarrow \! {\mathbb {R}}\) defined as

$$\begin{aligned} {\mathbf {E}}\big [ e^{-\lambda X^\mathtt {w}_t}\big ]\!= & {} \! e^{t\psi _\mathtt {w}(\lambda )} \quad \text {where} \nonumber \\ \psi _\mathtt {w}(\lambda )= & {} \alpha _\mathtt {w}\lambda + \sum _{1\le j\le n}\! \! \frac{_{w_j} }{^{\sigma _1 (\mathtt {w})}} \big ( e^{-\lambda w_j}\! -\! 1 + \lambda w_j \big ) \quad \text {and} \quad \alpha _\mathtt {w}\! := \! 1\! -\! \frac{_{\sigma _2 (\mathtt {w})}}{^{\sigma _1 (\mathtt {w})}} \; . \end{aligned}$$
(81)

Here, recall that \(\sigma _r (\mathtt {w}) = w_1^r + \ldots + w_n^r\), \(r\ge 0\). We call the queueing system recurrent if a.s. \(\liminf _{t\rightarrow \infty }\) \(X^\mathtt {w}_t \!= \! -\infty \), which means that all the clients will eventually depart. Observe that the system is recurrent if and only if \(\sigma _2 (\mathtt {w}) / \sigma _1 (\mathtt {w}) \le 1\). If, on the other hand, \(\sigma _2 (\mathtt {w}) / \sigma _1 (\mathtt {w})\! >\! 1\), then \(\alpha _\mathtt {w}\! <\! 0\) and a.s. \(\lim _{t\rightarrow \infty } X^\mathtt {w}_t \!= \! \infty \) (the queue will see an accumulation of infinitely many clients). As is common practice for branching processes, we shall refer in the sequel to the following cases:

$$\begin{aligned} \text {supercritical:} \; \sigma _2 (\mathtt {w})\! >\! \sigma _1 (\mathtt {w}), \quad \text {critical:} \; \sigma _2 (\mathtt {w})\! =\! \sigma _1 (\mathtt {w}), \quad \text {subcritical:} \; \sigma _2 (\mathtt {w})\! < \! \sigma _1 (\mathtt {w}). \end{aligned}$$
(82)

Note that the criticality alluded to above is distinct from the critical regime of the random graph \({\varvec{\mathcal {G}}}_{\mathtt {w}_{n}}\).
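Before turning to the tree generated by the queue, here is a small Python sketch (an illustration only) that simulates the point measure (79) on a finite horizon and evaluates the algebraic load \(X^{\mathtt {w}}\) of (80) on a time grid.

```python
import random

def simulate_queue_load(w, T, dt=0.01, seed=0):
    """Samples the arrivals and types of (79) on [0, T] and returns the algebraic
    load X^w of (80) evaluated on the grid {0, dt, 2*dt, ...}."""
    rng = random.Random(seed)
    sigma1 = sum(w)
    arrivals = []                                  # pairs (tau_l, w_{J_l})
    t = rng.expovariate(1.0)                       # unit-rate Poisson arrival times
    while t <= T:
        u, j, acc = rng.random() * sigma1, 0, w[0]
        while acc < u:                             # type J_l drawn from nu_w, cf. (78)
            j += 1
            acc += w[j]
        arrivals.append((t, w[j]))
        t += rng.expovariate(1.0)
    grid = [k * dt for k in range(int(T / dt) + 1)]
    X = [-s + sum(wj for (tau, wj) in arrivals if tau <= s) for s in grid]
    return grid, X

grid, X = simulate_queue_load(w=[3.0, 2.0, 1.0, 1.0], T=10.0)
load = [X[k] - min(X[:k + 1]) for k in range(len(X))]    # X_t - I_t, the load at time t
```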

The LIFO queueing system governed by \({\mathscr {X}}_\mathtt {w}\) generates a tree that can be informally defined as follows: the clients are the vertices and the server is the root (or the ancestor); the j-th client to enter the queue is a child of the i-th one if the j-th client enters when the i-th client is served; among siblings, clients are ordered according to their time of arrival. In critical or subcritical cases, it fully defines a Galton–Watson forest; however in supercritical cases, it only defines the part of a Galton–Watson forest situated before the first infinite line of descent. To circumvent this problem, we actually define the tree first and then we couple it with the queueing system.

In what follows, what we mean by a Poisson random subset \(\Pi \) on \([0, \infty )\) with unit rate is the set of atoms of a unit-rate Poisson random measure: namely, it is the random subset \(\{ \mathtt {e}_1+ \cdots + \mathtt {e}_n ; n\! \ge \! 1\}\), where the \(\mathtt {e}_n\) are i.i.d. exponentially distributed r.v. with unit mean. For all \(u \in {\mathbb {U}}\backslash \{ \varnothing \}\), let J(u) and \(\Pi _u\) be independent r.v. whose laws are given as follows: J(u) has law \(\nu _\mathtt {w}\) as defined in (78) and \(\Pi _u\) is a Poisson random subset of \([0, \infty )\) with unit rate. We next define \(\Pi _\varnothing \) as a Poisson random subset of \([0, \infty )\) with unit-rate that is assumed to be independent of \((J(u), \Pi _u)_{u\in {\mathbb {U}}\backslash \{ \varnothing \}}\) and for convenience, we set \(J(\varnothing ) = 0\). For all \(u \in {\mathbb {U}}\), we index the points of \(\Pi _u\) using the children of u. Formally, we define a map \(\sigma : \{ u \! *\! [p]\, ;\; p\! \ge \! 1 \} \rightarrow \Pi _{u}\) as follows:

$$\begin{aligned} \Pi _u = \big \{ \sigma (u \! *\! [p]) \, ; \; p\! \ge \! 1\big \}, \,\; \text {where}\; \, \sigma (u \! *\! [p]) < \sigma (u \! *\! [p+1]), \; \, p\! \ge \! 1. \end{aligned}$$
(83)

Note that it defines a collection \((\sigma (u))_{u\in {\mathbb {U}}\backslash \{ \varnothing \}}\) of r.v. It is easy to check that there is a unique random tree \({\mathbf {T}}_\mathtt {w}\! : \! \Omega \! \rightarrow \! {\mathbb {T}}\) such that, for \(u \in {\mathbf {T}}_\mathtt {w}\backslash \{ \varnothing \}\),

$$\begin{aligned} k_u ({\mathbf {T}}_\mathtt {w}) = \# \big ( \Pi _u \cap [ 0, w_{J(u)} ] \big ) \quad \text {and} \quad k_\varnothing ({\mathbf {T}}_\mathtt {w}) = \infty . \end{aligned}$$
(84)

Clearly \({\mathbf {T}}_\mathtt {w}\) is distributed as a GW(\(\mu _\mathtt {w}\))-forest where \(\mu _\mathtt {w}\) is given by

$$\begin{aligned} \forall k \in {\mathbb {N}}, \quad \mu _\mathtt {w}(k) \! =\! \sum _{j\ge 1} \frac{{w_j^{k+1} e^{-w_j} }}{{ \sigma _1 (\mathtt {w}) \, k!}} \; . \end{aligned}$$
(85)

Namely, \(k_\varnothing ({\mathbf {T}}_\mathtt {w}) = \infty \) and the subtrees \((\theta _{[k]} {\mathbf {T}}_\mathtt {w})_{k\ge 1}\) stemming from \(\varnothing \) are i.i.d. GW(\(\mu _\mathtt {w}\))-trees. Note that \(\sum _{k\ge 0} k\mu _\mathtt {w}(k) = \sigma _2 (\mathtt {w}) / \sigma _1 (\mathtt {w})\).
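The mixed-Poisson form of (85) lends itself to simulation: draw a label \(J\) with law \(\nu _\mathtt {w}\) and, given \(J=j\), a Poisson number of children with mean \(w_j\). The following illustrative Python sketch (helper names are ours) compares the empirical mean of such samples with \(\sigma _2(\mathtt {w})/\sigma _1(\mathtt {w})\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mu_w(w, size):
    """Sample from mu_w in (85): pick J with law nu_w, then Poisson(w_J) children."""
    w = np.asarray(w, dtype=float)
    J = rng.choice(len(w), size=size, p=w / w.sum())
    return rng.poisson(w[J])

w = np.array([3.0, 2.0, 1.5, 1.0, 0.5])
ks = sample_mu_w(w, 100_000)
print(ks.mean(), np.sum(w ** 2) / w.sum())   # empirical mean vs sigma_2(w)/sigma_1(w)
```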

Then we define the point process \({\mathscr {X}}_\mathtt {w}\) governing the Markovian queueing system as follows: denote by \((u_l)_{l\in {\mathbb {N}}}\) the sequence of vertices of \({\mathbf {T}}_\mathtt {w}\) such that \(u_0 = \varnothing \) and such that for all l, \(u_{l+1}\) is the smallest vertex of \({\mathbf {T}}_\mathtt {w}\) (with respect to the lexicographical order) that is larger than \(u_l\). Then we set

$$\begin{aligned} J_l = J(u_l) \, , \quad \tau _l = \! \! \!\! \!\!\! \! \sum _{\quad \begin{array}{c} v\in {\mathbf {T}}_\mathtt {w}: v <_{\mathtt {lex}} u_l \\ \text {and} \; v\notin [\![\varnothing , u_l ]\!] \end{array} } \!\! \!\!\! \!\!\! \!\!\! \! w_{J(v)} +\sum _{v\in ]\!]\varnothing , u_l ]\!]} \sigma (v) \quad \text {and} \quad {\mathscr {X}}_\mathtt {w}= \sum _{l\ge 1} \delta _{(\tau _l, J_l)}. \end{aligned}$$
(86)

We also set, for each \(t \in [0, \infty )\),

$$\begin{aligned} N^\mathtt {w}(t)= \sum _{l\ge 1} \mathbf{1}_{[0, t]} ( \tau _l) \; . \end{aligned}$$
(87)

In (75), recall that \((V^{{\mathbf {T}}_\mathtt {w}}_l)_{l\ge 0}\) stands for the Łukasiewicz path associated with \({\mathbf {T}}_\mathtt {w}\); we also recall the notation \({\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, l} \) for the quantity \(\inf _{0\le k\le l-1} V^{{\mathbf {T}}_\mathtt {w}}_k -1\).

Lemma 3.1

We keep the notation from above. Then \({\mathscr {X}}_\mathtt {w}\) as defined by (86) is a Poisson point measure on \([0, \infty ) \! \times \! \{ 1, \ldots , n\}\) with intensity \(\ell \! \otimes \! \nu _\mathtt {w}\) and therefore \(N^\mathtt {w}\) in (87) is a Poisson process on \([0, \infty )\) with unit rate. Let \(X^\mathtt {w}\) and \(I^\mathtt {w}\) be derived from \({\mathscr {X}}_\mathtt {w}\) by (80). For all \(t \in [0, \infty )\), the following statements hold true:

(i):

Conditional on \(X^\mathtt {w}_t \! -\! I^\mathtt {w}_t\), \(V^{{\mathbf {T}}_\mathtt {w}}_{N^\mathtt {w}(t)} \!- \! {\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{N^\mathtt {w}(t)}\) is distributed as a Poisson r.v. with mean \(X^\mathtt {w}_t \! -\! I^\mathtt {w}_t\).

(ii):

\({\mathbf {P}}\)-almost surely: \(-{\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{N^\mathtt {w}(t)} \! =\! \# (\Pi _\varnothing \cap [0, -I^\mathtt {w}_t] ) \).

Then, for all \(a , x \in (0, \infty )\), we get

$$\begin{aligned} {\mathbf {P}}\big ( \big | V^{{\mathbf {T}}_{\! \mathtt {w}}}_{N^\mathtt {w}(t)} \! -\! X^\mathtt {w}_t \big |> 2a\big ) \le 1 \! \wedge \! (4x/a^2) + {\mathbf {P}}\big ( \! -\! I^\mathtt {w}_t \! >\! x) + {\mathbf {E}}\big [ 1 \wedge \! \big ( (X^\mathtt {w}_t \! -\! I^\mathtt {w}_t)/a^2 \big )\big ] . \end{aligned}$$
(88)

Proof

We first explain how \((\tau _{l+1}, X^\mathtt {w}_{\tau _{l+1}})\) is derived from \((\tau _l, X^\mathtt {w}_{\tau _l})\) in terms of the r.v. \((J(u), \Pi _u)\), \(u\in {\mathbb {U}}\). To that end, we need notation: fix \(u \in {\mathbb {U}}\backslash \{ \varnothing \}\); then for all \(0 \le p \! < \! |u|\), we set

$$\begin{aligned} R^u_p = \big ( w_{J(u_{|p})} \! -\! \sigma (u_{| p+1}) \big )_+\quad \text {and} \quad Q^u_p = \big \{ \sigma (v)\! -\! \sigma (u_{| p+1})\, ; \; v \in {\mathbb {U}}: u_{|p+1} \!\! <_{\mathtt {lex}} \! v \; \text {and} \; \overleftarrow{v} = u_{| p} \big \} .\end{aligned}$$

Note that \(J(u_{| 0}) = J (\varnothing ) = 0\) and that \(w_0 = \infty \) (by convention); thus, \(R_0^u = \infty \). We also set \(R^u_{|u|} = w_{J(u)} \), \(Q^u_{|u|} = \Pi _u\), \(R(u) = (R^u_1, \ldots , R^u_{|u|})\) and \(Q(u) = (Q^u_0, \ldots , Q^u_{|u|})\). By convention, we finally set \(R^\varnothing _0 = \infty \), \(R(\varnothing ) = \varnothing \) and \(Q(\varnothing ) = (\Pi _\varnothing )\).

We next denote by \({\mathscr {G}}(u)\) the \(\sigma \)-field generated by the r.v. \((\sigma (v), J(v), \Pi _v \cap [0, \sigma (v)))_{ v \in \, ]\!]\varnothing , u ]\!]}\) and \((J (v), \Pi _v)_{ v <_\mathtt {lex} u \; \text {and} \; v \notin [\![\varnothing , u ]\!]}\). Elementary properties of Poisson point processes imply that conditionally given \({\mathscr {G}}(u)\), the \(Q^u_p\), \(0 \le p\! < \! |u|\), are independent Poisson random subsets of \([0, \infty )\) with unit rate: they are therefore independent of \({\mathscr {G}}(u)\); by construction they are also independent of the r.v. \((J (v), \Pi _v)\), \(u \! <_\mathtt {lex} v\).

For all \(u \in {\mathbb {U}}\backslash \{ \varnothing \}\), we next define \(s(u) \in {\mathbb {U}}\) and \(e(u) \in [0, \infty )\) that satisfy \(s(u_{l}) = u_{l+1}\) and \(\tau _{l+1} = e(u_l)+ \tau _l\). To that end, we first set \({\mathbf {q}} = \sup \big \{ p \in \{ 0, \ldots , |u| \} : \# (Q^u_p \cap [0, R^u_p])\! \ge \! 1 \big \}\) that is well-defined since \(R^u_0 = \infty \).

  • If \({\mathbf {q}} = |u|\), then we set \(s(u) = u\! * \! [1]\) and \(e(u) = \sigma (u \! *\! [1])\).

  • If \({\mathbf {q}}\! <\! |u|\), then \(|u|\! \ge \! 1\) and we set \(s(u) = [i_1, \ldots , i_{{\mathbf {q}}} , i_{{\mathbf {q}}+1} \! +\! 1]\) (namely, \(s(u) = [i_1\! +\! 1]\) if \({\mathbf {q}} = 0\)), where \(u = [i_1, \ldots , i_{|u|} ]\). We also set

    $$\begin{aligned} e(u) = \sigma (s(u))\! -\! \sigma (u_{|{\mathbf {q}} +1}) + \sum _{{\mathbf {q}}< p \le |u|} R^u_p \; . \end{aligned}$$
    (89)

Elementary properties of Poisson point processes imply that e(u), J(s(u)) and \({\mathscr {G}}(u)\) are independent, that e(u) is exponentially distributed with unit mean and that J(s(u)) has law \(\nu _\mathtt {w}\). Then, we easily derive from (84) that for all \(l \in {\mathbb {N}}\), \(u_{l+1} = s(u_l)\), as already mentioned. It is also easy to deduce from (86) that \(\tau _{l+1} = e(u_l)+ \tau _l\). Thus, \(\tau _{l+1}\! -\! \tau _l\) and \(J(u_{l+1})\) are independent, and they are also independent of \({\mathscr {G}}(u_l)\) and therefore of the r.v.  \(((\tau _k, J(u_k)))_{1\le k\le l}\). It implies that \({\mathscr {X}}_\mathtt {w}\) is a Poisson point measure on \([0, \infty ) \! \times \! \{ 1, \ldots , n\}\) with intensity \(\ell \! \otimes \! \nu _\mathtt {w}\).

We next prove inductively that for all \(l\! \ge \! 1\),

$$\begin{aligned} Z_l\! := \!\!\! \sum _{1\le p\le |u_l|} \!\!\! R^{u_l}_p = X^\mathtt {w}_{\tau _l} -I^\mathtt {w}_{\tau _l} \quad \text {and} \quad \sigma ((u_l)_{| 1}) = -I^\mathtt {w}_{\tau _l}. \end{aligned}$$
(90)

Proof of (90): For \(l=1\), \(u_{1}\) is the first customer in the queue. In that case, \(Z_{1}=R^{u_{1}}_{|u_{1}|}=w_{J(u_{1})}\) is the size of the first jump of \(X^{\mathtt {w}}\). The first identity in (90) then follows. Note also that \(\sigma (u_1) = \tau _{1}\) by (86), which equals \(-I_{\tau _{1}}^{\mathtt {w}}\) since \(X^{\mathtt {w}}\) has slope \(-1\) between its jumps. This proves (90) for \(l=1\).

Assume it holds true for l. Set \(k = (u_l)_{| 1}\); namely \(u_l \in \theta _{[k]} {\mathbf {T}}_\mathtt {w}\). Since \(u_{l+1} = s(u_l)\), \(u_{l+1} \in \theta _{[k]} {\mathbf {T}}_\mathtt {w}\) if and only if \({\mathbf {q}} = \sup \big \{ p \in \{ 0, \ldots , |u_l|\} : \# (Q^{u_l}_p \cap [0, R^{u_l}_p])\! \ge \! 1 \big \} \! \ge \! 1\). We first suppose that \({\mathbf {q}}\! \ge \! 1\). By comparing (89) and (90), we see that \(e(u_l) \! <\! Z_l\). Since \(\tau _{l+1} \! - \tau _l = e(u_l)\) and since \(X^\mathtt {w}\) does not jump on \([\tau _l, \tau _{l+1})\) (by the definition (80)), we have

$$\begin{aligned} \inf _{s\in [\tau _l, \tau _{l+1}] } X^\mathtt {w}_s = X^{\mathtt {w}}_{\tau _{l+1}-} = X^\mathtt {w}_{\tau _l} \! -\! (\tau _{l+1} \! -\! \tau _l) = X^\mathtt {w}_{\tau _l} -e(u_l) = Z_l -e(u_l) + I^\mathtt {w}_{\tau _l} \end{aligned}$$
(91)

and thus \(-I^\mathtt {w}_{\tau _{l+1}} = -I^\mathtt {w}_{\tau _l}\). Since \(u_{l+1} \in \theta _{[k]} {\mathbf {T}}_\mathtt {w}\), \(k = (u_{l+1})_{| 1} = (u_{l})_{| 1}\) and thus \(\sigma ( (u_{l+1})_{| 1}) = \sigma ((u_{l})_{| 1})\). Then (90) entails \(-I^\mathtt {w}_{\tau _{l+1}} = \sigma ( (u_{l+1})_{| 1})\). We also check easily that \(Z_{l+1} \! -\! Z_l = w_{J(u_{l+1}) } \! -\! e(u_l) = X^{\mathtt {w}}_{\tau _{l+1}} \! -\! X^\mathtt {w}_{\tau _l}\), which easily entails (90) for \(l+1\).

Suppose now that \({\mathbf {q}} = 0\), which is equivalent to \(u_{l+1} = [k+1]\). Then \(u_{l+1}\) is the only client in the queue upon arrival. Thus, \(R(u_{l+1})\! =\! ( w_{J(u_{l+1})})\) and \(Z_{l+1} = w_{J(u_{l+1})}\). Since \({\mathbf {q}} = 0\), \(e(u_l) = Z_l+ \sigma ([k+1])\! -\! \sigma ([k])\) by (89). As in (91), we get \( \inf _{s\in [\tau _l, \tau _{l+1}] } X^\mathtt {w}_s = Z_l -e(u_l) + I^\mathtt {w}_{\tau _l} = I^\mathtt {w}_{\tau _l} \! -\! \sigma ([k+1])+ \sigma ([k]) = - \sigma ([k+1])\), the last equality being a consequence of (90) for l. It implies that \(-I^\mathtt {w}_{\tau _{l+1}} = \sigma ([k+1])\! >\! \sigma ([k]) = -I^\mathtt {w}_{\tau _{l}}\). Therefore, \(X^\mathtt {w}_{\tau _{l+1}} \! -\! I^\mathtt {w}_{\tau _{l+1}} = \Delta X^\mathtt {w}_{\tau _{l+1}} = w_{J(u_{l+1})} = Z_{l+1}\). This proves that (90) holds for \(l+1\) and completes the induction. \(\square \)

Next, it is easy to check that

$$\begin{aligned} \# \big \{ v \in {\mathbf {T}}_\mathtt {w}: u_{l}<_\mathtt {lex} v \; \text {and} \; \overleftarrow{v}\! \in \, ]\!]\varnothing , u_l ]\!]\big \} = \sum _{1 \le p \le |u_l |} \# \big ( Q^{u_l}_p\cap [0, R^{u_l}_p ] \big ) , \end{aligned}$$

and by (76) we get \(\sum _{1 \le p \le |u_l|} \# \big ( Q^{u_l}_p\cap [0, R^{u_l}_p ] \big ) = V^{{\mathbf {T}}_\mathtt {w}}_l \! -\! {\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, l}\). By (90) and elementary properties of Poisson point processes, it shows that given \(X^\mathtt {w}_{\tau _l} -I^\mathtt {w}_{\tau _l}\), \(V^{{\mathbf {T}}_\mathtt {w}}_l \! -\! {\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, l}\) is distributed as a Poisson r.v. with mean \(X^\mathtt {w}_{\tau _l} -I^\mathtt {w}_{\tau _l}\). This proves (i).

Next, we have seen in (76) and (90) that \(-{\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, l} = (u_l)_{| 1}\) and \(\sigma ((u_l)_{| 1}) = -I^\mathtt {w}_{\tau _l}\). Namely, \(-{\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, l} = \# \big (\Pi _\varnothing \cap [0, -I^\mathtt {w}_{\tau _l} ] \big )\) and elementary arguments entail \(-{\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, l} = \# \big (\Pi _\varnothing \cap [0, -I^\mathtt {w}_{t} ] \big )\) for all \(t\, \in [\tau _l, \tau _{l+1})\). This easily proves (ii) for all \(t \in [\tau _1, \infty )\). For all \(t \in [0, \tau _1)\), observe that \(N^\mathtt {w}_{t} = 0\) and \(-I^\mathtt {w}_t\! = t\! < \! \tau _1\). Since \({\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{\, 0} = 0\), it entails (ii) for all \(t \in [0, \tau _1)\), which completes the proof of (ii).

We next prove (88). We fix \(t \in [0, \infty )\) and to simplify we set

$$\begin{aligned}D = V^{{\mathbf {T}}_\mathtt {w}}_{N^\mathtt {w}(t)} \!- \! {\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{N^\mathtt {w}(t)}, \; Z = X^\mathtt {w}_t \! -\! I^{\mathtt {w}}_t\, , \; D^\prime = -{\underline{V}}^{{\mathbf {T}}_\mathtt {w}}_{N^\mathtt {w}(t)} \quad \text {and} \quad Z^\prime = -I^\mathtt {w}_t \; .\end{aligned}$$

By (i), \({\mathbf {E}}\big [ (D\! -\! Z)^2| Z \big ] = Z\); thus \({\mathbf {P}}(|D\! -\! Z| \! >\! a) \le {\mathbf {E}}[1 \wedge (Z/a^2)]\). By (ii), \(D^\prime = \# (\Pi _\varnothing \cap [0, Z^\prime ])\); then, for all \(x \in (0, \infty )\), we get \( {\mathbf {P}}(|D^\prime \! -\! Z^\prime | \!>\! a) \le {\mathbf {P}}(\sup _{z\in [0, x]}|\# (\Pi _\varnothing \cap [0, z])\! -\! z| \!> \! a )+ {\mathbf {P}}(Z^\prime \!> \! x) \! \le \! 1\! \wedge \! (4x/a^2) + {\mathbf {P}}(Z^\prime \! > \! x)\) by Doob's \(L^2\)-inequality applied to the martingale \(z\! \mapsto \! \# (\Pi _\varnothing \cap [0, z])\! -\! z\). It implies (88), which completes the proof of Lemma 3.1. \(\square \)

Fig. 2
figure 2

An example of \(X^{\mathtt {w}}\) and the associated height process \(H^{\mathtt {w}}\) drawn side by side. Observe that at each \(\tau _{i}\), both \(X^{\mathtt {w}}\) and \(H^{\mathtt {w}}\) jump upwards: these are the arrival times of the customers. Note that \(H^{\mathtt {w}}\) also jumps down by one unit each time a customer leaves the queue

The contour of \({\mathbf {T}}_{\! \mathtt {w}}\): estimates In (6), recall that \(H^\mathtt {w}_t\) stands for the number of clients waiting in the line right after time t. More precisely, for all \(s, t \in [0, \infty )\) such that \(s \le t\), we have

$$\begin{aligned} H^\mathtt {w}_t \! = \# {\mathcal {K}}_t, \; \text {where} \; {\mathcal {K}}_t \! =\! \big \{ s \in [0, t]\! : \! I^{\mathtt {w}, s-}_{t} \! <\! I^{\mathtt {w}, s}_{t} \big \} \; \text {and where} \; I^{\mathtt {w}, s}_t= \inf _{r\in [s, t]}X^\mathtt {w}_r \text { for }s \in [0, t]. \end{aligned}$$
(92)

See Fig. 2 for an example. The process \(H^\mathtt {w}\) is called the height process associated with \(X^\mathtt {w}\) by analogy with (77), but \(H^\mathtt {w}\) is actually closer to the contour process of \({\mathbf {T}}_{\! \mathtt {w}}\).

To see this, recall that \((u_l)_{l\in {\mathbb {N}}}\) stands for the sequence of vertices of \({\mathbf {T}}_{\! \mathtt {w}}\) listed in the lexicographical order; we identify \(u_l\) with the l-th client to enter the queueing system. For all \(t \in [0, \infty )\), we denote by \({\mathbf {u}} (t)\) the client currently served right after time t: namely \({\mathbf {u}} (t) = u_l\) where \(l = \sup \{ k \in {\mathbb {N}}\! : \tau _k \le t \; \text {and} \; X^\mathtt {w}_{\tau _k-} \! < \! \inf _{s\in [\tau _k, t]}X^\mathtt {w}_s \}\). Then, the length of the word \({\mathbf {u}} (t)\) is the number of clients waiting in the line right after time t: \(| {\mathbf {u}} (t)| = H^{\mathtt {w}}_t \).
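As an illustration of the identity \(|{\mathbf {u}} (t)| = H^{\mathtt {w}}_t\), one may simulate the Markovian LIFO queue directly: clients arrive at unit rate, an arriving client has a type drawn from \(\nu _\mathtt {w}\) and requests the corresponding service time, and the server always works at unit speed on the most recently arrived client. The sketch below (hypothetical Python, written only to illustrate the construction) keeps the stack of pending services and records the queue length right after each arrival.

```python
import numpy as np

rng = np.random.default_rng(1)

def lifo_queue_lengths(w, horizon):
    """Simulate the Markovian preemptive LIFO queue of Sect. 3.2 up to time `horizon`;
    return the arrival times and the queue length right after each arrival."""
    w = np.asarray(w, dtype=float)
    probs = w / w.sum()                       # nu_w
    stack, t, times, heights = [], 0.0, [], []
    while True:
        gap = rng.exponential(1.0)            # inter-arrival time (unit-rate arrivals)
        remaining = gap
        while stack and remaining > 0.0:      # the server works on the top of the stack
            if stack[-1] <= remaining:
                remaining -= stack.pop()      # top client finishes and departs
            else:
                stack[-1] -= remaining
                remaining = 0.0
        t += gap
        if t > horizon:
            return np.array(times), np.array(heights)
        j = rng.choice(len(w), p=probs)       # type of the newcomer, law nu_w
        stack.append(w[j])                    # the newcomer preempts the current service
        times.append(t)
        heights.append(len(stack))            # = H^w right after this arrival

times, heights = lifo_queue_lengths([3.0, 2.0, 1.5, 1.0, 0.5], horizon=50.0)
print(heights[:10])
```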

We next denote by \((\xi _m)_{m\ge 1}\) the sequence of jump-times of \(H^{\mathtt {w}}\): namely, \(\xi _{m+1} = \inf \{ s\! >\! \xi _m \! : H^\mathtt {w}_s \! \ne \! H^{\mathtt {w}}_{\xi _m} \}\), for all \(m \in {\mathbb {N}}\), with the convention \(\xi _0 = 0\). We then set, for \(t \in [0, \infty )\),

$$\begin{aligned} M^\mathtt {w}_t =\sum _{m\ge 1} \mathbf{1}_{[0, t]} (\xi _m) \; . \end{aligned}$$
(93)

Note that \((\xi _m)_{m\ge 1}\) is also the sequence of jump-times of \({\mathbf {u}}\) and that for all \(m\! \ge \! 1\), \(({\mathbf {u}} (\xi _{m-1}), {\mathbf {u}} (\xi _m))\) is necessarily an oriented edge of \({\mathbf {T}}_{\! \mathtt {w}}\). We then set \({\mathbf {T}}_{\! \mathtt {w}} (t) = \{{\mathbf {u}} (s); s \in [0, t]\}\), which represents the set of the clients who entered the queue before time t, together with the server \(\varnothing \); \({\mathbf {T}}_{\! \mathtt {w}} (t)\) has \(N^\mathtt {w}(t)+1\) vertices (including the server represented by \(\varnothing \)) and therefore \(2N^\mathtt {w}(t)\) oriented edges. Among these \(2N^\mathtt {w}(t)\) oriented edges, the \(|{\mathbf {u}} (t)|\) edges going down from \({\mathbf {u}} (t)\) to \(\varnothing \) are exactly the ones that do not belong to the subset \(\{ ({\mathbf {u}} (\xi _{m-1}), {\mathbf {u}} (\xi _m)); m\! \ge \! 1 \! : \xi _m \le t \}\). Thus, for each \(t \in [0, \infty )\),

$$\begin{aligned} M^\mathtt {w}_t = 2N^\mathtt {w}(t) \! -\! H^\mathtt {w}_t \; . \end{aligned}$$
(94)

Recall the definition of the contour and the height processes of \({\mathbf {T}}_{\! \mathtt {w}}\) introduced in Sect. 3.1, and denoted resp. by \((C^{{\mathbf {T}}_{\! \mathtt {w}}}_t)\) and \((\mathtt {Hght}^{{\mathbf {T}}_{\! \mathtt {w}}}_k)\). Then, observe that, for each \(t \in [0, \infty )\),

$$\begin{aligned} C^{{\mathbf {T}}_{\! \mathtt {w}}}_{M^\mathtt {w}(t)} = H^\mathtt {w}_t \quad \text {and} \quad \sup _{s\in [0, t]} H^\mathtt {w}_s \le 1+ \sup _{s\in [0, t]} \mathtt {Hght}^{{\mathbf {T}}_{\! \mathtt {w}}}_{N^\mathtt {w}_s} . \end{aligned}$$
(95)

Since \(N^\mathtt {w}\) is a homogeneous Poisson process with unit rate, Doob's \(L^2\)-inequality combined with (94) and (95) implies that for \(t, a \in (0, \infty )\),

$$\begin{aligned} {\mathbf {P}}\big (\! \sup _{s\in [0, t]}\! |M^\mathtt {w}_s \! -\! 2s |> 2a\big ) \le 1\! \wedge \! (16t/a^2) + {\mathbf {P}}\big (1+ \!\! \sup _{s\in [0, t]} \! \mathtt {Hght}^{{\mathbf {T}}_{\! \mathtt {w}}}_{N^\mathtt {w}_s} > a \big ) . \end{aligned}$$
(96)

3.3 Red and blue processes

This section contains no new results: we recall here in more detail the embedding, introduced in [17] (and informally recalled in the introduction), of the LIFO queue without repetition that encodes the multiplicative graph \({\varvec{\mathcal {G}}}_{\! \mathtt {w}}\) into the Markovian queue considered in Sect. 3.2. This embedding uses two auxiliary processes, the blue and red processes, that we now define. First, we introduce two independent random point measures on \([0, \infty ) \! \times \! \{ 1, \ldots , n\} \):

$$\begin{aligned} {\mathscr {X}}_\mathtt {w}^{\mathtt {b}} = \sum _{k\ge 1} \delta _{(\tau ^{\mathtt {b}}_k , {\mathtt {J}}^{\mathtt {b}}_k)} \quad \text {and} \quad {\mathscr {X}}_\mathtt {w}^{\mathtt {r}} = \sum _{k\ge 1} \delta _{(\tau ^{\mathtt {r}}_k , {\mathtt {J}}^{\mathtt {r}}_k)}, \end{aligned}$$
(97)

that are Poisson point measures with intensity \(\ell \! \otimes \! \nu _\mathtt {w}\), where we recall that \(\ell \) stands for the Lebesgue measure and that \(\nu _\mathtt {w}= \frac{1}{\sigma _1 (\mathtt {w})}\sum _{1 \le j\le n} w_j \delta _{j} \). The blue process \(X^{\mathtt {b}, \mathtt {w}}\) and the red process \(X^{\mathtt {r}, \mathtt {w}}\) are defined respectively by

$$\begin{aligned} X^{\mathtt {b}, \mathtt {w}}_t = -t + \sum _{k\ge 1} w_{{\mathtt {J}}^{\mathtt {b}}_k}\mathbf{1}_{[0, t]} (\tau ^{\mathtt {b}}_k) \quad \text {and} \quad X^{\mathtt {r}, \mathtt {w}}_t = -t + \sum _{k\ge 1} w_{{\mathtt {J}}^{\mathtt {r}}_k}\mathbf{1}_{[0, t]} (\tau ^{\mathtt {r}}_k). \end{aligned}$$
(98)

Note that \(X^{\mathtt {b}, \mathtt {w}}\) and \(X^{\mathtt {r}, \mathtt {w}}\) are two independent spectrally positive Lévy processes with Laplace exponent \(\psi _\mathtt {w}\) given by (81). For all \(j \in \{ 1, \ldots , n\}\) and all \(t\in [0, \infty )\), we next set

$$\begin{aligned} N^{\mathtt {w}}_j (t) = {\mathscr {X}}_\mathtt {w}^{\mathtt {b}} \big ( [0, t] \! \times \! \{ j\} \big ) \quad \text {and} \quad E^\mathtt {w}_j = \inf \big \{ t \in [0, \infty ) : {\mathscr {X}}^{\mathtt {b}}_\mathtt {w}([0, t] \! \times \! \{ j\}) = 1 \big \}. \end{aligned}$$
(99)

Then the \(N^\mathtt {w}_j\) are independent homogeneous Poisson processes with jump-rate \(w_j/ \sigma _1 (\mathtt {w})\) and the r.v. \((\frac{w_j}{\sigma _1 (\mathtt {w})} E^\mathtt {w}_j)_{1\le j\le n}\) are i.i.d. exponentially distributed r.v. with unit mean. We next set

$$\begin{aligned} Y^\mathtt {w}_t = -t \, +\!\!\! \sum _{1\le j \le n} \!\!\! w_j \mathbf{1}_{\{ E^\mathtt {w}_j \le t \}} \quad \text {and} \quad A^\mathtt {w}_t = X^{\mathtt {b}, \mathtt {w}}_t -Y^\mathtt {w}_t = \! \sum _{1\le j\le n} \!\! w_j (N^\mathtt {w}_j (t)\! -\! 1)_+ \; . \end{aligned}$$
(100)

Here \(Y^\mathtt {w}\) is the algebraic load of the following queue without repetition that encodes the multiplicative graph \({\varvec{\mathcal {G}}}_{\! \mathtt {w}}\) (as explained in the introduction): a single server is visited by n clients labelled by \(1, \ldots , n\); Client j arrives at time \(E^\mathtt {w}_j\) and she/he requests a service time \(w_j\); a LIFO (last in first out) policy applies: whenever a new client arrives, the server interrupts the service of the current client (if any) and serves the newcomer; when the latter leaves the queue, the server resumes the previous service.
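To make (99) and (100) concrete, the following sketch (illustrative Python; the function name is ours) simulates the counts \(N^\mathtt {w}_j(t)\) and evaluates \(Y^\mathtt {w}_t\), \(A^\mathtt {w}_t\) and \(X^{\mathtt {b},\mathtt {w}}_t\), so that one can check numerically the decomposition \(X^{\mathtt {b},\mathtt {w}}_t = Y^\mathtt {w}_t + A^\mathtt {w}_t\).

```python
import numpy as np

rng = np.random.default_rng(2)

def blue_quantities(w, t):
    """Simulate N^w_j(t) and return (X^{b,w}_t, Y^w_t, A^w_t), cf. (98)-(100)."""
    w = np.asarray(w, dtype=float)
    counts = rng.poisson(w * t / w.sum())          # N^w_j(t), rate w_j / sigma_1(w)
    arrived = counts >= 1                          # {E^w_j <= t} = {N^w_j(t) >= 1}
    Y = -t + np.sum(w[arrived])                    # load of the queue without repetition
    A = np.sum(w * np.maximum(counts - 1, 0))      # extra load carried by repeated types
    Xb = -t + np.sum(w * counts)                   # blue load
    return Xb, Y, A

Xb, Y, A = blue_quantities([3.0, 2.0, 1.5, 1.0, 0.5], t=4.0)
print(Xb, Y + A)                                   # the two numbers agree (up to rounding)
```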

We embed this queue into a Markovian one that is obtained from \((Y^\mathtt {w}, A^\mathtt {w})\) and \(X^{\mathtt {r}, \mathtt {w}}\) as follows. We first introduce the following time-change process that will play a prominent role:

$$\begin{aligned}&\theta ^{\mathtt {b}, \mathtt {w}}_t \!\! = t + \gamma ^{\mathtt {r}, \mathtt {w}}_{A^\mathtt {w}_t}, \; \text {where for all }x \in [0, \infty ),\quad \text { we have set} \;\; \gamma ^{\mathtt {r}, \mathtt {w}}_x \! = \! \inf \big \{ t \in [0, \infty ) \! : X^{\mathtt {r} , \mathtt {w}}_t \!\!\! < \! - x \big \},\nonumber \\ \end{aligned}$$
(101)

with the convention that \(\inf \emptyset = \infty \). We next recall various properties of \(\theta ^{\mathtt {b}, \mathtt {w}}\) that are used in the sequel. To that end, let us first note that standard results on Lévy processes (see e.g. Bertoin’s book [6] Chapter VII) assert that \((\gamma ^{\mathtt {r}, \mathtt {w}}_x)_{x\in [0, \infty )}\) is a (possibly killed) subordinator whose Laplace exponent is given by, for \(\lambda \in [0, \infty )\),

$$\begin{aligned} {\mathbf {E}}\big [ e^{-\lambda \gamma ^{\mathtt {r}, \mathtt {w}}_x } \big ]= e^{-x\psi ^{-1}_\mathtt {w}(\lambda ) } \quad \text {where} \quad \psi ^{-1}_\mathtt {w}(\lambda ) = \inf \big \{ u \in [0, \infty ) : \psi _\mathtt {w}(u) \! >\! \lambda \big \}. \end{aligned}$$
(102)

Set \(\varrho _{\mathtt {w}} = \psi ^{-1}_\mathtt {w}(0)\), the largest root of \(\psi _\mathtt {w}\). Then, \(\varrho _{\mathtt {w}} = 0\) in the subcritical or critical cases, while \(\varrho _{\mathtt {w}}\! > \! 0\) in the supercritical case. Moreover, in the latter case, \(-I^{\mathtt {r}, \mathtt {w}}_\infty :=-\inf _{t\in [0, \infty )}X^{\mathtt {r}, \mathtt {w}}_{t}\) is exponentially distributed with parameter \(\varrho _\mathtt {w}\) and \(\gamma ^{\texttt {r}, \mathtt {w}}_x \! <\! \infty \) if and only if \(x\! <\! -I^{\mathtt {r}, \mathtt {w}}_\infty \). It follows that the explosion time for \(\theta ^{\mathtt {b}, \mathtt {w}}\) is given by

$$\begin{aligned} T^*_\mathtt {w}= \sup \{ t \in [0, \infty )\! : \theta ^{\mathtt {b}, \mathtt {w}}_t \!< \infty \}= \sup \{ t \in [0, \infty )\! : A^\mathtt {w}_t \! < \! - I^{\mathtt {r}, \mathtt {w}}_\infty \} \; , \end{aligned}$$
(103)

which is infinite in the critical and subcritical cases and which is a.s. finite in the supercritical cases. Note that \( \theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}-) \! < \! \infty \) in the supercritical cases.
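Since \(\psi _\mathtt {w}\) is convex with \(\psi _\mathtt {w}(0)=0\), \(\psi _\mathtt {w}'(0+)=\alpha _\mathtt {w}\) and \(\psi _\mathtt {w}(\lambda )\rightarrow \infty \) as \(\lambda \rightarrow \infty \), the root \(\varrho _\mathtt {w}=\psi ^{-1}_\mathtt {w}(0)\) can be located by bisection in the supercritical case. The sketch below (illustrative Python, not from the paper) does this for a finite weight sequence.

```python
import numpy as np

def psi_w(lam, w):
    """Laplace exponent from (81)."""
    w = np.asarray(w, dtype=float)
    s1 = w.sum()
    return (1.0 - np.sum(w ** 2) / s1) * lam + np.sum((w / s1) * (np.exp(-lam * w) - 1.0 + lam * w))

def rho_w(w, tol=1e-12):
    """Largest root rho_w = psi_w^{-1}(0); it is 0 in the (sub)critical cases."""
    w = np.asarray(w, dtype=float)
    if np.sum(w ** 2) <= w.sum():
        return 0.0
    lo, hi = 0.0, 1.0
    while psi_w(hi, w) <= 0.0:        # bracket the positive root
        hi *= 2.0
    while hi - lo > tol:              # bisection on the convex function psi_w
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi_w(mid, w) <= 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(rho_w([5.0, 4.0, 3.0, 2.0]))    # a supercritical example: sigma_2 > sigma_1
```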

We also introduce the following processes:

$$\begin{aligned} \Lambda ^{\mathtt {b}, \mathtt {w}}_t \! = \inf \big \{ s \in [0, \infty ) \! : \theta ^{\mathtt {b}, \mathtt {w}}_s \! >\! t \} \quad \text {and} \quad \Lambda ^{\mathtt {r}, \mathtt {w}}_t \!\! = t \! -\! \Lambda ^{\mathtt {b}, \mathtt {w}}_t \!. \end{aligned}$$
(104)

Both processes \(\Lambda ^{\mathtt {b}, \mathtt {w}}\) and \(\Lambda ^{\mathtt {r}, \mathtt {w}}\) are continuous and nondecreasing. Moreover, a.s. \(\lim _{t\rightarrow \infty } \Lambda ^{\mathtt {r}, \mathtt {w}}_t = \infty \). In critical and subcritical cases, we also get a.s. \(\lim _{t\rightarrow \infty } \Lambda ^{\mathtt {b}, \mathtt {w}}_t = \infty \) and \(\Lambda ^{\mathtt {b}, \mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_t) = t\) for all \(t \in [0, \infty )\). However, in supercritical cases, \( \Lambda ^{\mathtt {b}, \mathtt {w}}_t = T^*_\mathtt {w}\) for \(t \in [\theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}-), \infty )\) and a.s. for all \(t \in [0, T^*_\mathtt {w})\), \(\Lambda ^{\mathtt {b}, \mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_t) = t\). The following proposition was proved in [17].
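Numerically, \(\Lambda ^{\mathtt {b}, \mathtt {w}}\) is just the right-continuous inverse of the nondecreasing map \(\theta ^{\mathtt {b}, \mathtt {w}}\), as in (104). A crude grid-based sketch (illustrative Python, with a toy time change standing in for \(\theta ^{\mathtt {b}, \mathtt {w}}\)) reads as follows.

```python
import numpy as np

def right_inverse(theta, grid, t):
    """Lambda_t = inf{ s : theta(s) > t } for a nondecreasing theta sampled on a grid,
    a crude numerical version of (104)."""
    vals = np.array([theta(s) for s in grid])
    idx = np.searchsorted(vals, t, side="right")   # first grid index with theta > t
    return grid[idx] if idx < len(grid) else np.inf

theta = lambda s: s + np.floor(s)                  # toy time change: slope 1, unit jumps at integers
grid = np.linspace(0.0, 10.0, 10_001)
print(right_inverse(theta, grid, 3.4))             # equals 2.0: theta jumps above 3.4 at s = 2
```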

Proposition 3.2

We keep the previous notation and we define the process \(X^\mathtt {w}\) by, \(t \in [0, \infty )\),

$$\begin{aligned} X^{\mathtt {w}}_t = X^{\mathtt {b}, \mathtt {w}}_{ \Lambda ^{\mathtt {b}, \mathtt {w}}_t } + X^{\mathtt {r}, \mathtt {w}}_{ \Lambda ^{\mathtt {r}, \mathtt {w}}_t } \; . \end{aligned}$$
(105)

Then, \(X^{\mathtt {w}}\) has the same law as \(X^{\mathtt {b}, \mathtt {w}}\) and \(X^{\mathtt {r}, \mathtt {w}}\): namely, it is a spectrally positive Lévy process with Laplace exponent \(\psi _\mathtt {w}\) as defined in (81). Furthermore, we have

$$\begin{aligned} \quad Y^\mathtt {w}_t = X^\mathtt {w}_{\theta ^{\mathtt {b}, \mathtt {w}}_t }, \quad \text {a.s.}\; \text {for all }t \in [0, T^*_\mathtt {w}) . \end{aligned}$$
(106)

Proof

See Proposition 2.2 in [17]. \(\square \)

Recall that \(\mathtt {Blue}\) and \(\mathtt {Red}\) are the sets of times during which respectively blue and red clients are served (the server is considered as a blue client). Then formally these sets are given by

$$\begin{aligned} \mathtt {Red} = \bigcup _{\qquad t\in [0, T^*_\mathtt {w}]: \Delta \theta ^{\mathtt {b}, \mathtt {w}}_t >0} \big [ \theta ^{\mathtt {b}, \mathtt {w}}_{t-}, \theta ^{\mathtt {b}, \mathtt {w}}_t \big ) \quad \text {and} \quad \mathtt {Blue} = [0, \infty ) \backslash \mathtt {Red} . \end{aligned}$$
(107)

Note that the union defining \(\mathtt {Red}\) is countably infinite in critical and subcritical cases and that it is a finite union in supercritical cases where \(\big [ \theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}-), \theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w})) = [\theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}-), \infty )\). Recall the definition of the time-changes \(\Lambda ^{\mathtt {b}, \mathtt {w}}\) and \(\Lambda ^{\mathtt {r}, \mathtt {w}}\) in (104); then, we easily check that

$$\begin{aligned} \Lambda ^{\mathtt {b}, \mathtt {w}}_t \! = \! \int _0^t \!\! \mathbf{1}_{\mathtt {Blue}} (s) \, ds \quad \text {and} \quad \Lambda ^{\mathtt {r}, \mathtt {w}}_t \!\! = t \! -\! \Lambda ^{\mathtt {b}, \mathtt {w}}_t \! = \! \int _0^t \!\! \mathbf{1}_{\mathtt {Red}} (s) \, ds. \end{aligned}$$
(108)

We have the following properties of \(X^{\mathtt {w}}\), \(\theta ^{\mathtt {b}, \mathtt {w}}\), etc. from [17] (see also Figure 3).

Lemma 3.3

A.s. for all \(b \in [0, T^*_\mathtt {w}]\) such that \(\theta ^{\mathtt {b}, \mathtt {w}}_{b-}<\theta ^{\mathtt {b}, \mathtt {w}}_{b}\), we get the following for all \(s \in [\theta ^{\mathtt {b}, \mathtt {w}}_{b-} , \theta ^{\mathtt {b}, \mathtt {w}}_{b} )\):

$$\begin{aligned} X^{\mathtt {w}}_s \! > \! X^\mathtt {w}_{(\theta ^{\mathtt {b}, \mathtt {w}}_{b-})-}\!\!\! = \! Y^{\mathtt {w}}_b , \; \, \, \Delta X^\mathtt {w}_{\theta ^{\mathtt {b}, \mathtt {w}}_{b-}} = \Delta A^{\mathtt {w}}_b \; \,\, \text {and} \; \, X^\mathtt {w}_{(\theta ^{\mathtt {b}, \mathtt {w}}_{b-})-}\! \! = X^\mathtt {w}_{\theta ^{\mathtt {b}, \mathtt {w}}_{b}}=X^\mathtt {w}_{(\theta ^{\mathtt {b}, \mathtt {w}}_{b})-} \; \, \text {if} \; \, \theta ^{\mathtt {b}, \mathtt {w}}_{b}\! \! < \! \infty . \end{aligned}$$
(109)

Thus, a.s. for all \(s \in [0, \infty )\), \(X^\mathtt {w}_s \! \ge \! Y^\mathtt {w}(\Lambda ^{\mathtt {b}, \mathtt {w}}_s)\). Moreover, a.s. for all \(s_1, s_2 \in [0, \infty )\) such that \(\Lambda ^{\mathtt {b}, \mathtt {w}}_{s_1} \! < \! \Lambda ^{\mathtt {b}, \mathtt {w}}_{s_2}\), we have

$$\begin{aligned} \inf _{b\in [\Lambda ^{\mathtt {b}, \mathtt {w}}_{s_1} , \Lambda ^{\mathtt {b}, \mathtt {w}}_{s_2}]} Y^\mathtt {w}_b= \inf _{s\in [s_1, s_2]} X^\mathtt {w}_s\; . \end{aligned}$$
(110)

Finally, define the red time-change by

$$\begin{aligned} \theta ^{\mathtt {r}, \mathtt {w}}_t = \inf \big \{s \in [0, \infty ): \Lambda ^{\mathtt {r}, \mathtt {w}}_s >t \, \big \}\,, \end{aligned}$$
(111)

Then, for all \(s, t \in [0, \infty )\), \(\theta ^{\mathtt {r}, \mathtt {w}}_{s+t} \! -\! \theta ^{\mathtt {r}, \mathtt {w}}_t \! \ge \! s\) and if \(\Delta \theta ^{\mathtt {r}, \mathtt {w}}_t \! >\! 0\), then \(\Delta X^{\mathtt {r}, \mathtt {w}}_t = 0\).

Proof

See Lemma 4.1 in [17]. \(\square \)

Fig. 3
figure 3

Decomposition of \(X^\mathtt {w}\) into \(X^{\mathtt b, {\mathbf {w}}}\) and \(X^{\mathtt r, {\mathbf {w}}}\). Above, the process \(X^\mathtt {w}\): clients are in bijection with its jumps; their types are the numbers next to the jumps. The grey blocks correspond to the set \(\mathtt {Blue}\). Concatenating these blocks yields the blue process \(X^{\mathtt b, {\mathbf {w}}}\). The remaining pieces give rise to the red process \(X^{\mathtt r, {\mathbf {w}}}\). Concatenating the grey blocks but without the final jump of each block yields \(Y^\mathtt {w}\). Alternatively, we can obtain \(Y^\mathtt {w}\) by removing the temporal gaps between the grey blocks in \(X^{\mathtt {w}}\): this is the graphic representation of \(Y^\mathtt {w}=X^\mathtt {w}\circ \theta ^{\mathtt b, {\mathbf {w}}}\). Observe also that each connected component of \(\mathtt {Red}\) begins with the arrival of a client whose type is a repeat among the types of the previous blue ones, and ends with the departure of this red client, marked by \({\scriptstyle \times }\) on the abscissa

Embedding of the tree The previous embedding of the LIFO queue without repetition governed by \(Y^\mathtt {w}\) into the Markovian queue governed by \(X^\mathtt {w}\) yields a related embedding of the trees associated with these queues. More precisely, consider first the queue governed by \(Y^\mathtt {w}\): the LIFO rule implies that Client i arriving at time \(E_i\) will leave the queue at the moment \(\inf \{t\ge E_i: Y^\mathtt {w}_t<Y^\mathtt {w}_{E_i-}\}\), namely the first moment when the service load falls back to the level right before her/his arrival. It follows that the number of clients waiting in queue at time t is given by

$$\begin{aligned} {\mathcal {H}}^\mathtt {w}_t \!= & {} \# {\mathcal {J}}_t, \; \text {where} \; {\mathcal {J}}_t = \big \{ s \in [0, t] \! : \! J^{\mathtt {w},s -}_{t} \! <\! J^{\mathtt {w},s }_{t} \big \} \; \text {and where }J^{\mathtt {w},s }_t = \inf _{r\in [s, t]} \! Y^\mathtt {w}_r \text { for } s \in [0, t].\nonumber \\ \end{aligned}$$
(112)

Recall that we denote by \({\varvec{\mathcal {T}}}_{\!\! \mathtt {w}}\) the tree formed by the clients in the queue governed by \(Y^\mathtt {w}\). The process \({\mathcal {H}}^{\mathtt {w}}\) is actually the contour (or the depth-first exploration) of \({\varvec{\mathcal {T}}}_{\!\! \mathtt {w}}\) and the graph-metric \(d_{{\varvec{\mathcal {T}}}_{\!\! \mathtt {w}} }\) of \({\varvec{\mathcal {T}}}_{\!\! \mathtt {w}}\) is encoded by \({\mathcal {H}}^\mathtt {w}\) in the following way: if we denote by \(V_{t}\in \{0, 1, \dots , n\}\) the label of the client served at time t (with the understanding that \(V_{t}=0\) if the server is idle), then for all \(s, t \in [0, \infty )\),

$$\begin{aligned} d_{{\varvec{\mathcal {T}}}_{\!\! \mathtt {w}} } (V_s, V_t) = {\mathcal {H}}^\mathtt {w}_t+{\mathcal {H}}^\mathtt {w}_s - \, 2 \min _{\quad r \in [s \wedge t , s\vee t] } {\mathcal {H}}^\mathtt {w}_r \; . \end{aligned}$$
(113)
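Formula (113) is the standard way of reading distances off a contour function. The toy sketch below (illustrative Python, in discrete time for simplicity) implements it for a contour stored as an array.

```python
import numpy as np

def tree_distance(H, s, t):
    """Distance read off a (discrete-time) contour H, cf. (113)."""
    lo, hi = min(s, t), max(s, t)
    return int(H[s] + H[t] - 2 * np.min(H[lo:hi + 1]))

H = np.array([0, 1, 2, 1, 2, 3, 2, 1, 0])   # a small contour excursion
print(tree_distance(H, 2, 5))               # distance between the vertices visited at times 2 and 5
```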

Similarly for the Markovian queue governed by the process \(X^{\mathtt {w}}\) given in Proposition 3.2, we define its associated height process \(H^\mathtt {w}\) by setting \(H^\mathtt {w}_t\) to be the number of the clients waiting at time t, namely,

$$\begin{aligned} H^\mathtt {w}_t \!= & {} \# {\mathcal {K}}_t, \; \text {where} \; {\mathcal {K}}_t = \big \{ s \in [0, t]\! : \! I^{\mathtt {w}, s-}_{t} \! <\! I^{\mathtt {w}, s}_{t} \big \} \; \text {and where } I^{\mathtt {w}, s}_t= \inf _{r\in [s, t]}X^\mathtt {w}_r \text { for }s \in [0, t]\;.\nonumber \\ \end{aligned}$$
(114)

Then \(H^{\mathtt {w}}\) is the contour process of the i.i.d. Galton–Watson forest \({\mathbf {T}}_{\mathtt {w}}\) with offspring distribution \(\mu _{\mathtt {w}}\) given by (85). Note that in (sub)critical cases, \(H^\mathtt {w}\) fully explores the whole forest \({\mathbf {T}}_{\mathtt {w}}\). However, in supercritical cases, the exploration of \(H^\mathtt {w}\) does not go beyond the first infinite line of descent. We shall use the following form of the previously mentioned embedding of \({\varvec{\mathcal {T}}}_{\!\! \mathtt {w}}\) into \({\mathbf {T}}_{\mathtt {w}}\), recalled from [17].

Lemma 3.4

Following the previous notation, we have

$$\begin{aligned} \quad {\mathcal {H}}^\mathtt {w}_t = H^\mathtt {w}_{\theta ^{\mathtt {b} , \mathtt {w}}_t} \quad \text {a.s. for all } t \in [0, T^*_\mathtt {w}). \end{aligned}$$
(115)

Proof

See Lemma 2.3 in [17]. \(\square \)

3.4 Estimates on the coloured processes

We keep the notation from Sect. 3.3, and provide here estimates for \(A^\mathtt {w}\), \(X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\) and \(X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}}\) that are used in the course of the proof of Theorem 2.4.

Recall that \(A^\mathtt {w}_t = \sum _{j\ge 1} \!\! w_j (N^\mathtt {w}_j (t)\! -\! 1)_+\), where the \(N^\mathtt {w}_j (\cdot )\) are independent homogeneous Poisson processes with respective jump-rate \(w_j/ \sigma _1 (\mathtt {w})\). Let \(({\mathscr {F}}_t)_{t\in [0, \infty )}\) be a filtration such that for all \(j\! \ge \! 1\), \(N_j^{\mathtt {w}}\) is an \(({\mathscr {F}}_t)\)-homogeneous Poisson process. Namely,

  • \(N^\mathtt {w}_j\) is \(({\mathscr {F}}_t)\)-adapted;

  • for all a.s. finite \(({\mathscr {F}}_t)\)-stopping time T, set \(N^{_{\mathtt {w}, T}}_{^j} (t) = N^\mathtt {w}_j (T+ t) \! -\! N^\mathtt {w}_j (T)\). Then, the sequence of processes \((N^{_{\mathtt {w}, T}}_{^j})_{j\ge 1}\) is independent of \({\mathscr {F}}_T\) and distributed as \((N^\mathtt {w}_j)_{j\ge 1}\).

Thus, the process \(A^{\mathtt {w}, T} = \sum _{j\ge 1} \!\! w_j ( N^{\mathtt {w}, T}_j (\cdot )\! -\! 1)_+\) is independent of \({\mathscr {F}}_T\) and distributed as \(A^\mathtt {w}\). We easily obtain

$$\begin{aligned} A^{\mathtt {w}}_{T+t} - A^\mathtt {w}_T = A^{\mathtt {w}, T}_t + \sum _{j\ge 1}w_j \mathbf{1}_{\{ E^\mathtt {w}_j \le T \}} \mathbf{1}_{\{ N^{\mathtt {w}, T}_j (t) \ge 1 \}} , \end{aligned}$$
(116)

where we recall that \(E^\mathtt {w}_j\) stands for the first jump-time of \(N^\mathtt {w}_j\); \(E^\mathtt {w}_j\) is therefore exponentially distributed with mean \(\sigma _1 (\mathtt {w})/ w_j\). Elementary calculations combined with (116) immediately entail the following lemma.

Lemma 3.5

We keep the notation from above. For all \(({\mathscr {F}}_t)\)-stopping time T and all \(a, t_0 , t \in (0, \infty )\),

$$\begin{aligned} a \, {\mathbf {P}}\big ( T\le t_0 \, ; \, A^\mathtt {w}_{T+t}\! -\! A^\mathtt {w}_T \ge a \big ) \le {\mathbf {E}}[A^\mathtt {w}_t] + \sum _{j\ge 1} w_j {\mathbf {P}}(E^\mathtt {w}_j \le t_0 ) {\mathbf {P}}(N^{\mathtt {w}}_j (t) \ge 1). \end{aligned}$$
(117)

Note that \({\mathbf {E}}[A^\mathtt {w}_t] = \sum _{j\ge 1} w_j (e^{-tw_j / \sigma _1 (\mathtt {w})} \! -\! 1 + \frac{ tw_j}{\sigma _1 (\mathtt {w})} )\). Thus,

$$\begin{aligned} a \, {\mathbf {P}}\big ( T\le t_0 \, ; \, A^\mathtt {w}_{T+t}\! - \! A^\mathtt {w}_T \ge a \big ) \le t \big ( t_0 + \frac{_{_1}}{^{^2}} t \big ) \frac{_{\sigma _3 (\mathtt {w})}}{^{\sigma _1 (\mathtt {w})^2}} . \end{aligned}$$
(118)

We next discuss the oscillations of \(X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\) and of \(X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}}\). To that end, let us recall that \({\mathbf {D}}([0, \infty ) , {\mathbb {R}})\) stands for the space of \({\mathbb {R}}\)-valued càdlàg functions equipped with Skorokhod’s topology. For all \(y \in {\mathbf {D}}([0, \infty ) , {\mathbb {R}})\) and for all intervals I of \([0, \infty )\), we set

$$\begin{aligned} \mathtt {osc} (y, I)= \sup \big \{ |y(s)\! -\! y(t) | ; \, s, t \in I \big \}\,, \end{aligned}$$
(119)

that is the oscillation of y on I. It is easy to check that for all \( a \!<\! b \! <\! c\),

$$\begin{aligned} \mathtt {osc} (y, [a, c) \, )\le & {} \mathtt {osc} (y, [a, b]) + \mathtt {osc} (y, [b, c)) \le \mathtt {osc} (y, [a, b) \, ) + |\Delta y(b)| + \mathtt {osc} (y, [b, c))\; ,\nonumber \\ \end{aligned}$$
(120)

where we recall that \(\Delta y (b) = y(b)\! -\! y(b-)\). We also recall the definition of the càdlàg modulus of continuity of y: let \(z, \eta \in (0, \infty )\); then, we set

$$\begin{aligned} w_z(y,\eta )= \inf \big \{\! \max _{1\le i \le r } \mathtt {osc} (y, [t_{i-1} , t_i ) \, )\; ; \; \, 0 = t_0 \!< \! \cdots \! < \! t_r = z \; : \min _{\quad 1\le i \le r-1} (t_i\! -\! t_{i-1}) \ge \eta \; \big \} , \end{aligned}$$
(121)

Here, the infimum is taken over the set of all subdivisions \((t_i)_{0\le i\le r}\) of [0, z], r being a positive integer; note that we do not require \(t_r\! -\! t_{r-1} \! \ge \! \eta \). We refer to Jacod & Shiryaev [27], Chapter VI, for a general introduction to Skorokhod's topology. Recall the definition of \(T^*_\mathtt {w}\) in (103) and the definitions of \( \Lambda ^{\mathtt {b}, \mathtt {w}}\) and \(\Lambda ^{\mathtt {r}, \mathtt {w}}\) in (104). Recall also that \(X^{\mathtt {w}} = X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}+ X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}}\) (see (105) in Proposition 3.2). The following lemma is a key argument in the proof of Theorem 2.4.
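For a càdlàg step path stored as jump times and values, the oscillation (119) is easy to evaluate, and any admissible subdivision yields an upper bound on the modulus (121). The following sketch (illustrative Python; it does not optimise over subdivisions) can be used to experiment with these quantities.

```python
import numpy as np

def osc(times, values, a, b):
    """Oscillation (119) on [a, b) of the cadlag step path equal to values[k] on [times[k], times[k+1])."""
    times, values = np.asarray(times, dtype=float), np.asarray(values, dtype=float)
    k0 = np.searchsorted(times, a, side="right") - 1      # value in force at time a
    inside = (times > a) & (times < b)                    # jumps occurring inside (a, b)
    vals = np.concatenate(([values[k0]], values[inside]))
    return float(vals.max() - vals.min())

def modulus_upper_bound(times, values, subdivision):
    """Max of the oscillations over the cells of a subdivision: an upper bound
    for w_z(y, eta) in (121) whenever the subdivision has mesh >= eta."""
    return max(osc(times, values, a, b) for a, b in zip(subdivision[:-1], subdivision[1:]))

times, values = [0.0, 0.5, 1.2, 2.0], [0.0, 1.0, 3.0, 4.0]   # a pure-jump toy path
print(osc(times, values, 0.0, 1.5), modulus_upper_bound(times, values, [0.0, 1.0, 2.5]))
```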

Lemma 3.6

We keep the notation from above. Let \( \eta \in (0, \infty )\). Then, the following statements hold true:

  1. (i)

    Almost surely, for all \(z_0, z_1, z \in [0, \infty )\), if \( z_1\!\le \! \theta ^{\mathtt {b}, \mathtt {w}}_{z_0} \le z\), then

    $$\begin{aligned} w_{z_1} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big ) \le w_{z+\eta } \big (X^\mathtt {w}\! , \eta \big ) + w_{z_0} \big (X^{\mathtt {b}, \mathtt {w}}\! , \eta \big ) . \end{aligned}$$
    (122)
  2. (ii)

    Assume that we are in the supercritical cases (namely, \(\alpha _\mathtt {w}= 1\! -\! \frac{_{\sigma _2(\mathtt {w})}}{^{\sigma _1(\mathtt {w})}} \! < \! 0\)) where a.s. \(T^*_{\mathtt {w}}\!< \! \infty \) and \(\theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}\! -)\! < \!\infty \). Then a.s. for all \(z_0, z_1, z \in [0, \infty )\) if \(z \! >\! \theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}\! -)\) and \(z_0 \!>\! T^*_{\mathtt {w}} \! >\! 2\eta \), we have

    $$\begin{aligned} w_{z_1} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big ) \le w_{z+\eta } \big (X^\mathtt {w}\! , \eta \big ) +3w_{z_0} \big (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta \big ). \end{aligned}$$
    (123)
  3. (iii)

    Almost surely on the event \(\{ z \! >\! \Lambda ^{\mathtt {r}, \mathtt {w}}_{z_1} \}\), we have \(w_{z_1} \big ( X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}}\, , \eta \big ) \le w_{z} \big (X^{\mathtt {r}, \mathtt {w}}\! , \eta \big )\).

Proof

First note that for all intervals I, we have

$$\begin{aligned} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , I \big )= & {} \sup \big \{ \big |X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}_t}\! -\! X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}_s} \big | ; s, t \in I \big \} = \sup \big \{ \big |X^{\mathtt {b}, \mathtt {w}}_{t}\! -\! X^{\mathtt {b}, \mathtt {w}}_{s} \big | ; s, t \in \big \{\Lambda ^{\mathtt {b}, \mathtt {w}}_u ; u \in I \big \} \big \}. \end{aligned}$$

We fix \(\eta , a, b \in [0, T^*_\mathtt {w})\) such that \(b\! - \! a \! \ge \! \eta \). By the definition (101) of \(\theta ^{\mathtt {b}, \mathtt {w}}\), we get \(\theta ^{\mathtt {b}, \mathtt {w}}_{b-} \! \! -\! \theta ^{\mathtt {b}, \mathtt {w}}_{a} \! \ge \! b\! -\! a \! \ge \! \eta \). Since \(\Lambda ^{\mathtt {b}, \mathtt {w}}\) is non-decreasing and continuous, and since \(\theta ^{\mathtt {b}, \mathtt {w}}\) is strictly increasing, we get \(\{\Lambda ^{\mathtt {b}, \mathtt {w}}_t ; t \in [\theta ^{\mathtt {b}, \mathtt {w}}_{a} , \theta ^{\mathtt {b}, \mathtt {w}}_{b-} ) \} = [a, b)\) and

$$\begin{aligned} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} ,[\theta ^{\mathtt {b}, \mathtt {w}}_{a} , \theta ^{\mathtt {b}, \mathtt {w}}_{b-}) \, \big ) = \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}} , [a, b) \, \big ) . \end{aligned}$$
(124)

We next suppose that \(\Delta \theta ^{\mathtt {b}, \mathtt {w}}_{b} \! \! >\! 0\). Then, \(\{\Lambda ^{\mathtt {b}, \mathtt {w}}_t ; t \in [\theta ^{\mathtt {b}, \mathtt {w}}_{a} , \theta ^{\mathtt {b}, \mathtt {w}}_{b} ) \} = [a, b]\) and by (120) it follows that

$$\begin{aligned} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} ,[\theta ^{\mathtt {b}, \mathtt {w}}_{a} , \theta ^{\mathtt {b}, \mathtt {w}}_{b}) \, \big ) = \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}} , [a, b] \, \big ) \le \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}} , [a, b) \, \big ) + |\Delta X^{\mathtt {b}, \mathtt {w}}_b| . \end{aligned}$$
(125)

Since the process \( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} \) is constant on \([\theta ^{\mathtt {b}, \mathtt {w}}_{b-} , \theta ^{\mathtt {b}, \mathtt {w}}_{b})\), we get \(\mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} ,[\theta ^{\mathtt {b}, \mathtt {w}}_{b-} , \theta ^{\mathtt {b}, \mathtt {w}}_{b}) \big ) = 0\) and thus

$$\begin{aligned} \max \Big ( \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} ,[\theta ^{\mathtt {b}, \mathtt {w}}_{a} , \theta ^{\mathtt {b}, \mathtt {w}}_{b-})\big ) , \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} ,[\theta ^{\mathtt {b}, \mathtt {w}}_{b-} , \theta ^{\mathtt {b}, \mathtt {w}}_{b}) \big )\Big )= \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}} , [a, b) \, \big ) . \end{aligned}$$
(126)

We next assume that \(\Delta \theta ^{\mathtt {b}, \mathtt {w}}_{b} \in (0, \eta )\). We want to control \(|\Delta X^{\mathtt {b}, \mathtt {w}}_b| \) in terms of the càdlàg \(\eta \)-modulus of continuity of \(X^\mathtt {w}\). To that end, let us introduce \(z \in (0, \infty )\) such that \( \theta ^{\mathtt {b}, \mathtt {w}}_{b-}\! \le \! z\) and \(0 = t_0 \!< \! \cdots \! < \! t_r = z+\eta \) such that \(\min _{1\le i \le r-1} (t_i\! -\! t_{i-1}) \! \ge \! \eta \). Then, there exists \(i \in \{ 1, \ldots , r\}\) such that \(t_{i-1} \le \theta ^{\mathtt {b}, \mathtt {w}}_{b-} \! < \! t_{i} \) and necessarily i satisfies \(t_i\! -\! t_{i-1}\! \ge \! \eta \): indeed, it is clear if \(i\! <\! r\) and if \(i\! =\! r\), then \(t_{r-1} \le \theta ^{\mathtt {b}, \mathtt {w}}_{b-} \! \le z \!< \!z+ \eta \! =\! t_{r} \). There are two cases to consider:

  • If \(t_{i-1} \! < \! \theta ^{\mathtt {b}, \mathtt {w}}_{b-}\), then \(\mathtt {osc} (X^\mathtt {w}\! , [t_{i-1}, t_{i} )) \! \ge \! |\Delta X^\mathtt {w}( \theta ^{\mathtt {b}, \mathtt {w}}_{b-})|\). Since \(\theta ^{\mathtt {b}, \mathtt {w}}_{b}\! < \! \infty \), (109) in Lemma 3.3 implies that \(|\Delta X^\mathtt {w}( \theta ^{\mathtt {b}, \mathtt {w}}_{b-})| = |\Delta X^{\mathtt {b}, \mathtt {w}}_b|\). Thus, \(\mathtt {osc} (X^\mathtt {w}\! , [t_{i-1}, t_{i} )) \! \ge \! |\Delta X^{\mathtt {b}, \mathtt {w}}_b|\).

  • If \(t_{i-1} = \theta ^{\mathtt {b}, \mathtt {w}}_{b-}\), since \(\Delta \theta ^{\mathtt {b}, \mathtt {w}}_{b} \in (0, \eta )\) and since \(t_i - t_{i-1}\! \ge \! \eta \), we get \( \theta ^{\mathtt {b}, \mathtt {w}}_{b} \! < \! t_{i} \). Then \(\mathtt {osc} (X^\mathtt {w}, [t_{i-1}, t_{i} ))\) \( \ge \! | X^{\mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_{b-}) \! -\! X^{\mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_{b})|\). Since \(\theta ^{\mathtt {b}, \mathtt {w}}_{b}\! < \! \infty \), (109) in Lemma 3.3 entails \(X^{\mathtt {w}} ((\theta ^{\mathtt {b}, \mathtt {w}}_{b-}) -) = X^{\mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_{b}) \) and \(| X^{\mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_{b-}) \! -\! X^{\mathtt {w}} (\theta ^{\mathtt {b}, \mathtt {w}}_{b})| = |\Delta X^\mathtt {w}( \theta ^{\mathtt {b}, \mathtt {w}}_{b-})| = |\Delta X^{\mathtt {b}, \mathtt {w}}_b| \). Consequently, \(\mathtt {osc} (X^\mathtt {w}, [t_{i-1}, t_{i} )) \! \ge \! |\Delta X^{\mathtt {b}, \mathtt {w}}_b|\).

We have proved that if \(\Delta \theta ^{\mathtt {b}, \mathtt {w}}_{b} \! \in (0, \eta )\) and if \( \theta ^{\mathtt {b}, \mathtt {w}}_{b-}\! \le \! z\), then \(|\Delta X^{\mathtt {b}, \mathtt {w}}_b| \le \max _{1\le i\le r} \mathtt {osc} (X^\mathtt {w}, [t_{i-1}, t_{i} ) ) \); since it holds true for all subdivisions of \([0, z+\eta ]\) satisfying the conditions as above, it follows that

$$\begin{aligned} \text {a.s. on }\{ \theta ^{\mathtt {b}, \mathtt {w}}_{b-}\! \le \! z \, ; \, \Delta \theta ^{\mathtt {b}, \mathtt {w}}_{b} \! \in (0, \eta ) \}, \quad |\Delta X^{\mathtt {b}, \mathtt {w}}_b| \le w_{z+\eta } \big ( X^\mathtt {w}, \eta \big ). \end{aligned}$$
(127)

We are now ready to prove (122). Let us fix \(z_0, z \in (0, \infty )\) and let \(0 = t_0 \!< \! \cdots \! < \! t_r\! = z_0\) be such that \(\min _{1\le i \le r-1} (t_i\! -\! t_{i-1}) \! \ge \! \eta \). We assume that \(\theta ^{\mathtt {b} , \mathtt {w}}_{z_0} \le z\). For all \(i \in \{ 1, \ldots , r\}\), we set \(S_i = \{ \theta ^{\mathtt {b} , \mathtt {w}}_{t_i} \}\) if \(\Delta \theta ^{\mathtt {b} , \mathtt {w}}_{t_i} \! < \! \eta \) and we set \(S_i = \{ \theta ^{\mathtt {b} , \mathtt {w}}_{t_i-} , \theta ^{\mathtt {b} , \mathtt {w}}_{t_i} \}\) if \(\Delta \theta ^{\mathtt {b} , \mathtt {w}}_{t_i} \! \ge \! \eta \); we then define \(S\! =\! \{ s_0 = 0< \ldots \! < \! s_{r^\prime } = \theta ^{\mathtt {b} , \mathtt {w}}_{z_0}\} = \{ 0\} \cup S_1 \cup \ldots \cup S_r \) that is a subdivision of \([0, \theta ^{\mathtt {b} , \mathtt {w}}_{z_0}]\) such that \(\min _{1\le i \le r^\prime -1} (s_i\! -\! s_{i-1}) \! \ge \! \eta \) (indeed, recall that \(\theta ^{\mathtt {b} , \mathtt {w}}_{t_i-}\! -\! \theta ^{\mathtt {b} , \mathtt {w}}_{t_{i-1}} \! \ge \! t_i \! -\! t_{i-1}\)). By (126) (if \(S_i\) has two points) and by (125) and (127) (if \(S_i\) reduces to a single point), we obtain

$$\begin{aligned} w_{\theta ^{\mathtt {b} , \mathtt {w}}_{z_0}} \big (X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , \eta )\le & {} \max _{1\le i\le r^\prime } \Big ( \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , [s_{i-1} , s_i)\big ) \Big ) \le w_{z+\eta } \big ( X^\mathtt {w}\! , \eta \big ) + \max _{1\le i\le r} \Big ( \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}\! ,\, [t_{i-1} , t_i)\big ) \Big ). \end{aligned}$$

Since this holds true for all subdivisions \((t_i)\) and since \(z^\prime \! \mapsto \! w_{z^\prime } ( y(\cdot ), \eta )\) is nondecreasing, it easily entails (122) if \(z_1\! \le \! \theta ^{\mathtt {b} , \mathtt {w}}_{z_0} \le z\), which completes the proof of (i).

Let us prove (ii). We assume that we are in the supercritical cases. The control of the càdlàg modulus of continuity of \(X^{\mathtt {b}, \mathtt {w}} \circ \Lambda ^{\mathtt {b}, \mathtt {w}}\) is more complicated because this process becomes eventually constant after a last jump at time \(\theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}\! -)\). To simplify notation we set \(\tau = \theta ^{\mathtt {b}, \mathtt {w}} (T^*_\mathtt {w}\! -)\). We suppose that \(z \! >\! \tau \) and \(z_0 \!>\! T^*_{\mathtt {w}} \! >\! 2\eta \). We fix \(z_1 \in (0, \infty )\). There are several cases to consider.

\(\bullet \) We first assume that \(z_1 \le \tau \). If \(z_{1}<\tau \), then there is \(z^\prime _0 \in [0, T^*_{\mathtt {w}})\) such that \(z_1 \! \le \! \theta ^{\mathtt {b}, \mathtt {w}}_{z_0^\prime }\); next, note that \( \theta ^{\mathtt {b}, \mathtt {w}}_{z_0^\prime }\! \le \! z \) and \(z^\prime _0 \le z_0\). Thus, applying (122) to \((z_{0}^{\prime }, z_{1}, z)\), we get that \( w_{z_1} ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta ) \le w_{z+\eta } (X^\mathtt {w}\! , \eta ) + w_{z^\prime _0} (X^{\mathtt {b}, \mathtt {w}}\! , \eta ) \le w_{z+\eta } (X^\mathtt {w}\! , \eta ) + w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , \eta )\) for all \(z_{1}<\tau \). We then extend this to \(z_{1}\le \tau \) by using a basic property of the càdlàg modulus of continuity: \(\lim _{z_1 \rightarrow \tau -} w_{z_1} ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta ) = w_{\tau } ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta )\). Thus we have proved for all \(z_1 \in [0, \tau ]\),

$$\begin{aligned} w_{z_1} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big ) \le w_{z+\eta } \big (X^\mathtt {w}\! , \eta \big ) + w_{z_0} \big (X^{\mathtt {b}, \mathtt {w}}\! , \eta \big ) \; , \end{aligned}$$
(128)

which implies (123) when \(z_1\le \tau \).

\(\bullet \) We next assume that \(z_1\! >\! \tau \). Observe that \(\Delta (X^{\mathtt {b}, \mathtt {w}}\! \circ \! \Lambda ^{\mathtt {b}, \mathtt {w}})(\tau ) = \Delta X^{\mathtt {b} , \mathtt {w}} (T^*_{\mathtt {w}})\). There are two subcases to consider.

\(\circ \):

We first assume that \(z_1\! >\! \tau \) and that \(\Delta X^{\mathtt {b}, \mathtt {w}} (T^*_{\mathtt {w}}) \le w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\). As an easy consequence of (120) and of the definition (121) of the càdlàg modulus of continuity, we get \(w_{z_1} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big ) \le w_{\tau } \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big )+ \Delta (X^{\mathtt {b}, \mathtt {w}}\! \circ \! \Lambda ^{\mathtt {b}, \mathtt {w}})(\tau ) +\mathtt {osc} \big (X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}, [\tau , z_1) \big )\). Since \(X^{\mathtt {b}, \mathtt {w}} \circ \Lambda ^{\mathtt {b}, \mathtt {w}}\) is constant on \([\tau , \infty )\), we get \(w_{z_1} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big ) \le w_{\tau } \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}\, , \eta \big ) + w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\), which implies (123) thanks to (128) and since \(w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , \eta ) \le w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\).

\(\circ \):

Now assume that \(\Delta X^{\mathtt {b}, \mathtt {w}} (T^*_{\mathtt {w}}) \! > \! w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\). Then there exists a subdivision \(t_0 = 0 \!< \! t_1 \!< \! \cdots \! < \! t_r = z_0\) such that \(\min _{1\le i\le r-1} (t_i-t_{i-1}) \! \ge \! 2\eta \) and such that

$$\begin{aligned}\max _{1\le i\le r} \mathtt {osc} (X^{\mathtt {b}, \mathtt {w}}, [t_{i-1}, t_i)) \! < \! (2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )) \wedge \Delta X^{\mathtt {b}, \mathtt {w}} (T^*_{\mathtt {w}}),\end{aligned}$$

which, combined with the assumption \(T^*_{\mathtt {w}}\! >\! 2\eta \), implies that there exists \(i \in \{ 1, \ldots , r\! -\! 1\}\) such that \(t_i = T^*_{\mathtt {w}}\). Thus, \( \mathtt {osc} (X^{\mathtt {b}, \mathtt {w}}, [t_{i-1}, T^*_{\mathtt {w}})) \! <\! 2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\). By (124) applied to \(a = t_{i-1}\) and all \(b\! < \! T_{\mathtt {w}}^*\), we get \(\mathtt {osc} (X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}, [\theta ^{\mathtt {b}, \mathtt {w}}_{t_{i-1}} ,\tau ) )\! <\! 2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\). Recall that \(\tau \! -\! \theta ^{\mathtt {b}, \mathtt {w}}_{t_{i-1}}\! \ge \! T^*_{\mathtt {w}} \! -\! t_{i-1} \! \ge \! 2\eta \). Consequently, there is \(z_1^\prime \in (\theta ^{\mathtt {b}, \mathtt {w}}_{t_{i-1}} ,\tau \! -\! \eta )\) such that

$$\begin{aligned} \Delta (X^{\mathtt {b}, \mathtt {w}}\! \circ \! \Lambda ^{\mathtt {b}, \mathtt {w}})(z_1^\prime ) = 0 \quad \text {and} \quad \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}}, [z_1^\prime ,\tau ) \big ) \! <\! 2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta ) \; . \end{aligned}$$
(129)

Let \(s_0\! = 0 \!<\! s_1 \!< \! \cdots \! < s_r\! =\! z^\prime _1 \) be such that \(\min _{1\le i \le r-1} (s_i \! -\! s_{i-1})\! \ge \! \eta \). We define the subdivision \((s^\prime _i)_{0\le i\le r+1}\) of \([0, z_1]\) by setting \(s^\prime _i = s_i\) for all \(i \in \{ 0, \ldots , r\! -\! 1\}\) and \(s_r^\prime = \tau \), \(s^\prime _{r+1} = z_1\). Clearly, \(\min _{1\le i \le r} (s^\prime _i \! -\! s^\prime _{i-1}) \! \ge \! \eta \) since \(\tau \! -\! z^\prime _1 \! >\! \eta \). Note that \(\mathtt {osc} ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , [\tau , z_1) ) = 0\). On the other hand, since \( \Delta (X^{\mathtt {b}, \mathtt {w}}\! \circ \! \Lambda ^{\mathtt {b}, \mathtt {w}})(z_1^\prime ) = 0\), (120) and (129) imply that

$$\begin{aligned} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , [s^\prime _{r-1} , \tau ) \big )\le & {} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , [s_{r-1} , z^\prime _1) \big )+ \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b} , \mathtt {w}}}, [z^\prime _1, \tau ) \big ) \\< & {} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b} , \mathtt {w}}}, [s_{r-1} , z^\prime _1) \big ) + 2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\; . \end{aligned}$$

Putting all these together, we obtain

$$\begin{aligned} w_{z_1} ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b} , \mathtt {w}}}, \eta )\le & {} \max _{1\le i \le r+1} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , [s^\prime _{i-1} , s^\prime _i) \big ) \le \max _{1\le i \le r} \mathtt {osc} \big ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b}, \mathtt {w}}} , [s_{i-1} , s_i) \big ) + 2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta ). \end{aligned}$$

Since \((s_i)\) is arbitrary, we get \( w_{z_1} ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b} , \mathtt {w}}}, \eta ) \le w_{z^\prime _1} ( X^{\mathtt {b}, \mathtt {w}}_{\Lambda ^{\mathtt {b} , \mathtt {w}}}, \eta ) + 2 w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta ) \) and we obtain (123) thanks to (128) and the fact that \(w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , \eta ) \le w_{z_0} (X^{\mathtt {b}, \mathtt {w}}\! , 2\eta )\). This completes the proof of (ii).

The proof of (iii) is similar and simpler. Recall from (111) that \( \theta ^{\mathtt {r} , \mathtt {w}}_{t} = \inf \{ s \in [0, \infty ) \! : \! \Lambda ^{\! \mathtt {r} , \mathtt {w}}_{s}\! >\! t \}\). Let \(b\! >\! a\). Recall from Lemma 3.3 that \(\theta ^{\mathtt {r} , \mathtt {w}}_{b-}\! -\! \theta ^{\mathtt {r} , \mathtt {w}}_{a} \! \ge \! b\! -\! a \) and observe that \(\{\Lambda ^{\mathtt {r}, \mathtt {w}}_t ; t \in [\theta ^{\mathtt {r}, \mathtt {w}}_{a} , \theta ^{\mathtt {r}, \mathtt {w}}_{b-} ) \} = [a, b)\). Thus, \(\mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}} ,[\theta ^{\mathtt {r}, \mathtt {w}}_{a} , \theta ^{\mathtt {r}, \mathtt {w}}_{b-}) \, \big ) = \mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}} , [a, b) \, \big ) \). Suppose next that \(\Delta \theta ^{\mathtt {r}, \mathtt {w}}_{b} \! \! >\! 0\). Then, \(\{\Lambda ^{\mathtt {r}, \mathtt {w}}_t ; t \in [\theta ^{\mathtt {r}, \mathtt {w}}_{a} , \theta ^{\mathtt {r}, \mathtt {w}}_{b} ) \} = [a, b]\) but since \( |\Delta X^{\mathtt {r}, \mathtt {w}}_b| = 0 \) by Lemma 3.3, we get \(\mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}} ,[\theta ^{\mathtt {r}, \mathtt {w}}_{a} , \theta ^{\mathtt {r}, \mathtt {w}}_{b}) \, \big ) = \mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}} , [a, b] \, \big ) = \mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}} , [a, b) \, \big ) \). Thus, we have proved that for all \(b\! >\! a\),

$$\begin{aligned}\mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}} ,[\theta ^{\mathtt {r}, \mathtt {w}}_{a} , \theta ^{\mathtt {r}, \mathtt {w}}_{b-}) \, \big ) = \mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}}_{\Lambda ^{\mathtt {r}, \mathtt {w}}} ,[\theta ^{\mathtt {r}, \mathtt {w}}_{a} , \theta ^{\mathtt {r}, \mathtt {w}}_{b}) \, \big ) = \mathtt {osc} \big ( X^{\mathtt {r}, \mathtt {w}} , [a, b) \, \big ) \; .\end{aligned}$$

To complete the proof of (iii) we then argue as in the proof of (122). \(\square \)

4 Previous results on the continuous setting

This section recalls the construction of the continuum graph from [17]. In more detail, in Sect. 4.1 we recall the construction of Lévy trees, which constitute the limits of Galton–Watson trees, from a spectrally positive Lévy process; we also briefly explain how to extend this construction to the case where the Lévy process has a positive drift. In Sect. 4.2, we introduce the analogues of the blue and red processes in the continuous setting, based on which we define a limit height process for the graph.

4.1 Preliminary results on spectrally positive Lévy processes and their height process

In this section we briefly recall the known results that we need on the analogues \((X, H)\), in the continuous setting, of the processes \((X^\mathtt {w}, H^\mathtt {w})\) encoding the Markovian queue. More precisely, we fix \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty ) \), \(\kappa \in (0, \infty ) \), \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\) and we set, for \(\lambda \in [0, \infty )\),

$$\begin{aligned}\psi (\lambda ) = \alpha \lambda +\frac{_{_1}}{^{^2}} \beta \lambda ^2 + \! \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j}\! -\! 1 + \lambda c_j \big ). \end{aligned}$$

Let \((X_t)_{t\in [0, \infty )}\) be a spectrally positive Lévy process with initial state \(X_0 = 0\) and with Laplace exponent \(\psi \): namely, \( \log {\mathbf {E}}[ \exp ( - \lambda X_t )] = t\psi (\lambda ) \), for all \(t, \lambda \in [0, \infty )\). The Lévy measure of X is \(\pi = \sum _{j\ge 1} \kappa c_j \delta _{c_j}\), \(\beta \) is its Brownian parameter and \(\alpha \) is its drift.

First, note that these cases include the discrete processes \(X^{\mathtt {w}}\) by taking \({\mathbf {c}} = \mathtt {w} \in {\ell }^{_{\, \downarrow }}_{^{\! f}}\), \(\kappa = 1/\sigma _1 (\mathtt {w})\), \(\beta = 0\) and \(\alpha = 1\! -\! \frac{_{\sigma _2 (\mathtt {w})}}{^{\sigma _1 (\mathtt {w})}}\). However, in the sequel we shall focus on the cases where X has infinite variation sample paths, which, by standard results on Lévy processes, is equivalent to the following condition: \(\beta \! > \! 0\) or \(\sigma _2 ({\mathbf {c}}) = \infty \) (equivalently, \(\int _{(0, \infty )}\! r\, \pi (dr) = \infty \)). If \(\alpha \! \ge \! 0\), then a.s. \(\liminf _{t\rightarrow \infty } X_t = -\infty \) and if \(\alpha \! < \! 0\), then a.s. \(\lim _{t\rightarrow \infty } X_t = \infty \). By analogy with the discrete setting, we refer to the following cases as

$$\begin{aligned} \text {the supercritical cases if} \; \alpha \! < \! 0, \; \, \text {the critical cases if} \; \alpha = 0, \; \, \text {the subcritical cases if} \; \alpha \! >\! 0. \end{aligned}$$
(130)

We next introduce the process \(\gamma \) defined for \(x \in [0, \infty )\) by

$$\begin{aligned} \gamma _x = \inf \{ s \in [0, \infty ): X_s \! < \! -x \} \; . \end{aligned}$$
(131)

with the convention that \(\inf \emptyset = \infty \). For all \(t \in [0, \infty )\), we also set

$$\begin{aligned} I_t = \inf _{s \in [0, t]} X_s \quad \text {and} \quad I_\infty = \lim _{t\rightarrow \infty } \! I_t . \end{aligned}$$
(132)

Note that \(I_\infty \) is a.s. finite in supercritical cases and a.s. infinite in critical or subcritical cases. Observe that \(\gamma _x\! < \! \infty \) if and only if \(x\! < \! \! -I_\infty \). Standard results on spectrally positive Lévy processes (see e.g. Bertoin’s book [6] Ch. VII) assert that \((\gamma _x)_{x\in [0, \infty )}\) is a subordinator (a killed subordinator in supercritical cases) whose Laplace exponent is given for all \(\lambda \in [0, \infty )\) by

$$\begin{aligned} {\mathbf {E}}\big [ e^{-\lambda \gamma _x } \big ]= e^{-x\psi ^{-1}(\lambda ) } \quad \text {where} \quad \psi ^{-1} (\lambda ) = \inf \big \{ u \in [0, \infty ) : \psi (u) \! >\! \lambda \big \}. \end{aligned}$$
(133)

We set \(\varrho = \psi ^{-1}(0)\), which is the largest root of \(\psi \). Note that \(\varrho \! > \! 0 \) if and only if \(\alpha \! < \! 0\). The following elementary lemma gathers basic properties of X that are used later in the proofs.
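
To make these definitions concrete, here is a minimal numerical sketch in Python of the Laplace exponent \(\psi \) of (9), of its right-continuous inverse \(\psi ^{-1}\) of (133) and of the root \(\varrho \); the parameters, the truncation of \({\mathbf {c}}\) and the function names are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

# Illustrative parameters (not from the paper): alpha < 0 gives a supercritical case,
# so that the largest root rho of psi is positive; c stands for a truncation of (c_j).
alpha, beta, kappa = -0.5, 0.2, 1.0
c = np.array([0.5, 0.3, 0.2, 0.1])

def psi(lam):
    """Laplace exponent (9): alpha*lam + beta*lam^2/2 + sum_j kappa*c_j*(exp(-lam*c_j) - 1 + lam*c_j)."""
    return alpha * lam + 0.5 * beta * lam ** 2 \
        + np.sum(kappa * c * (np.exp(-lam * c) - 1.0 + lam * c))

def psi_inv(x, hi=1e6, tol=1e-10):
    """Right-continuous inverse of (133): inf{u >= 0 : psi(u) > x}, computed by bisection
    (psi is convex with psi(0) = 0, so {psi > x} is a half-line for x >= 0)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if psi(mid) > x else (mid, hi)
    return hi

rho = psi_inv(0.0)          # largest root of psi; positive here since alpha < 0
print(rho, psi(rho))        # psi(rho) is numerically close to 0
```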

Lemma 4.1

Let X be a spectrally positive Lévy process with Laplace exponent \(\psi \) given by (9) and with initial value \(X_0 = 0\). Assume that there is \(\lambda \in (0, \infty )\) such that \(\psi (\lambda ) \! >\! 0\). Let \(\psi ^{-1}\) be as in (133) and recall that \(\varrho = \psi ^{-1} (0)\) is the largest root of \(\psi \). Let \({\overline{X}}\) stand for a spectrally positive Lévy process with Laplace exponent \(\psi (\varrho + \cdot )\) and with initial value 0. Then, the following statements hold true:

  1. (i)

    A.s. \(\liminf _{t\rightarrow \infty } {\overline{X}}_t = -\infty \). Moreover, for all \(t \in [0, \infty )\) and for any nonnegative measurable functional \(F \! : \! {\mathbf {D}}([0, \infty ), {\mathbb {R}}) \! \rightarrow \! {\mathbb {R}}\),

    $$\begin{aligned} {\mathbf {E}}\big [ F( X_{\cdot \wedge t} ) ] \! =\! {\mathbf {E}}\big [ \exp (\varrho {\overline{X}}_t ) \, F( {\overline{X}}_{\cdot \wedge t} )\big ] . \end{aligned}$$
    (134)
  2. (ii)

    The càdlàg process \(x \in [0, \infty )\! \mapsto \! \gamma _x ({\overline{X}})\! := \! \inf \{ s \in [0, \infty )\! : \! {\overline{X}}_s \! < \! -x \}\) is a (conservative) subordinator with Laplace exponent \(\psi ^{-1} (\cdot ) \! -\! \varrho \).

  3. (iii)

    For all \(x \in [0, \infty )\), we set

    $$\begin{aligned} {\overline{\gamma }}_{x} = \gamma _x \quad \text {if }x\! <\! -I_\infty \quad \text {and} \quad {\overline{\gamma }}_{x} = \gamma ((-I_\infty ) -) \quad \text {if }x\! \ge \! -I_\infty . \end{aligned}$$
    (135)

    Let \({\mathcal {E}}\) be an exponentially distributed r.v. with parameter \(\varrho \) that is independent from \({\overline{X}}\) (with the convention that a.s. \({\mathcal {E}}= \infty \) if \(\varrho = 0\)). Then,

    $$\begin{aligned} \big ( ({\overline{\gamma }}_{x})_{x\in [0, \infty )} \, , -I_\infty \big ) \overset{\text {(law)}}{=} \big ( ( \gamma _{x\wedge {\mathcal {E}}} ({\overline{X}}))_{x\in [0, \infty )}\, , {\mathcal {E}}\big ) \; . \end{aligned}$$
    (136)
  4. (iv)

    Let \(({\mathscr {G}}_{ x})_{x\in [0, \infty )}\) be a right-continuous filtration such that for all \(x,y \in [0, \infty )\), \(\gamma _x\) is \({\mathscr {G}}_{ x}\)-measurable and \(\gamma _{x+y}\! -\! \gamma _x\) is independent of \({\mathscr {G}}_{ x}\). Let T be a \(({\mathscr {G}}_{x})\)-stopping time. Then, for all \(x, \varepsilon \in (0, \infty )\),

    $$\begin{aligned} {\mathbf {P}}\big ( {\overline{\gamma }}_{x+T} - {\overline{\gamma }}_{T}> \varepsilon \, ; \, T \! < \! \infty \big ) \le {\mathbf {P}}\big ( \gamma _x >\varepsilon \big ) \le \frac{1-e^{-x\psi ^{-1} (1/\varepsilon )}}{1-e^{-1}} \; . \end{aligned}$$
    (137)

Proof

The assertions in (i), (ii) and (iii) are (easy consequences of) standard results that can be found e.g. in Bertoin’s book [6] Chapter VII. We only need to prove (iv). To that end, first note that the second inequality in (137) follows from the Markov-type bound \({\mathbf {P}}(\gamma _x \!>\! \varepsilon ) \le {\mathbf {E}}\big [ 1\! -\! e^{-\gamma _x /\varepsilon }\big ]/(1\! -\! e^{-1})\) combined with (133) applied with \(\lambda = 1/\varepsilon \). Next, note that in the critical or subcritical cases, where \({\overline{\gamma }} = \gamma \), the first inequality in (137) is a straightforward consequence of the fact that \(\gamma \) is a subordinator. Therefore, we now assume that \(\varrho \! >\! 0\). Let \(\gamma ^* \) be a copy of \(\gamma \) that is independent of \({\mathscr {G}}_{ \infty }\). We set \(\gamma ^\prime = \gamma _{\, \cdot + T} - \gamma _T\) if \(T\! <\! \infty \) and \(\gamma _T \! < \! \infty \), and \(\gamma ^\prime = \gamma ^*\) otherwise. Then, \(\gamma ^\prime \) is independent of \({\mathscr {G}}_T\) and it is distributed as \(\gamma \). We next set \({\mathcal {E}}^\prime = \sup \{ x \in (0, \infty ) \! : \! \gamma ^\prime _x \! < \! \infty \}\); we also define \({\overline{\gamma }}^\prime \) by setting \({\overline{\gamma }}^\prime _x = \gamma ^\prime _x\) if \(x\! < \! {\mathcal {E}}^\prime \) and \({\overline{\gamma }}^\prime _x = \gamma ^\prime ({\mathcal {E}}^\prime -)\) if \(x\! \ge \! {\mathcal {E}}^\prime \). Thus,

$$\begin{aligned} {\mathbf {P}}({\overline{\gamma }}_{x+T}\! -\! {\overline{\gamma }}_{T}> \varepsilon \, ; \, T \!< \! \infty )= & {} {\mathbf {P}}( {\overline{\gamma }}^\prime _{x}> \varepsilon ; \gamma _T \!< \! \infty ; T \!< \! \infty ) = {\mathbf {P}}( {\overline{\gamma }}^\prime _{x} > \varepsilon ) {\mathbf {P}}(\gamma _T \!< \! \infty ; T \! < \! \infty ) \; . \end{aligned}$$

Then observe that \({\mathbf {P}}( {\overline{\gamma }}^\prime _{x} \!>\! \varepsilon ) \le {\mathbf {P}}(\gamma ^\prime _x\!>\! \varepsilon ) \! = {\! {\mathbf {P}}(\gamma _x \! >\! \varepsilon )}\), which completes the proof of (137). \(\square \)

Height process of X We next define the analogue of \(H^\mathtt {w}\). To that end, let us recall that \(\psi \) further satisfies Grey’s condition (10). In particular, note that (10) implies that either \(\beta \! >\! 0\) or \(\sigma _2 ({\mathbf {c}}) = \infty \), so that (10) ensures that X has infinite variation sample paths. Le Gall & Le Jan [31] (see also Le Gall & D. [19]) prove that there exists a continuous process \(H = (H_t)_{t\in [0, \infty )}\) such that the following limit holds true for all \(t \in [0, \infty )\) in probability:

$$\begin{aligned} H_t = \lim _{\varepsilon \rightarrow 0} \frac{1}{\varepsilon } \! \int _0^{t} \! \mathbf{1}_{\{ X_s - \inf _{r\in [s, t]} X_r \le \varepsilon \}} \, ds \; . \end{aligned}$$
(138)

Note that (138) is a local time version of (114). We refer to H as the height process of X.
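As an aside, the approximation (138) can be illustrated on a path sampled on a regular grid. The following Python sketch (the function name, the grid convention and the parameter eps are our own) is a naive discretisation and is not an exact simulation of H, which in the infinite-variation case is a genuine local-time limit.

```python
import numpy as np

def height_approx(X, dt, t_idx, eps):
    """Naive discretisation of (138): (1/eps) * Leb{ s <= t : X_s - inf_{[s,t]} X <= eps },
    where X is a path sampled on a grid of mesh dt and t = t_idx * dt."""
    seg = X[: t_idx + 1]
    # inf_{r in [s, t]} X_r for every grid point s <= t, computed backwards from t
    future_inf = np.minimum.accumulate(seg[::-1])[::-1]
    return dt * np.count_nonzero(seg - future_inf <= eps) / eps
```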

Remark 4.1

Let us mention that in Le Gall & Le Jan [31] and Le Gall & D. [19], the height process H is introduced only for critical and subcritical spectrally positive processes. However, it easily extends to supercritical cases thanks to (134). \(\square \)

We next recall here that the excursions of X above its running infimum process I are the same as the excursions of H above 0. More specifically, \(X-I\) and H have the same set of zeros:

$$\begin{aligned} {\mathscr {Z}}:= \{ t\! \in \! {\mathbb {R}}_+ : H_t \! =\! 0 \}= \{ t \! \in \! {\mathbb {R}}_+ : X_t \! =\! I_t \} \end{aligned}$$
(139)

(see Le Gall & D. [19] Chapter 1). We also recall that since \(-I\) is a local time for \(X\! -\! I\) at 0, the topological support of the Stieltjes measure \(d(-I)\) is \({\mathscr {Z}}\). Namely,

$$\begin{aligned} {\mathbf {P}}\text {-a.s. for all }s, t \in [0, \infty )\text { such that }s\! < \! t, \quad \Big ( (s, t) \cap {\mathscr {Z}}\ne \emptyset \Big ) \Longleftrightarrow \Big ( I_s \! > \! I_t \Big ) \end{aligned}$$
(140)

We shall also recall here the following result:

$$\begin{aligned} \forall x, a \in (0, \infty ), \quad {\mathbf {P}}\Big ( \sup _{\quad t\in [0, \gamma _x ]} \, H_t\le a \Big ) = e^{-x v(a)} \quad \text {where} \quad \int _{v(a)}^\infty \! \frac{d\lambda }{\psi (\lambda )}= a \; . \end{aligned}$$
(141)

Here, \(\gamma _x\) is given by (131) and we see that the integral equation completely determines the function \(v \! :\! (0, \infty ) \! \rightarrow \! (\varrho , \infty )\), which is bijective, decreasing and \(C^\infty \). In the critical and subcritical cases, this result is a consequence of the excursion theory for H and of Corollary 1.4.2 in Le Gall & D. [19], p. 41. It remains true in the supercritical cases thanks to (134): we leave the details to the reader.
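Numerically, (141) is easy to exploit: the function v can be approximated by solving the integral equation with a bisection. The following sketch reuses psi and rho from the sketch of the beginning of this subsection; the cutoff lam_max and the log-spaced grid are our own illustrative choices (the tail beyond the cutoff is negligible there because \(\psi \) grows at least quadratically when \(\beta \! >\! 0\)).

```python
import numpy as np

def G(v, lam_max=1e6, n=4000):
    """Approximate int_v^infty dlambda / psi(lambda) on a log-spaced grid
    (psi and rho are taken from the earlier sketch)."""
    lam = np.geomspace(v, lam_max, n)
    return np.trapz(np.array([1.0 / psi(l) for l in lam]), lam)

def v_of_a(a, tol=1e-6):
    """Solve G(v) = a for v in (rho, infinity) by bisection; G is decreasing in v."""
    lo, hi = rho + 1e-6, 1e5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G(mid) > a else (lo, mid)
    return 0.5 * (lo + hi)

# P( sup_{[0, gamma_x]} H <= a ) of (141) is then approximated by np.exp(-x * v_of_a(a)).
```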

4.2 The red and blue processes in the continuous setting

In this section, we give the precise definition of the analogues in the continuous setting of the processes \(X^{\mathtt {b}, \mathtt {w}}, X^{\mathtt {r}, \mathtt {w}}, Y^{ \mathtt {w}}, A^{ \mathtt {w}}, \theta ^{\mathtt {b}, \mathtt {w}}\), etc. that have been introduced in [17]. Let us start with some notation and conventions.

Let \(({\mathscr {F}}_t)_{t\in [0, \infty )}\) be a filtration on \((\Omega , {\mathscr {F}})\) that will be specified later on. A process \((Z_t)_{ t\in [0, \infty )}\) is said to be an \(({\mathscr {F}}_t)\)-Lévy process with initial value 0 if a.s. Z is càdlàg, \(Z_0 = 0\), and if for any a.s. finite \(({\mathscr {F}}_t)\)-stopping time T, the process \(Z_{T+ \, \cdot }\! -\! Z_{T}\) is independent of \({\mathscr {F}}_{T}\) and has the same law as Z.

Let \((M_j(\cdot ))_{j\ge 1}\) be a sequence of càdlàg \(({\mathscr {F}}_t)\)-martingales that are \(L^2\)-summable and orthogonal: namely, for all \(t \in [0, \infty )\), \(\sum _{j\ge 1} {\mathbf {E}}\big [ M_j(t)^2\big ] \! < \! \infty \) and \({\mathbf {E}}[M_j (t)M_k(t)] = 0\) if \(k\! >\! j\). By Doob’s inequality, we have \({\mathbf {E}}\big [ \sup _{s\in [0, t]} \big (\sum _{j< l\le k} M_l (s) \big )^2 \big ] \le 4 \sum _{l>j} {\mathbf {E}}[M_l (t)^2] \), for all \(k\! \ge \! j\! \ge \! 1\) and all \(t \in [0, \infty )\). It follows that there is a unique càdlàg \(({\mathscr {F}}_t)\)-martingale M such that for all \(t \in [0, \infty )\), \({\mathbf {E}}\big [ \sup _{s\in [0, t]} \big | M (s) \! -\! \sum _{1\le k\le j} M_k (s) \big |^2 \big ] \rightarrow 0\), as \(j\rightarrow \infty \). We denote M by \(\sum _{j\ge 1}^{_{\perp }} M_j \).

Blue processes We fix the parameters \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty ) \), \(\kappa \in (0, \infty ) \), \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\). Let \((B_t)_{t \in [0, \infty )}\), \((N_j (t))_{t\in [0, \infty )}\), \(j\! \ge \! 1\) be processes that satisfy the following.

  • \((b_1)\) B is an \(({\mathscr {F}}_t)\)-real valued standard Brownian motion.

  • \((b_2)\) For all \(j\! \ge \! 1\), \(N_j \) is an \(({\mathscr {F}}_t)\)-homogeneous Poisson process with jump-rate \(\kappa c_j\).

  • \((b_3)\) The processes B, \(N_j\), \(j\! \ge \! 1\), are independent.

The blue Lévy process is then defined by, for \(t \in [0, \infty )\),

$$\begin{aligned} X^{\mathtt {b}}_t = -\alpha t + \sqrt{\beta } B_t + \sum _{j\ge 1} \!\!\! \,^{\perp }\, c_j \big ( N_j (t) \! -\! c_j \kappa t \big ) . \end{aligned}$$
(142)

Clearly \(X^{\mathtt {b}}\) is an \(({\mathscr {F}}_t)\)-spectrally positive Lévy process with initial value 0 and Laplace exponent \(\psi \) as defined in (9). We next introduce the analogues of the processes \(A^\mathtt {w}\) and \(Y^\mathtt {w}\! \) in (100). To that end, note that \({\mathbf {E}}[ c_j (N_j (t) \! -\! 1)_+ ] = c_j \big ( e^{- c_j \kappa t}\! -\! 1 + c_j \kappa t \big )\,\le \frac{_1}{^2} (\kappa t)^2 c^3_j \). So it makes sense to define, for \(t \in [0, \infty )\),

$$\begin{aligned} A_t = \frac{_{_1}}{^{^2}} \kappa \beta t^2 + \sum _{j\ge 1} c_j \big ( N_j (t) \! -\! 1 \big )_+ \quad \text {and} \quad Y_t = X^{\mathtt {b}}_t \! -\! A_t . \end{aligned}$$
(143)

To view Y as in (13), set \(E_j = \inf \{ t \in [0, \infty )\! :\! N_j (t) = 1 \}\); note that \( c_j (N_j (t) \! -\! c_j \kappa t)\! -\! c_j (N_j(t)\! -\! 1)_+ = c_j (\mathbf{1}_{\{ E_j \le t \}} \! -\! c_j \kappa t)\) and check that \( c_j (\mathbf{1}_{\{ E_j \le t \}} \! -\! c_j \kappa t) = M^\prime _j (t) \! -\! \kappa c^2_j (t\! -\! E_j)_+\), where \(M^\prime _j\) is a centered \(({\mathscr {F}}_t)\)-martingale such that \({\mathbf {E}}[M^\prime _j (t)^2] = c_j^2 (1\! -\! e^{-c_j \kappa t}) \le \kappa t c_j^3 \). Since \({\mathbf {E}}[ \kappa c^2_j (t\! -\! E_j)_+] \le \kappa t c_j^2 (1\! -\! e^{-\kappa c_j t}) \le \kappa ^2 t^{2} c_j^3\), it makes sense to write for all \(t \in [0, \infty )\),

$$\begin{aligned} Y_t= & {} - \alpha t \! -\! \frac{_1}{^2}\kappa \beta t^2 + \sqrt{\beta } B_t + \sum _{j\ge 1} \!\!\! \,^{\perp } c_j \big ( \mathbf{1}_{\{ E_j \le t \}} \! -\! \kappa c_j (t\! \wedge E_j)\big ) - \!\! \sum _{j\ge 1} \kappa c^2_j (t\! -\! E_j)_+ \nonumber \\= & {} -\alpha t \! -\! \frac{_1}{^2}\kappa \beta t^2 + \sqrt{\beta } B_t + \sum _{j\ge 1} \! c_j (\mathbf{1}_{\{ E_j \le t \}} \! -\! c_j \kappa t). \end{aligned}$$
(144)

Namely the jump-times of Y are the \(E_j\) and \(\Delta Y_{E_j} = c_j\).
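Before turning to the red process, let us mention that the triple \((X^{\mathtt {b}}, A, Y)\) is straightforward to simulate approximately when \({\mathbf {c}}\) is truncated to finitely many terms. The following Python sketch (with illustrative parameters, a fixed horizon T and a crude grid discretisation, all of our own choosing) follows (142) and (143) literally and is only meant as an illustration, not as an exact simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): truncated c, time horizon T, mesh dt.
alpha, beta, kappa = 0.0, 0.5, 1.0
c = np.array([0.6, 0.4, 0.3, 0.2, 0.1])
T, dt = 5.0, 1e-3
t = np.arange(0.0, T, dt)

# (b1)-(b3): a Brownian motion B and independent Poisson processes N_j of rate kappa*c_j,
# discretised on the grid (Poisson increments on each mesh interval).
B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(len(t) - 1))))
N = np.vstack([np.cumsum(rng.poisson(kappa * cj * dt, size=len(t))) for cj in c])

# Blue process (142), the increasing process A and Y = X^b - A as in (143).
Xb = -alpha * t + np.sqrt(beta) * B + (c[:, None] * (N - kappa * c[:, None] * t)).sum(axis=0)
A = 0.5 * kappa * beta * t ** 2 + (c[:, None] * np.maximum(N - 1, 0)).sum(axis=0)
Y = Xb - A
```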

Red and bi-coloured processes We next introduce the red process \(X^{\mathtt {r}}\) that satisfies the following.

  • \((r_1)\) \(X^{\mathtt {r}}\) is an \(({\mathscr {F}}_t)\)-spectrally positive Lévy process starting at 0 and whose Laplace exponent is \(\psi \) in (9).

  • \((r_2)\) \(X^{\mathtt {r}}\) is independent of the processes B and \((N_j)_{j\ge 1}\).

We next introduce, for \(x, t \in [0, \infty )\),

$$\begin{aligned} \gamma ^{\mathtt {r}}_x = \inf \{ s \in [0, \infty ): X^{\mathtt {r}}_s \! < \! -x \} \quad \text {and} \quad \theta _t^{\mathtt {b}}= t + \gamma ^{\mathtt {r}}_{A_t} , \end{aligned}$$
(145)

with the convention that \(\inf \emptyset = \infty \). For all \(t \in [0, \infty )\), we set \(I^\mathtt {r}_t = \inf _{s \in [0, t]} X^\mathtt {r}_s \) and \(I^{\mathtt {r}}_\infty = \lim _{t\rightarrow \infty } \! I^{\mathtt {r}}_t\) that is a.s. finite in supercritical cases and that is a.s. infinite in critical or subcritical cases. Note that \(\gamma ^{\mathtt {r}}_x\! < \! \infty \) if and only if \(x\! < \! \! -I^{\mathtt {r}}_\infty \). Recall that \(\varrho \) stands for the largest root of \(\psi \): in supercritical cases, \(\varrho \! >\! 0\) and \(-I^{\mathtt {r}}_\infty \) is exponentially distributed with parameter \(\varrho \), as recalled in Lemma 4.1 (iii). We next set

$$\begin{aligned} T^* = \sup \{ t \in [0, \infty )\! : \theta ^{\mathtt {b}}_t \!< \infty \}= \sup \{ t \in [0, \infty )\! : A_t \! < \! - I^{\mathtt {r}}_\infty \} \; . \end{aligned}$$
(146)

In critical and subcritical cases, \(T^* = \infty \) and \( \theta ^{\mathtt {b}}\) only takes finite values. In supercritical cases, a.s. \(T^* \! < \! \infty \) and we check that \( \theta ^{\mathtt {b}} (T^*-) \! < \! \infty \). We next define, for \(t \in [0, \infty )\),

$$\begin{aligned} \Lambda ^{\mathtt {b}}_t = \inf \{ s \in [0, \infty ): \theta ^{\mathtt {b}}_s \! > \! t \} \quad \text {and} \quad \Lambda ^{\mathtt {r}}_t = t- \Lambda ^{\mathtt {b}}_t . \end{aligned}$$
(147)

Both processes \(\Lambda ^{\mathtt {b}}\) and \(\Lambda ^{\mathtt {r}}\) are continuous and nondecreasing. In critical and subcritical cases, we also get a.s. \(\lim _{t\rightarrow \infty } \Lambda ^{\mathtt {b}}_t = \infty \) and \(\Lambda ^{\mathtt {b}} (\theta ^{\mathtt {b}}_t) = t\) for all \(t \in [0, \infty )\). However, in supercritical cases, a.s. \( \Lambda ^{\mathtt {b}}_t = T^*\) for all \(t \in [ \theta ^{\mathtt {b}} (T^*-), \infty )\) and a.s. for all \(t \in [0, T^*)\), \(\Lambda ^{\mathtt {b}} (\theta ^{\mathtt {b}}_t ) = t\). In the following theorem we quote from [17] the results about the previous processes that we need; in particular, it contains the analogue of Proposition 3.2.
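Continuing the sketch given at the end of the paragraph on blue processes (it reuses rng, c, kappa, beta, alpha, the grid t, the mesh dt and the process A defined there), one can also discretise the time changes (145) and (147): \(X^{\mathtt {r}}\) is an independent copy of the Lévy process, \(\gamma ^{\mathtt {r}}\) is its first-passage functional and \(\Lambda ^{\mathtt {b}}\) is the right-continuous inverse of \(\theta ^{\mathtt {b}}\). All function names below are our own.

```python
# Independent red copy X^r with the same Laplace exponent, simulated as X^b was.
Br = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(len(t) - 1))))
Nr = np.vstack([np.cumsum(rng.poisson(kappa * cj * dt, size=len(t))) for cj in c])
Xr = -alpha * t + np.sqrt(beta) * Br + (c[:, None] * (Nr - kappa * c[:, None] * t)).sum(axis=0)

def gamma_r(x):
    """First-passage time of X^r strictly below -x, as in (145); np.inf if not reached on the grid."""
    idx = np.nonzero(Xr < -x)[0]
    return idx[0] * dt if idx.size else np.inf

def right_cont_inverse(theta, grid, s):
    """Lambda^b_s = inf{ u : theta_u > s } computed on the grid, as in (147)."""
    idx = np.nonzero(theta > s)[0]
    return grid[idx[0]] if idx.size else grid[-1]

theta_b = t + np.array([gamma_r(a) for a in A])              # (145)
Lambda_b = np.array([right_cont_inverse(theta_b, t, s) for s in t])
Lambda_r = t - Lambda_b                                      # (147)
```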

Theorem 4.2

Let \((\alpha , \beta , \kappa , {\mathbf {c}})\) be as in (8). Assume that either \(\beta >0\) or \(\sigma _2 ({\mathbf {c}}) = \infty \). We keep the previous definition for \(X^{\mathtt {b}}\), A, Y, \(X^{\mathtt {r}}\), \(\theta ^{\mathtt {b}}\), \(T^*\), \(\Lambda ^{\mathtt {b}}\) and \(\Lambda ^{\mathtt {r}}\).

  1. (i)

    A.s. the process A is strictly increasing and the process Y has infinite variation sample paths.

  2. (ii)

    The process \(\Lambda ^{\mathtt {r}}\) is continuous, nondecreasing and a.s. \(\lim _{t\rightarrow \infty } \Lambda ^{\mathtt {r}}_t = \infty \).

  3. (iii)

    For all \(t \in [0, \infty )\), we set

    $$\begin{aligned} X_t = X^{\mathtt {b}}_{\Lambda ^{\mathtt {b}}_t } + X^{\mathtt {r}}_{\Lambda ^{\mathtt {r}}_t }\, ; \end{aligned}$$
    (148)

    the processes X, \(X^\mathrm {b}\) and \(X^\mathrm {r}\) have the same law: namely, X is a spectrally positive Lévy process with initial value 0 and Laplace exponent \(\psi \) as in (9). Moreover,

    $$\begin{aligned} \quad Y_t = X_{\theta ^{\mathtt {b}}_t }\quad \text {a.s. for all }t \in [0, T^*) . \end{aligned}$$
    (149)

Proof

For (i), see Lemma 2.4 in [17]; for (ii) and (iii), see Theorem 2.5 in [17]. \(\square \)

The red and blue processes behave quite similarly to their discrete counterparts (see Lemma 3.3). More precisely, we recall from [17] the properties of the red and blue processes that are used in the proofs.

Lemma 4.3

We keep the assumption of Theorem 4.2. Then, the following statements hold true.

  1. (i)

    \({\mathbf {P}}\)-a.s. for all \(a \in [0, T^*)\), if \(\Delta \theta ^{\mathtt {b}}_a \! =\! 0\), then \(t = \theta ^\mathtt {b}_a\) is the unique \(t \in [0, \infty )\) such that \(\Lambda ^{\mathtt {b}}_t = a\).

  2. (ii)

    \({\mathbf {P}}\)-a.s. for all \(a \in [0, T^*]\), if \(\Delta \theta ^{\mathtt {b}}_a \! >\! 0\), then \(\Delta X \! (\theta ^{\mathtt {b}}_{a-}) = \Delta A_a \) and \(\Delta Y_a = 0\). Moreover, for \(t \in \big ( \theta ^{\mathtt {b}}_{a-},\theta ^{\mathtt {b}}_{a} \big )\),

    $$\begin{aligned}X_t \! \ge \! X_{t-} \! > \! X_{(\theta ^{\mathtt {b}}_{a-})-}\!\! =\! Y_a \quad \text {and if }a<T^*\! ,\text { then }\quad X_{(\theta ^{\mathtt {b}}_{a-})-}\! = X_{\theta ^{\mathtt {b}}_a} .\end{aligned}$$
  3. (iii)

    \({\mathbf {P}}\)-a.s. if \((\Delta X^\mathtt {r} )(\Lambda ^\mathtt {r}_t) \! >\! 0\), then there exists \(a \in [0, T^*]\) such that \(\theta ^\mathtt {b}_{a-} \!< \! t \! < \! \theta ^\mathtt {b}_{a} \).

  4. (iv)

    \({\mathbf {P}}\)-a.s. for all \(b \in [0, \infty )\) such that \(\Delta X^\mathtt {r}_b\! >\! 0\), there is a unique \(t \in [0, \infty )\) such that \(\Lambda ^\mathtt {r}_t = b\).

  5. (v)

    For all \(t \in [0, \infty )\), set \(Q^\mathtt {b}_t = X^{\mathtt {b}}_{\Lambda ^{\mathtt {b}}_t }\) and \(Q^\mathtt {r}_t = X^{\mathtt {r}}_{\Lambda ^{\mathtt {r}}_t }\). Then, a.s. for all \(t \in [0, \infty )\), \(\Delta Q^\mathtt {b}_t \Delta Q^\mathtt {r}_t = 0\).

Proof

For (i) and (ii), see Lemma 5.4 in [17]; for (iii), (iv) and (v), see Lemma 5.5 in [17]. \(\square \)

The excursions of Y above its running infimum Let X be derived from \(X^{\mathtt {b}}\) and \(X^{\mathtt {r}}\) as in (148) and recall the notation \(I_t = \inf _{s\in [0, t]} X_s\) for the running infimum process of X. Thanks to (140), we can say that \(-I\) is a local time for the set of zeros \({\mathscr {Z}} = \{ t \in [0, \infty ) : X_t = I_t \}\). Let Y be defined by (143) and recall the notation \(J_t = \inf _{s\in [0, t]} Y_s\) in (16). The following lemma (recalled from [17]) asserts that \(-J\) is a local time for the set \({\mathscr {Z}}^{\mathtt {b}} = \{ t \in [0, \infty ) : Y_t = J_t \}\) (more precisely, it shows that \({\mathscr {Z}}^{\mathtt {b}}\) is bijectively sent to \({\mathscr {Z}}\) via \(\Lambda ^{\mathtt {b}}\)).

Lemma 4.4

We keep the assumptions of Theorem 4.2. Then, the following holds true.

  1. (i)

    A.s. for all \(t \in [0, \infty )\), \(X_t \! \ge \! Y( \Lambda ^\mathtt {b}_t)\). Then, a.s. for all \(t_1, t_2 \in [0, \infty )\) such that \(\Lambda ^\mathtt {b}_{t_1} \! < \! \Lambda ^\mathtt {b}_{t_2}\), \(\inf _{s\in [t_1, t_2]} X_s = \inf _{a\in [\Lambda ^\mathtt {b} (t_1), \Lambda ^\mathtt {b} (t_2)]} Y_a\). It implies that a.s. for all \(t \in [0, \infty )\), \(I_t = J (\Lambda ^{\! \mathtt {b}}_t)\).

  2. (ii)

    A.s. \(\big \{ t \in [0, \infty ) : X_t \!>\! I_t \big \} = \big \{ t \in [0, \infty ) : Y({\Lambda ^{\! \mathtt {b}}_t}) \! >\! J({\Lambda ^{\! \mathtt {b}}_t}) \big \} \).

  3. (iii)

    A.s. the set \({\mathscr {E}}= \big \{ a \in [0, \infty ) : Y_a \! >\! J_a \big \}\) is open. Moreover, if (lr) is a connected component of \({\mathscr {E}}\), then \(Y_l = Y_r = J_l = J_r\) and for all \(a \in (l, r)\), we get \(J_a = J_l\) and \(Y_{a-}\! \wedge \! Y_a \! >\! J_l \).

  4. (iv)

    Set \( {\mathscr {Z}}^\mathtt {b} = \{ a \in [0, \infty ) \! : \! Y_a = J_a \}\). Then, \({\mathbf {P}}\)-a.s.

    $$\begin{aligned} \text {for all }a, z \in [0, \infty )\text { such that }a\!< \! z, \quad \Big ( {\mathscr {Z}}^\mathtt {b}\! \cap (a, z) \ne \emptyset \Big ) \Longleftrightarrow \Big ( J_z \! < \! J_a \Big ). \end{aligned}$$
    (150)

Proof

See Lemma 5.7 in [17]. \(\square \)

We next recall the following result due to Aldous & Limic [4] (Proposition 14, p. 20) that is used in our proofs.

Proposition 4.5

(Proposition 14 [4]) We keep the assumptions of Theorem 4.2 and the previous notation. Then, the following holds true.

  1. (i)

    For all \(a \in [0, \infty )\), \({\mathbf {P}}(Y_a = J_a) = 0\).

  2. (ii)

    \({\mathbf {P}}\)-a.s. the set \(\{ a \in [0, \infty )\! : \! Y_a\! =\! J_a \}\) contains no isolated points.

  3. (iii)

    Set \(M_a = \max \{ r\! -\! l\, ; \; r\! \ge \! l\! \ge \! a \! : \! (l,r) \text { is an excursion interval of } Y\! -\! J \text { above }0\}\). Then, \(M_a \! \rightarrow \! 0\) in probability as \(a\! \rightarrow \! \infty \).

Proof

The process \((Y_{s/\kappa })_{s\in [0, \infty )}\) is the process \(W^{\kappa ^\prime , -\tau , {\mathbf {c}}}\) in [4], where \(\kappa ^\prime = \beta / \kappa \) and \(\tau \! =\! \alpha / \kappa \) (note that the letter \(\kappa \) plays another role in [4]). Then (i) (resp. (ii) and (iii)) is Proposition 14 [4] (b) (resp. (d) and (c)). \(\square \)

Thanks to Proposition 4.5 (iii), the excursion intervals of \(Y\! -\! J\) above 0 can be listed as follows

$$\begin{aligned} \{ a \in [0, \infty )\! : Y_a\! > \! J_a \}= \bigcup _{k\ge 1} (l_k, r_k) \; . \end{aligned}$$
(151)

where \(\zeta _k = r_k \! -\! l_k \), \(k\! \ge \! 1\), forms a decreasing sequence. Then, as a consequence of Theorem 2 in Aldous & Limic [4], p. 4, we recall the following

Proposition 4.6

(Theorem 2 [4]) We keep the assumptions of Theorem 4.2 and the previous notation. Then, \((\zeta _k)_{k\ge 1}\), which is the ordered sequence of lengths of the excursions of \(Y\! -\! J\) above 0, is distributed as the \((\beta / \kappa , \alpha / \kappa , {\mathbf {c}})\)-multiplicative coalescent (as defined in [4]) taken at time 0. In particular, we get a.s. \(\sum _{k\ge 1} \zeta _k^2 \! < \! \infty \).

Height process of Y We define the analogue of \({\mathcal {H}}^\mathtt {w}\) in the continuous setting thanks to the following theorem that is recalled from various results in [17].

Theorem 4.7

Let \((\alpha , \beta , \kappa , {\mathbf {c}})\) be as in (8) and assume that (10) holds, which implies the assumptions of Theorem 4.2. Let X be derived from \(X^{\mathtt {b}}\) and \(X^{\mathtt {r}}\) by (148). Let H be the height process associated with X as defined by (138) (and by Remark 4.1 in the supercritical cases). Then, there exists a continuous process \(({\mathcal {H}}_t)_{t\in [0, \infty )}\) such that for all \(t \in [0, \infty )\), \({\mathcal {H}}_t\) is a.s. equal to a measurable functional of \((Y_{\cdot \wedge t}, A_{\cdot \wedge t}) \) and such that

$$\begin{aligned} \quad {\mathcal {H}}_t = H_{\theta ^{\mathtt {b}}_t }, \quad \text {a.s. for all }t \in [0, T^*). \end{aligned}$$
(152)

We refer to \({\mathcal {H}}\) as the height process associated with Y.

Proof

See Theorem 2.6 in [17]. \(\square \)

As for H and \(X\! -\! I\), the following lemma (recalled from [17]) asserts that the excursion intervals of \({\mathcal {H}}\) and \(Y\! -\! J\) above 0 are the same.

Lemma 4.8

We keep the same assumptions as in Theorem 4.7. Then, the following holds true.

  1. (i)

    Almost surely for all \(t \in [0, \infty )\), \(H_t \! \ge \! {\mathcal {H}}( \Lambda ^\mathtt {b}_t)\) and a.s. for all \(t_1, t_2 \in [0, \infty )\) such that \(\Lambda ^\mathtt {b}_{t_1} \! < \! \Lambda ^\mathtt {b}_{t_2}\), \(\inf _{s\in [t_1, t_2]} H_s = \inf _{a\in [\Lambda ^\mathtt {b} (t_1), \Lambda ^\mathtt {b} (t_2)]} {\mathcal {H}}_a\).

  2. (ii)

    Almost surely \(\big \{ a \in [0, \infty ) : Y_a \!>\! J_a \big \} \! =\! \big \{ a \in [0, \infty ) : {\mathcal {H}}_a \! >\! 0 \big \}\).

Proof

See Lemma 5.11 in [17]. \(\square \)

5 Convergence of the graphs

In this section, we derive the convergence of the connected components of the graphs provided that the coding processes converge. More precisely, recall the definitions of \(Y^{\mathtt {w}}\) and \(A^\mathtt {w}\) in (100) and that of \({\mathcal {H}}^\mathtt {w}\) in (112). Recall also the definitions of Y and A in (143) and that of \({\mathcal {H}}\) in Theorem 4.7. We prove Theorems 2.4, 2.5 and 2.8 subject to the following proposition, whose proof is postponed to Sect. 6.

Proposition 5.1

Under the assumptions of Theorem 2.4, we have

$$\begin{aligned} {\mathscr {Q}}_n \! :=\! \big ( \frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n \cdot } , \, \frac{_{_1}}{^{^{a_n}}} A^{\mathtt {w}_n}_{b_n \cdot },\, \frac{_{_{a_n}}}{^{^{b_n}}} {\mathcal {H}}^{\mathtt {w}_n}_{b_n \cdot } \big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ( Y, A, {\mathcal {H}}\big )\! =:\! {\mathscr {Q}} \end{aligned}$$
(153)

weakly on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2 \! \times \! {\mathbf {C}}([0, \infty ), {\mathbb {R}})\) equipped with the product-topology.

Proof

See Sect. 6. \(\square \)

5.1 Proof of Theorem 2.4

Subject to Proposition 5.1, in order to complete the proof of Theorem 2.4, it remains to prove the convergence of the sequences of pairs of pinching times \({\varvec{\Pi }}_{\mathtt {w}_n}\) (see (3) and (4) for a definition). This is done by a soft argument involving a coupling.

Recall the definition of \({\varvec{\Pi }}\) in (17) and in (18). By Skorokhod’s representation theorem (but with a slight abuse of notation) we can assume without loss of generality that (153) holds almost surely: namely, a.s. \({\mathscr {Q}}_n \! \rightarrow \! {\mathscr {Q}} \). Then, we couple the \({\varvec{\Pi }}_{\mathtt {w}_n}\) and \({\varvec{\Pi }}\) as follows.

  • Let \({\mathcal {R}}= \sum _{i\in \mathtt {I}} \delta _{(t_i, r_i, u_i)}\) be a Poisson point measure on \([0, \infty )^3\) whose intensity is the Lebesgue measure \(dt\, dr\, du\) on \([0, \infty )^3\). We assume that \({\mathcal {R}}\) is independent of \({\mathscr {Q}}\) and of \(({\mathscr {Q}}_n)_{n\in {\mathbb {N}}}\).

  • We set \(\kappa _n = a_n b_n / \sigma _1 (\mathtt {w}_n)\) and for all \(t \in [0, \infty )\) we set \(\mathtt {Z}^n_t = \frac{_1}{^{a_n}}(Y^{\mathtt {w}_n}_{b_n t} \! -\! J^{\mathtt {w}_n}_{b_n t})\), where we recall that \(J^{\mathtt {w}_n}_{b_nt}\! =\! \inf _{s\in [0, b_n t]} Y^{\mathtt {w}_n}_s\). We then set \(S_{n} \! =\! \{ (t, r, v) \in [0, \infty )^3\! : \! 0 \!< \! r \! < \! \mathtt {Z}^n_t \; \text {and} \; 0 \le v \! \le \kappa _n \} \) and we define \( {\mathcal {P}}_{\! n} = \sum _{i\in \mathtt {I}} \mathbf{1}_{\{ (t_i, r_i, u_i)\in S_n\}} \delta _{(t_i, r_i, u_i)}=: \sum _{1\le p < {\mathbf {p}}_n } \delta _{(t^n_p, r^n_p, v^n_p)}\), where the ordering is such that the finite sequence \((t^n_p)_{1\le p<{\mathbf {p}}_n}\) increases. Note that since \(\mathtt {Z}^n\) is eventually null, \({\mathcal {P}}_{\! n}\) is a finite point process.

  • For all \(t \in [0, \infty )\), for all \(r \in {\mathbb {R}}\) and for all \(z \in {\mathbf {D}}([0, \infty ), {\mathbb {R}})\), we set

    $$\begin{aligned} \tau (z, t,r) \! =\! \inf \big \{ s \in [0, t] : \inf _{u\in [s, t]} z(u) \! >\! r \big \}\; \text {with the convention that }\inf \emptyset = \infty . \end{aligned}$$
    (154)

    Then, we set

    $$\begin{aligned} \frac{_1}{^{b_n}} {\varvec{\Pi }}_{\mathtt {w}_n}= \big ( (s^n_p , t^n_p) \big )_{1\le p<{\mathbf {p}}_n } \; \, \text {where} \quad s^n_p = \tau (\mathtt {Z}^n, t^n_p , r^n_p), \; 1\le p < {\mathbf {p}}_n. \end{aligned}$$
    (155)

Thanks to (3) and (4), we see that given \(Y^{\mathtt {w}_n}\! \), \(\frac{_1}{^{b_n}} {\varvec{\Pi }}_{\mathtt {w}_n}\) has the right law. For convenience, we set \( (s^n_p, t^n_p) = (-1, -1)\), for all \(p\! \ge \! {\mathbf {p}}_n\).

Similarly, we set \(\mathtt {Z}^\infty _t = Y_t \! -\! J_t \), where \(J_t = \inf _{s\in [0, t]} Y_s\) and we also set \(S\! = \{ (t, r, v) \in [0, \infty )^3\! :\! 0 \!< \! r \! < \! \mathtt {Z}^\infty _t \; \text {and} \; 0 \le v \! \le \kappa \} \); we then define \({\mathcal {P}}= \sum _{i\in \mathtt {I}} \mathbf{1}_{\{ (t_i, r_i, u_i)\in S\}} \delta _{(t_i, r_i, u_i)} =: \sum _{p\ge 1 } \delta _{(t_p, r^\prime _p, v_p)}\), where the indexation is such that \((t_p)_{p\ge 1}\) increases. Then, set

$$\begin{aligned} {\varvec{\Pi }}= \big ( ( s_p , t_p) \big )_{p\ge 1 } \; \, \text {where} \quad s_p = \tau (\mathtt {Z}^\infty , t_p , r^\prime _p) \text { for }p \ge 1. \end{aligned}$$
(156)

It is easy to check that \({\varvec{\Pi }}\) has the right law conditional on Y.
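For concreteness, the map \(\tau \) of (154) and the associated pinching pairs are simple to compute on a discretised path. The following Python sketch (the function name and the grid-based convention are our own) mirrors (154)–(156) for a path sampled with mesh dt; it is only an illustration of the construction above.

```python
import numpy as np

def tau(z, t_idx, r, dt):
    """Discrete analogue of (154): inf{ s <= t : inf_{[s,t]} z > r } for a path z
    sampled with mesh dt and t = t_idx * dt; returns np.inf when the set is empty."""
    seg = z[: t_idx + 1]
    future_inf = np.minimum.accumulate(seg[::-1])[::-1]     # inf over [s, t] for each grid point s
    hits = np.nonzero(future_inf > r)[0]
    return np.inf if hits.size == 0 else hits[0] * dt

# Given an accepted atom (t_i, r_i, u_i) of R (that is, r_i < Z_{t_i} and u_i <= kappa),
# the corresponding pinching pair of (155)-(156) is (tau(Z, t_idx_i, r_i, dt), t_idx_i * dt).
```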

First observe that \(\kappa _n \! \rightarrow \! \kappa >0\), by the last point of (21). Next, we prove that \(\mathtt {Z}^n \! \rightarrow \! \mathtt {Z}^\infty \) a.s. in \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\): indeed, since Y has no negative jumps, J is continuous and by Proposition 5.1 and by Lemma B.3 (ii), \( (\frac{_1}{^{a_n}}J^{\mathtt {w}_n}_{b_n t} )_{t\in [0, \infty )} \! \rightarrow \! (J_t)_{t\in [0, \infty )}\) a.s. in \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\). Since J is continuous, Y and J do not share any jump-times, Proposition 5.1 and Lemma B.1 (iii) imply that \((\frac{_1}{^{a_n}}( Y^{\mathtt {w}_n}_{b_n t}, J^{\mathtt {w}_n}_{b_n t}) )_{t\in [0, \infty )} \! \rightarrow \! ((Y_t, J_t))_{t\in [0, \infty )}\) a.s. in \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^2)\), which entails that \(\mathtt {Z}^n \! \rightarrow \! \mathtt {Z}^\infty \) a.s. in \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\).

Let us fix \(a, b , c \in (0, \infty )\) such that

$$\begin{aligned}b \!> \; 2 \!\!\! \sup _{n\in {\mathbb {N}}\cup \{ \infty \}} \sup _{s\in [0, a]} \mathtt {Z}^n_s \quad \text {and} \quad c\! > \; 2 \sup _{n\in {\mathbb {N}}\cup \{ \infty \}} \kappa _n \; . \end{aligned}$$

Here, b is random but it depends only on \({\mathscr {Q}}\) and on the \({\mathscr {Q}}_n\); in particular, it is independent of \({\mathcal {R}}\). We introduce

$$\begin{aligned}\sum _{1\le l \le N}\delta _{(t^*_l, r^*_l, u_l^*)}\! := \! \sum _{i\in \mathtt {I}} \mathbf{1}_{\{ t_i<a \, ; \, r_i<b \, ; \, u_i <c \}}\delta _{(t_i, r_i, u_i)}, \end{aligned}$$

where \((t^*_l)_{1\le l\le N}\) increases. Conditional on \(({\mathscr {Q}}_n)_{n\in {\mathbb {N}}}\), the r.v. N is a Poisson r.v. with mean abc. Note that conditional on N and \(({\mathscr {Q}}_n)_{n\in {\mathbb {N}}}\), the law of the r.v. \((t^*_l, r^*_l, u_l^*)\) is absolutely continuous with respect to Lebesgue measure. Therefore, a.s. for all \(l \! \in \! \{ 1, \ldots , N\}\) (if any), \(\Delta \mathtt {Z}^\infty _{t^*_l} = 0\), \(u^*_l \! \ne \!\kappa \), and \(r^*_l \! \ne \! \mathtt {Z}^\infty _{t^*_l}\), and if \(r^*_l \! < \! \mathtt {Z}^\infty _{t^*_l}\), then we get \(\tau (\mathtt {Z}^\infty , t^*_l , r_l^* -) = \tau (\mathtt {Z}^\infty , t^*_l , r_l^* )\) because by Lemma B.3 (iv), the function \(r \! \mapsto \! \tau ( \mathtt {Z}^\infty , t^*_l , r)\) is right-continuous and it has therefore a countable number of discontinuities. Since \(\Delta \mathtt {Z}^\infty _{t^*_l} = 0\), Lemma B.1 (ii) entails that \(\mathtt {Z}^n_{t^*_l} \! \rightarrow \! \mathtt {Z}^\infty _{t^*_l}\), and for all sufficiently large n, \(u^*_l \! \ne \! \kappa _n\) and \(r^*_l \! \ne \! \mathtt {Z}^n_{t^*_l}\), and if \(r^*_l \! < \! \mathtt {Z}^n_{t^*_l}\), then Lemma B.3 (iv) shows that \(\tau (\mathtt {Z}^n, t^*_l, r^*_l) \! \rightarrow \! \tau ( \mathtt {Z}^\infty , t^*_l, r^*_l)\). This proves that if \(t_p \! <\! a\), then \((s^n_p, t^n_p)\! \rightarrow \! (s_p, t_p)\), since we have \(t^n_p=t_{p}\) for n sufficiently large as a consequence of the above coupling. Since a can be arbitrarily large, we get \(\frac{_1}{^{b_n}}{\varvec{\Pi }}_{\mathtt {w}_n}\! \rightarrow \! {\varvec{\Pi }}\) a.s. in \(({\mathbb {R}}^2)^{{\mathbb {N}}^*}\!\! \) equipped with the product topology. This, combined with Proposition 5.1, completes the proof.

5.2 Proof of Theorem 2.5

From Theorem 2.4, we derive Theorem 2.5, which states the convergence of the excursions of the processes encoding the connected components of the graphs.

More precisely, recall the definition of Y in (143). In Theorem 4.7, recall the existence and the properties of \({\mathcal {H}}\), the height process associated with Y. Recall the notation \(J_t = \inf _{s\in [0,t]} Y_s\), \(t \in [0, \infty )\). Lemma 4.8 (ii) asserts that the excursions of \({\mathcal {H}}\) above 0 and those of \(Y\! -\! J\) above 0 are the same. As recalled in Proposition 4.5, Proposition 14 in Aldous & Limic [4] asserts that these excursions can be indexed in the decreasing order of their lengths. Namely,

$$\begin{aligned} \big \{ t \in [0, \infty ) : {\mathcal {H}}_t>0 \big \}= \big \{ t \in [0, \infty ) : Y_t > J_t \big \}= \bigcup _{k\ge 1} (l_k , r_k) \; , \end{aligned}$$
(157)

where the sequence \(\zeta _k = r_k \! -\! l_k\), \(k\! \ge \! 1\), decreases. Moreover, the sequence \((\zeta _k)_{k\ge 1}\) appears as a version of the multiplicative coalescent at a fixed time: see Theorem 2 in Aldous & Limic [4] (recalled in Proposition 4.6). In particular, it implies that a.s. \(\sum _{k\ge 1} \zeta _k^2 < \infty \). Recall the definition of the excursion processes of \({\mathcal {H}}\) and \(Y\! -\! J\) above 0 in (43): for \(k \! \ge \! 1\) and \(t \in [0, \infty )\), we have

$$\begin{aligned} {\varvec{\mathtt {H}}}_{k}(t)= {\mathcal {H}}_{(l_k + t)\wedge r_k} \quad \text {and} \quad {\varvec{\mathtt {Y}}}_{k}(t)= Y_{(l_k + t)\wedge r_k}- J_{l_k} . \end{aligned}$$
(158)
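
As an aside, extracting such excursion intervals from a discretised path is elementary. The following Python sketch (our own, with a toy input) lists the excursion intervals of a sampled path above its running infimum in decreasing order of their lengths, which mirrors the indexation used in (157) and (160) below.

```python
import numpy as np

def excursions_above_running_inf(Y):
    """Excursion intervals (as index pairs) of the sampled path Y above its running
    infimum J, sorted in decreasing order of their lengths."""
    J = np.minimum.accumulate(Y)
    above = Y > J
    intervals, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(Y)))
    return sorted(intervals, key=lambda lr: lr[1] - lr[0], reverse=True)

# Toy example: two excursions of equal length on a short deterministic path.
print(excursions_above_running_inf(np.array([0.0, 1.0, 0.5, -1.0, -0.5, 0.0, -2.0])))
```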

Next recall the definition of \({\varvec{\Pi }}= \big ( (s_p, t_p)\big )_{p\ge 1}\) introduced in (17) and (18).

Let \(a_n , b_n \! \in \! (0, \infty )\) and \(\mathtt {w}_n \! \in \! {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21) and \(\mathbf {(C1)}\)–\(\mathbf {(C4)}\) as in (29), (30) and (34). Recall the definition of \(Y^{\mathtt {w}_n}\) in (100) and that of the associated height process \({\mathcal {H}}^{\mathtt {w}_n}\) in (112). Recall the definition of \({\varvec{\Pi }}_{\mathtt {w}_n}\) in (3) and (4). For all \(t \in [0, \infty )\), to simplify notation, we introduce the following:

$$\begin{aligned} Y^{_{(n)}}_{t} \! \! :=\! \frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n t}, \; J^{_{(n)}}_{t} \! := \! \inf _{s\in [0, t]} Y^{_{(n)}}_{s}, {\mathcal {H}}^{_{(n)}}_{t}\! := \! \frac{_{_{a_n}}}{^{^{b_n}}} {\mathcal {H}}^{\mathtt {w}_n}_{b_n t} , \; {\varvec{\Pi }}^{(n)}\! := \! \frac{_1}{^{b_n}} {\varvec{\Pi }}_{\mathtt {w}_n} \! =:\! \big ( (s^n_p , t^n_p) \big )_{1\le p<{\mathbf {p}}_{n} }. \end{aligned}$$
(159)

Recall that (see (40))

$$\begin{aligned} \big \{ t \in [0, \infty ) : {\mathcal {H}}^{_{(n)}}_t \!>\! 0 \big \}= \big \{ t \in [0, \infty ) : Y^{_{(n)}}_t\! >\! J^{_{(n)}}_t \big \} = \bigcup _{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} [l^n_k, r^n_k) \end{aligned}$$
(160)

where the indexation is such that the \(\zeta ^n_k\! := \! r^n_k \! -\! l^n_k\) are nonincreasing and such that \(l^n_k \! < \! l^n_{k+1}\) if \(\zeta ^n_k = \zeta ^n_{k+1}\) (within the notation of (40), \(l^n_k = l^{\mathtt {w}_n}_k/b_n\), \(r^n_k = r^{\mathtt {w}_n}_k/b_n\) and \(\zeta ^n_k = \zeta ^{\mathtt {w}_n}_k/b_n\)).

By Skorokhod’s representation theorem (but with a slight abuse of notation) we can assume without loss of generality that (39) in Theorem 2.4 holds \({\mathbf {P}}\)-a.s. We first prove the following lemma.

Lemma 5.2

We keep the previous notation and we assume that (39) in Theorem 2.4 holds \({\mathbf {P}}\)-a.s. Then, there exist indices \(j(n,k) \in \{1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}\), \(n, k\! \ge \! 1\), such that \({\mathbf {P}}\)-a.s. for all \(k\! \ge \! 1\),

$$\begin{aligned} \big ( l_{j(n,k)}^{n} , r_{j(n,k)}^{n}\big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ( l_k, r_k \big ). \end{aligned}$$
(161)

Proof

Fix \(k\! \ge \! 1\) and let \(t_0 \in (l_k, r_k)\); note that \(l_k = \sup \{ t \in [0, t_0]\! :\! {\mathcal {H}}_t \! =\! 0\}\) and \(r_k = \inf \{ t \in [t_0, \infty ) \! :\! {\mathcal {H}}_t = 0 \}\). For all \(n\! \ge \! 1\), set \(\gamma (n) = \sup \{ t \in [0, t_0)\! :\! {\mathcal {H}}^{_{(n)}}_t \! =\! 0\}\) and \(\delta (n) = \inf \{ t \in [t_0, \infty )\! :\! {\mathcal {H}}^{_{(n)}}_t = 0 \}\). Let q and r be such that \(l_k\!< \! q \!< \! t_0 \!< \! r \! < \! r_k\). Since \(\inf _{t\in [q, r]} {\mathcal {H}}_t \! >\! 0\), for all sufficiently large n, we get \(\inf _{t\in [q, r]} {\mathcal {H}}^{_{(n)}}_t \! >\! 0\), which implies that \(\gamma (n) \le q \) and \(r \le \delta (n)\). This easily implies that \(\limsup _{n\rightarrow \infty } \gamma (n) \le l_k\) and \(r_k \le \liminf _{n\rightarrow \infty } \delta (n)\).

Let q and r be such that \(q \! < \! l_k\) and \(r_k \! < \! r\). Since \({\mathcal {H}}_{l_k}\! =\! {\mathcal {H}}_{r_k} = 0\), (150) in Lemma 4.4 (iv) implies that \(J_{q}\!>\! J_{t_0} \! >\! J_{r}\). Since J is continuous, Lemma B.1 (iii) entails that \(J^{(n)} \! \rightarrow \! J\) a.s. in \({\mathbf {C}}([0, \infty ) , {\mathbb {R}})\). Thus, for all sufficiently large n, \(J^{_{(n)}}_{q}\!>\! J^{_{(n)}}_{t_0} \! >\! J^{_{(n)}}_{r}\); by definition, it implies that \(Y^{(n)}\! -\! J^{(n)}\) (and thus \({\mathcal {H}}^{(n)}\)) hits the value 0 between the times q and \(t_0\) and between the times \(t_0\) and r: namely, for all sufficiently large n, \(\gamma (n) \! \ge \! q\) and \(\delta (n) \le r\). This easily entails \(\liminf _{n\rightarrow \infty } \gamma (n) \! \ge \! l_k\) and \(r_k \! \ge \! \limsup _{n\rightarrow \infty } \delta (n)\), and we have proved that \(\lim _{n\rightarrow \infty } \gamma (n) \!=\! l_k\) and \(\lim _{n\rightarrow \infty } \delta (n) = r_k\).

Let \(n_0\! \ge \! 1\) be such that for all \(n\! \ge \! n_0\), \({\mathcal {H}}^{_{(n)}}_{t_0} \! >\! 0\). Then, for all \(n\! \ge \! n_0\), there exists \(j(n,k) \in \{1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}\) such that \(\gamma (n) = l^n_{j(n,k)}\) and \(\delta (n) = r^n_{j(n,k)}\); for all \(n < n_0\), we take for instance \(j(n, k) = 1\). Then, (161) holds true, which completes the proof. \(\square \)

We next recall that Proposition 2.9 (Proposition 7 in Aldous & Limic [4]) asserts that \(\sum _{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} (\zeta ^n_k)^2 \rightarrow \sum _{k\ge 1} (\zeta _k)^2 \) weakly on \([0, \infty )\) as \(n\! \rightarrow \! \infty \). We use this result to prove the following

Lemma 5.3

We keep the previous notation and we work under the assumptions of Theorem 2.4. Then,

$$\begin{aligned} {\mathscr {Q}}^\prime _n \! :=\! \Big ( Y^{(n)} , {\mathcal {H}}^{(n)} , {\varvec{\Pi }}^{(n)}, \!\!\! \!\!\! \!\! \sum _{\quad 1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} \!\!\! \!\!(\zeta ^n_k)^2 \Big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; {\mathscr {Q}}^\prime \! :=\! \Big ( Y, {\mathcal {H}}, {\varvec{\Pi }}, \sum _{k\ge 1} (\zeta _k)^2\Big ) \end{aligned}$$
(162)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\, \times \! {\mathbf {C}}([0, \infty ) , {\mathbb {R}})\! \times \!({\mathbb {R}}^2)^{{\mathbb {N}}^*}\! \!\times \![0, \infty )\), equipped with product topology.

Proof

First note that the laws of the \({\mathscr {Q}}_n^\prime \) are tight: this follows from (39) in Theorem 2.4 combined with the weak convergence \(\sum _{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} (\zeta ^n_k)^2 \! \rightarrow \! \sum _{k\ge 1} (\zeta _k)^2 \). We only need to prove that the law of \({\mathscr {Q}}^\prime \) is the unique limit law. To that end, let \((n(p))_{p\in {\mathbb {N}}}\) be an increasing sequence of integers such that \({\mathscr {Q}}_{n(p)}^\prime \! \rightarrow \! ( Y, {\mathcal {H}}, {\varvec{\Pi }}, Z)\) weakly. It remains to prove that \(Z = \sum _{k\ge 1} (\zeta _k)^2\). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that \({\mathscr {Q}}^\prime _{n(p)} \! \rightarrow \! ( Y, {\mathcal {H}}, {\varvec{\Pi }}, Z)\) holds true \({\mathbf {P}}\)-a.s. Then, by Lemma 5.2, observe that for all \(l\ge \! 1\),

$$\begin{aligned} Z \; \underset{n\rightarrow \infty }{\longleftarrow \!\!\! -\!\!\! -} \quad \sum _{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} (\zeta _k^n)^2 \; \ge \; \sum _{1\le k\le l} (\zeta _{j(n,k)}^n)^2\quad \underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \; \sum _{1\le k\le l} (\zeta _k)^2 . \end{aligned}$$
(163)

Set \(Z^\prime = \sum _{ k\ge 1} (\zeta _k)^2\); by letting l go to \(\infty \) in (163), we get \(Z\! \ge \! Z^\prime \), which implies \(Z = Z^\prime \) a.s. since Z and \(Z^\prime \) have the same law. This completes the proof of the lemma. \(\square \)

Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that (162) holds true a.s. on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\,\times \! {\mathbf {C}}([0, \infty ) , {\mathbb {R}})\! \times \!\,({\mathbb {R}}^2)^{{\mathbb {N}}^*}\,\times \! [0, \infty )\), equipped with product topology.

We next prove the following:

Lemma 5.4

We keep the previous notation. Assume that (162) holds true almost surely. Then,

$$\begin{aligned} {\mathbf {P}}\text {-a.s. for all }k\! \ge \! 1, \qquad \big ( l^n_k , r^n_k\big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ( l_k, r_k \big ). \end{aligned}$$
(164)

Proof

Let \(\varepsilon \in (0, \infty ) \backslash \{ \zeta _k; k\! \ge \! 1\}\) and let \(k_\varepsilon \) be such that \(\zeta _k \! >\! \varepsilon \) for all \(k \in \{ 1, \ldots , k_\varepsilon \} \) and \(\zeta _k \! < \! \varepsilon \) for all \(k\! >\! k_\varepsilon \). Let \(k_\varepsilon ^\prime \! > \! k_\varepsilon \) be such that \(\sum _{k>k^\prime _\varepsilon } (\zeta _k)^2 \! < \! \varepsilon ^2/3\). Since \(k^\prime _\varepsilon \! >\! k_\varepsilon \), we also get \(\min _{1\le k\le k^\prime _\varepsilon } |\varepsilon \! -\! \zeta _k | \! < \! \varepsilon \). By Lemmas 5.2 and 5.3, there exists \(n_0\! \ge \! 1\) such that for all \(n\! \ge \! n_0\),

$$\begin{aligned}&\Big | \sum _{\quad 1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} (\zeta _k^n)^2\, -\! \sum _{k\ge 1} (\zeta _k)^2\, \Big |< \varepsilon ^2/3, \quad \sum _{1\le k\le k^\prime _\varepsilon } \big | (\zeta ^n_{j(n,k)})^2 \!-\! (\zeta _k)^2 \big |< \varepsilon ^2/ 3 \nonumber \\&\quad \text {and} \quad \max _{1\le k\le k^\prime _\varepsilon } \big | \zeta _k\! -\! \zeta _{j(n,k)}^n \big |< \! \min _{1\le k\le k^\prime _\varepsilon } |\varepsilon \! -\! \zeta _k | < \varepsilon . \end{aligned}$$
(165)

Set \(S_n = \{ 1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\} \backslash \{ j(n,1), \ldots , j(n, k^\prime _\varepsilon )\}\). The previous inequalities imply that for all \(n\! \ge \! n_0\), \(\sum _{k\in S_n} (\zeta ^n_k)^2 < \varepsilon ^2\). Thus, for all \(n\!\ge \! n_0\), if \(k\! \in \! S_n\), then \(\zeta ^n_k \! < \! \varepsilon \). Next observe that for all \(k \in \{k_\varepsilon + 1, \ldots , k^\prime _\varepsilon \}\),

$$\begin{aligned} \zeta ^n_{j(n, k)} \le \varepsilon -(\varepsilon -\zeta _k)+ \max _{1\le \ell \le k^\prime _\varepsilon } \big | \zeta _\ell \! -\! \zeta _{j(n,\ell )}^n \big |< \varepsilon + \min _{1\le \ell \le k^\prime _\varepsilon } |\varepsilon \! -\! \zeta _\ell | -|\varepsilon -\zeta _k| < \varepsilon \,, \end{aligned}$$

by (165). Also note that for all \(k \in \{ 1, \ldots , k_\varepsilon \}\),

$$\begin{aligned}\zeta ^n_{j(n, k)} \ge \zeta _k - \max _{1\le \ell \le k^\prime _\varepsilon } \big | \zeta _\ell \! -\! \zeta _{j(n,\ell )}^n \big |> \varepsilon + |\zeta _k -\varepsilon |- \min _{1\le \ell \le k^\prime _\varepsilon } |\varepsilon \! -\! \zeta _\ell | > \varepsilon \,, \end{aligned}$$

again by (165). To summarise, for all \(n\! \ge \! n_0\), \(\zeta ^n_{j(n, k)} \! >\! \varepsilon \) if \(k \in \{ 1, \ldots , k_\varepsilon \}\) and \(\zeta ^n_{j(n, k)} \! <\! \varepsilon \) for all \(k \in \{ k_\varepsilon +1, \ldots , {\mathbf {q}}_{\mathtt {w}_n} \}\). Since \(\zeta _1\!>\! \zeta _2 \!>\! \ldots \! >\! \zeta _{k_\varepsilon }\), there exists \(n_1\! \ge \! n_0\) such that for all \(n\! \ge \! n_1\), \( \zeta _{j(n,1)}^n \!>\! \zeta _{j(n,2)}^n \!>\! \ldots \! >\! \zeta _{j(n,k_\varepsilon )}^n\). Thus, for all \(n\! \ge \! n_1\) and for all \(k \in \{ 1, \ldots , k_\varepsilon \}\), we have proved that \(j(n,k) = k\), which entails (164) since \(\varepsilon \) can be chosen arbitrarily small. \(\square \)

Recall the notation \(\mathtt {H}_k\) and \(\mathtt {Y}_k\) for the excursions of \({\mathcal {H}}\) and \(Y\! -\! J\) above 0, respectively. We define the (rescaled) excursions of \(Y^{(n)} \! -\! J^{(n)}\) and of \({\mathcal {H}}^{(n)}\) above 0 as follows: for \(k \! \ge \! 1\) and \(t \in [0, \infty )\),

$$\begin{aligned} {\varvec{\mathtt {H}}}^{_{(n)}}_{k}(t)= {\mathcal {H}}^{_{(n)}}_{(l^n_k + t)\wedge r^n_k} \quad \text {and} \quad {\varvec{\mathtt {Y}}}^{_{(n)}}_{k}(t)= Y^{_{(n)}}_{(l^n_k + t)\wedge r^n_k}- J^{_{(n)}}_{l^n_k} . \end{aligned}$$
(166)

By Lemma 4.4 (iii), we have \(\Delta Y_{l_{k}}=0\) a.s. Then, by (162), Lemma 5.4 and Lemma B.4 (iii) in the Appendix, we immediately get the following:

Lemma 5.5

We keep the previous notation. Assume that (162) holds true almost surely. Then, \({\mathbf {P}}\)-a.s. for all \(k\! \ge \! 1\),

$$\begin{aligned} \big ({\varvec{\mathtt {Y}}}^{_{(n)}}_{k}, {\varvec{\mathtt {H}}}^{_{(n)}}_{k} , l^n_k, r^n_k \big ) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big ({\varvec{\mathtt {Y}}}_k, {\varvec{\mathtt {H}}}_k, l_k , r_k\big ). \end{aligned}$$
(167)

in \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\,\times \! {\mathbf {C}}([0, \infty ) , {\mathbb {R}})\! \times \![0, \infty )^2\).

Recall the definition of \({\varvec{\Pi }}= \big ( (s_p, t_p) \big )_{p\ge 1}\) in (17) and (18), and recall the notation \({\varvec{\Pi }}^{(n)} = \big ( (s^n_p, t^n_p) \big )_{1\le p\le {\mathbf {p}}_n}\) in (159). We next prove the following:

Lemma 5.6

Assume that (162) holds almost surely. Then, a.s. for all \(p\! \ge \! 1\), there exists \(k\! \ge \! 1\) such that \(l_k \!< \! s_p \le t_p \! < \! r_k\) and for all sufficiently large n, \(l^n_k \!< \! s^n_p \!< \! t^n_p \! < \! r^n_k\) and \((l^n_k,s^n_p, t^n_p, r^n_k) \! \rightarrow \! (l_k,s_p, t_p, r_k)\).

Proof

By Proposition 4.5 (i), \({\mathbf {P}}\)-a.s. for all \(p\! \ge \! 1\), \(Y_{t_p} \! >\! J_{t_p}\) and there exists \(k\! \ge \! 1\) such that \(t_p \in (l_k, r_k)\). By Lemma 4.4 (iii), we get \(Y_{l_k} \! -\! J_{l_k} = 0\). Note that \(y_p \in (0, Y_{t_p} \! -\! J_{t_p})\) and \(s_p = \inf \big \{ s \in [0, t_p] : \inf _{u\in [s, t_p]} Y_u\! -\! J_u > y_p \big \}\) by definition (18). Thus, we get \(l_k \!< \! s_p \le t_p \! < \! r_k\) and the proof is completed by (162) that asserts that \((s^n_p, t^n_p) \! \rightarrow \! (s_p, t_p)\) and by Lemma 5.4 that asserts that \((l^n_k, r^n_k)\! \rightarrow \! (l_k, r_k)\). \(\square \)

Proof of Theorem 2.5

In (44) recall that for all \(k\! \ge \! 1\), \({\varvec{\Pi }}_{k} = \big ( (s^k_p, t^k_p)\, ;\, 1 \le p \le {\mathbf {p}}_k \big )\) where \((t^k_p\, ;\, 1 \le p \le {\mathbf {p}}_k )\) increases and where the \((l_k +s^k_p, l_k+t^k_p)\) are exactly the terms \((s_{p^\prime } , t_{p^\prime })\) of \({\varvec{\Pi }}\) such that \(t_{p^\prime } \in [l_k, r_k]\). Similarly recall the definition in (42) of the sequence of pinching times \({\varvec{\Pi }}^{\mathtt {w}_{n}}_k\), \(1 \le k\! \le \! {\mathbf {q}}_{\mathtt {w}_n} \): namely, in their rescaled version, \(\frac{_1}{^{b_n}}{\varvec{\Pi }}_{k}^{\mathtt {w}_n} = \big ( (s^{n,k}_p, t^{n,k}_p) \, ; \, 1 \le p \le {\mathbf {p}}^{n}_k \big ) \), where \((t^{n, k}_p\, ; \, 1\! \le \! p\! \le \! {\mathbf {p}}^n_k)\) increases and where the \((l^n_k +s^{n, k}_p, l^n_k+t^{n,k}_p)\) are exactly the terms \((s^n_{p^\prime } , t^n_{p^\prime })\) of \({\varvec{\Pi }}^{(n)}\) such that \(t^n_{p^\prime } \in [l^n_k, r^n_k]\). Thus, Lemma 5.6 immediately entails that \({\mathbf {P}}\)-a.s. for all \(k\ge \! 1\), \( \frac{_1}{^{b_n}}{\varvec{\Pi }}_{k}^{\mathtt {w}_n}\! \rightarrow \! {\varvec{\Pi }}_k \) as \(n\! \rightarrow \! \infty \). This convergence combined with Lemma 5.5 implies Theorem 2.5. \(\square \)

5.3 Proof of Theorem 2.8

From Theorem 2.5, we derive Theorem 2.8, which states the convergence of the connected components of the graphs. We first prove (59), in which the connected components are indexed in the decreasing order of their measures. This result is obtained by soft arguments that follow from Lemma 2.7. We then prove (60), where the connected components are equipped with their counting measures, and we also prove that asymptotically the connected components are listed in the decreasing order of their numbers of vertices when \(\sqrt{{\mathbf {j}}_n}/b_n\! \rightarrow \! 0\). This result is more difficult to prove.

Proof of (59) We keep the previous notation and recall that \(\big ( {\varvec{\mathcal {G}}}_k^{\mathtt {w}_n}, d_{k}^{\mathtt {w}_n}, \varrho _k^{\mathtt {w}_n}, {\mathbf {m}}_k^{\mathtt {w}_n}\big )\), \(1 \le k \le {\mathbf {q}}_{\mathtt {w}_n}\), stand for the connected components of the \(\mathtt {w}_n\)-multiplicative random graph \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\). Here, \(d_{k}^{\mathtt {w}_n}\) stands for the graph-metric on \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_k\), \({\mathbf {m}}_k^{\mathtt {w}_n}\) is the restriction to \({\varvec{\mathcal {G}}}_k^{\mathtt {w}_n}\) of the measure \({\mathbf {m}}_{\mathtt {w}_n} = \sum _{j\ge 1} w^{_{(n)}}_{j} \delta _j\), \(\varrho _k^{\mathtt {w}_n}\) is the first vertex of \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_k\) that is visited during the exploration of \({\varvec{\mathcal {G}}}_{\mathtt {w}_n}\), and the indexation is such that \({\mathbf {m}}_1^{\mathtt {w}_n} \big ( {\varvec{\mathcal {G}}}_1^{\mathtt {w}_n}\big ) \ge \cdots \ge {\mathbf {m}}_{{\mathbf {q}}_{\mathtt {w}_n}}^{\mathtt {w}_n} \big ( {\varvec{\mathcal {G}}}_{{\mathbf {q}}_{\mathtt {w}_n}}^{\mathtt {w}_n}\big )\).

Next, recall that \({\varvec{\mathtt {H}}}^{_{(n)}}_k (\cdot )\), defined in (166), stands for the k-th longest excursion of \({\mathcal {H}}^{\mathtt {w}_n}\) that is rescaled in time by a factor \(1/b_n\) and rescaled in space by a factor \(a_n / b_n\); similarly, \(\frac{_1}{^{b_n}}{\varvec{\Pi }}_{k}^{\mathtt {w}_n} = \big ( (s^{n,k}_p, t^{n,k}_p); 1 \le p \le {\mathbf {p}}^{n}_k \big )\) is the (\(1/b_n\)-rescaled) finite sequence of pinching times of \({\varvec{\mathtt {H}}}^{_{(n)}}_k\). Then, for all \(k \in \{ 1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}\) the compact measured metric space \( {\mathbf {G}}^{_{(n)}}_{k}:= \big ( {\varvec{\mathcal {G}}}_k^{\mathtt {w}_n} , \frac{_{_{a_n}}}{^{^{b_n}}}d_{k}^{\mathtt {w}_n} , \varrho _k^{\mathtt {w}_n}, \frac{_{_{1}}}{^{^{b_n}}}{\mathbf {m}}_k^{\mathtt {w}_n} \big )\) is isometric to \(G\big ( {\varvec{\mathtt {H}}}_k^{_{(n)}} , \frac{_{_1}}{^{^{b_n}}}{\varvec{\Pi }}_{k}^{\mathtt {w}_n} , \frac{_{a_n}}{^{b_n}} \big )\), the compact measured metric space encoded by \( {\varvec{\mathtt {H}}}_k^{_{(n)}}\) and the pinching setup \(( \frac{_1}{^{b_n}}{\varvec{\Pi }}_{k}^{\mathtt {w}_n} , \frac{_{a_n}}{^{b_n}})\) as defined in (52). On the other hand for the limit processes, \({\varvec{\mathtt {H}}}_k (\cdot )\) stands for the k-th longest excursion of \({\mathcal {H}}\) and \({\varvec{\Pi }}_k = \big ( (s^{k}_p, t^{k}_p); 1 \le p \le {\mathbf {p}}_k \big )\) is the finite sequence of pinching times of \({\varvec{\mathtt {H}}}_k\). Then, for all \(k\! \ge \! 1\), the compact measured metric space \( {\mathbf {G}}_{k} :=\big ( {\mathbf {G}}_{k} , \mathrm {d}_{k}, \varrho _{k} , {\mathbf {m}}_{k} \big ) \) is isometric to \( G ({\varvec{\mathtt {H}}}_k, {\varvec{\Pi }}_k, 0)\) that is the compact measured metric space encoded by \( {\varvec{\mathtt {H}}}_k\) and the pinching setup \(( {\varvec{\Pi }}_{k} ,0)\) as defined in (52). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence in Theorem 2.5 holds almost surely. Namely a.s. for all \(k\! \ge \! 1\), \(\big ( {\varvec{\mathtt {H}}}^{_{(n)}}_k , \zeta ^n_k, \frac{_1}{^{b_n}} {\varvec{\Pi }}^{\mathtt {w}_n}_k\big )\! \rightarrow \! \big ( {\varvec{\mathtt {H}}}_k , \zeta _k, {\varvec{\Pi }}_k\big )\) on \({\mathbf {C}}([0, \infty ), {\mathbb {R}}) \times [0, \infty ) \times ({\mathbb {R}}^2)^{{\mathbb {N}}^*}\). We next fix \(k\! \ge \! 1\); then for all sufficiently large n, \(\frac{_1}{^{b_n}} {\varvec{\Pi }}^{\mathtt {w}_n}_k\) and \({\varvec{\Pi }}_k\) have the same number of points: namely, \({\mathbf {p}}^{n}_k = {\mathbf {p}}_k\) and for \( 1 \le p \le {\mathbf {p}}^{n}_k = {\mathbf {p}}_k\),

$$\begin{aligned} (s^{n,k}_p, t^{n,k}_p) \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; (s^k_p, t^k_p) \; . \end{aligned}$$
(168)

Recall the definition in (53) of the Gromov–Hausdorff–Prokhorov distance \({\varvec{\delta }}_{\mathrm {GHP}}\). We next apply Lemma 2.7 with \((h,h^\prime ) = ( {\varvec{\mathtt {H}}}_k, {\varvec{\mathtt {H}}}^{_{(n)}}_k )\), \((\Pi , \Pi ^\prime ) = ({\varvec{\Pi }}_k, \frac{_1}{^{b_n}} {\varvec{\Pi }}^{\mathtt {w}_n}_k)\), \((\varepsilon , \varepsilon ^\prime ) = (0, a_n/b_n)\) and \(\delta = \delta _n = \max _{1\le p \le {\mathbf {p}}_k} |s^k_p\! -\! s^{n,k}_p|\vee |t^k_p\! -\! t^{n,k}_p|\). Then, by (55),

$$\begin{aligned} {\varvec{\delta }}_{\mathrm {GHP}} ( {\mathbf {G}}_{k}, {\mathbf {G}}^{_{(n)}}_{k} ) \le 6({\mathbf {p}}_k+1) \big ( \Vert {\varvec{\mathtt {H}}}_k \! -\! {\varvec{\mathtt {H}}}_k^{_{(n)}} \Vert _\infty + \omega _{\delta _n} ({\varvec{\mathtt {H}}}_k) \big ) + 3a_n{\mathbf {p}}_k /b_n + |\zeta _k\! -\! \zeta ^n_k|, \end{aligned}$$
(169)

where \( \omega _{\delta _n} ({\varvec{\mathtt {H}}}_k) = \max \{ |{\varvec{\mathtt {H}}}_k (t) \! -\! {\varvec{\mathtt {H}}}_k (s)|; s, t \in [0, \infty ): |s\! -\! t| \! \le \delta _n \}\). By (168), \(\delta _n\! \rightarrow \! 0\); since \({\varvec{\mathtt {H}}}_k\) is continuous and null on \([\zeta _k, \infty )\), it is uniformly continuous and \( \omega _{\delta _n} ({\varvec{\mathtt {H}}}_k)\! \rightarrow \! 0\); recall also that \(a_n / b_n\! \rightarrow \! 0\). Thus, the right-hand side of (169) goes to 0 as \(n\! \rightarrow \! \infty \). We have therefore proved that a.s. for all \(k\! \ge \! 1\), \( {\varvec{\delta }}_{\mathrm {GHP}} ( {\mathbf {G}}_{k}, {\mathbf {G}}^{_{(n)}}_{k} ) \! \rightarrow \! 0\), which implies (59) in Theorem 2.8. \(\square \)

Proof of (60) We next prove the convergence of the connected components equipped with the counting measure. Recall that in the introduction we have introduced the discrete tree \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\) encoded by the \(\mathtt {w}_n\)-LIFO queue without repetition (namely, the tree encoded by \({\mathcal {H}}^{\mathtt {w}_n}\)): the vertices of \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\) are the clients; the server is the root (Client 0) and Client j is a child of Client i in \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\) if and only if Client j interrupts the service of Client i (or arrives when the server is idle if \(i\! =\! 0\)). We denote by \({\mathcal {C}}^{\mathtt {w}_n}\) the contour process associated with \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\), which is informally defined as follows: suppose that \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\) is embedded in the oriented half plane in such a way that edges have length one and that orientation reflects lexicographical order of visit; we think of a particle starting at time 0 from the root of \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\) and exploring the tree from left to right, backtracking as little as possible and moving continuously along the edges at unit speed. Since \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\) is finite, the particle crosses each edge twice (upwards first and then downwards). For all \(s \in [0, \infty )\), we define \({\mathcal {C}}^{\mathtt {w}_n}_s \) as the distance at time s of the particle from the root of \({\varvec{\mathcal {T}}}_{\! \! \mathtt {w}_n}\). We refer to Le Gall & D. [19] (Section 2.4, Chapter 2, pp. 61-62) for a formal definition and the connection with the height process (see also the end of Sect. 3.2).

It is important to notice that the trees encoded by \({\mathcal {C}}^{\mathtt {w}_n}\) and by \({\mathcal {H}}^{\mathtt {w}_n}\) are the same: the only difference is the measure induced by the two different coding functions. More precisely, \({\mathcal {C}}^{\mathtt {w}_n}\) is derived from \({\mathcal {H}}^{\mathtt {w}_n}\) by the following time-change: recall that \({\mathbf {j}}_n = \max \{ j\! \ge \! 1\! : \! w^{_{(n)}}_{^j} \! >\! 0 \}\) and let \((\xi ^n_k)_{1\le k\le 2{\mathbf {j}}_n}\) be the sequence of jump-times of \({\mathcal {H}}^{\mathtt {w}_n}\): namely, \(\xi ^n_{k+1} = \inf \{ s\! >\! \xi ^n_k \! : {\mathcal {H}}^{\mathtt {w}_n}_s \! \ne \! {\mathcal {H}}^{\mathtt {w}_n}_{\xi ^n_k} \}\), for all \(0 \le k \! < \! 2{\mathbf {j}}_n \), with the convention \(\xi ^n_0 = 0\). We then set, for \(t \in [0, \infty )\),

$$\begin{aligned} \Phi _n (t) =\sum _{\quad 1 \le k\le 2{\mathbf {j}}_n}\, \mathbf{1}_{[0, t]} (\xi ^n_k) \; \text {and} \; \phi _n (s) = \inf \big \{ t \in [0, \infty ) \! :\! \Phi _n (t) \! \ge \! s \big \}, \ s \in [0, 2{\mathbf {j}}_n], \; \end{aligned}$$
(170)

Note that \(\phi _n (k)=\xi ^n_k\). Then, we obtain, for \(t \in [0, \infty )\),

$$\begin{aligned} {\mathcal {C}}^{\mathtt {w}_n}_{\Phi _n (t)}= {\mathcal {H}}^{\mathtt {w}_n}_t \; \quad \text {and} \quad {\mathcal {C}}^{\mathtt {w}_n}_{k} = {\mathcal {H}}^{\mathtt {w}_n}_{\xi ^n_k}={\mathcal {H}}^{\mathtt {w}_n}_{\phi _n(k)}\;, \quad \text {for all } k \in \{ 0, \ldots , 2 {\mathbf {j}}_n\} . \end{aligned}$$
(171)

We next set, for \(t \in [0, \infty )\),

$$\begin{aligned} R^n_t = \sum _{j\ge 1} \mathbf{1}_{\{ E^{\mathtt {w}_n}_j \le t \}} \end{aligned}$$
(172)

which counts the number of clients who have entered the \(\mathtt {w}_n\)-LIFO queue governed by \(Y^{\mathtt {w}_n}\) by time t. Note that \(E^{{\mathtt {w}_n}}_{j}\) is the first jump-time of \(N^{\mathtt {w}_n}_j\): namely, the \(E^{\mathtt {w}_n}_j\) are independent exponentially distributed r.v. with respective parameters \(w_{^j}^{_{(n)}}/ \sigma _1 (\mathtt {w}_n)\). In terms of the tree \({\varvec{\mathcal {T}}}_{\!\! \mathtt {w}_n}\), \(R^n_t\) is the number of distinct vertices that have been explored by \({\mathcal {H}}^{\mathtt {w}_n}\) up to time t. By arguing as in the proof of (94), we easily check that, for each \(t \in [0, \infty )\), we have

$$\begin{aligned} \Phi _n (t)= 2 R^n_t - {\mathcal {H}}^{\mathtt {w}_n}_t \; . \end{aligned}$$
(173)
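
Informally, and only as a reading aid (the rigorous verification proceeds as in the proof of (94), as indicated above): recalling that \({\mathcal {H}}^{\mathtt {w}_n}_t\) may be interpreted as the number of clients present in the \(\mathtt {w}_n\)-LIFO queue at time t, each of the \(R^n_t\) clients arrived by time t contributes one up-jump of \({\mathcal {H}}^{\mathtt {w}_n}\) (at its arrival time) and one down-jump (at the end of its service), and exactly \({\mathcal {H}}^{\mathtt {w}_n}_t\) of these down-jumps have not yet occurred at time t; counting jump-times therefore gives

$$\begin{aligned} \Phi _n (t) = R^n_t + \big ( R^n_t - {\mathcal {H}}^{\mathtt {w}_n}_t \big ) = 2 R^n_t - {\mathcal {H}}^{\mathtt {w}_n}_t \; . \end{aligned}$$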

We prove the following

Lemma 5.7

We keep the previous notation. Then for all \(t \in [0, \infty )\), we have

$$\begin{aligned} {\mathbf {E}}\Big [ \!\! \sup _{\;\; s\in [0, t]} \! \big | R^n_s \!-\! s\big | \, \Big ] \le 2\sqrt{t} + \frac{t^2\sigma _2 (\mathtt {w}_n)}{2\sigma _1 (\mathtt {w}_n)^2} \; . \end{aligned}$$
(174)

Moreover, there exists a positive r.v. \(Q_n\) that is a measurable function of \((N^{\mathtt {w}_n}_j)_{j \ge 1}\), such that \({\mathbf {E}}[Q_n^2]\! \le \! 4{\mathbf {j}}_n \) (recall that \({\mathbf {j}}_n \! := \! \max \{ j\! \ge \! 1: w_j^{(n)} \! >\! 0 \}\)) and such that \({\mathbf {P}}\)-a.s. for all \(s, t \in [0, \infty )\),

$$\begin{aligned} R^n_{t+s}-R^n_t \le s+ 2Q_n \; . \end{aligned}$$
(175)

Proof

Set \(M_j(t) = \mathbf{1}_{\{ E^{\mathtt {w}_n}_j \le t \}}\! -\! \frac{w_j^{(n)}}{\sigma _1(\mathtt {w}_n)} (t \wedge E^{\mathtt {w}_n}_j)\) and denote by \(({\mathscr {G}}_t)\) the natural filtration associated with the \((N^{\mathtt {w}_n}_j)_{j \ge 1}\). Standard results on point processes tell us that the \((M_j)_{j\ge 1}\) are independent \(({\mathscr {G}}_t)\)-martingales with \(\sigma _1(\mathtt {w}_n)\langle M_{j}\rangle _{t} = \int _{0}^{t}w_j^{(n)}\mathbf{1}_{\{s<E^{\mathtt {w}_{n}}_{j}\}}ds\), so that \({\mathbf {E}}\big [ M_j (t)^2\big ]\! = {\mathbf {E}}[\langle M_{j}\rangle _{t}]=\! 1\! -\! \exp (-w_{^j}^{_{(n)}} t/\sigma _1 (\mathtt {w}_n)) \le w_{^{j}}^{_{(n)}} t/\sigma _1 (\mathtt {w}_n)\), since \({\mathbf {E}}[t\wedge E^{\mathtt {w}_n}_j] = \frac{\sigma _1(\mathtt {w}_n)}{w_j^{(n)}}\big (1 - e^{-w_j^{(n)}t/\sigma _1(\mathtt {w}_n)}\big )\). We then set \(M(t) = \sum _{1\le j \le {\mathbf {j}}_n} M_j (t)\). Then M is a \(({\mathscr {G}}_t)\)-martingale and Doob’s \(L^2\) inequality implies that \({\mathbf {E}}[\sup _{s\in [0, t]} M (s)^{2}] \le 4 {\mathbf {E}}[M (t)^2] \le 4 t\). Thus, \({\mathbf {E}}[\sup _{s\in [0, t]} |M (s)| \, ] \le 2\sqrt{t}\). Next, we set \({\overline{M}}(t) = R^n_t \! -\! M(t)\). Since \(\sum _{j\ge 1} w_{^j}^{_{(n)}}/\sigma _1(\mathtt {w}_n) = 1\), we easily check the following:

$$\begin{aligned} t\! -\! {\overline{M}}(t) = \sum _{j \ge 1} \frac{_{w_j^{(n)}}}{^{\sigma _1(\mathtt {w}_n)}} \big (t\! -\! E^{\mathtt {w}_n}_j \big )\mathbf{1}_{\{ E^{\mathtt {w}_n}_j \le t \}} \, , \end{aligned}$$

which is nonnegative and nondecreasing in t so that \(\sup _{s\in [0, t]} |s\! -\! {\overline{M}} (s)| = t\! -\! {\overline{M}}(t)\). Moreover, for all \(j\! \ge \! 1\), we check that

$$\begin{aligned}\frac{_{w_j^{(n)}}}{^{\sigma _1(\mathtt {w}_n)}} {\mathbf {E}}\big [\big (t\! -\! E^{\mathtt {w}_n}_j \big )\mathbf{1}_{\{ E^{\mathtt {w}_n}_j \le t\}} \big ] = e^{-w_{j}^{{(n)}} t/\sigma _1 (\mathtt {w}_n)} \!-\! 1 + \frac{_{w_j^{(n)}}}{^{\sigma _1(\mathtt {w}_n)}} t \, \le \, \frac{_1}{^2} t^2 \big ( w_j^{(n)} /\sigma _1 (\mathtt {w}_n)\big )^2 \; .\end{aligned}$$

This implies that \({\mathbf {E}}[\sup _{s\in [0, t]} |s\! -\! {\overline{M}} (s)| \, ] = {\mathbf {E}}[ t\! -\! {\overline{M}}(t) ] \le t^2 \sigma _2 (\mathtt {w}_n) / (2\sigma _1 (\mathtt {w}_n)^2)\), which easily completes the proof of (174) thanks to the previous inequality regarding M.

Let us prove (175). To that end, observe that \(\lim _{t\rightarrow \infty } {\mathbf {E}}[M_j (t)^2] = 1\). Thus, \(\lim _{t\rightarrow \infty } {\mathbf {E}}[M (t)^2] = {\mathbf {j}}_n\) and Doob’s inequality entails that \({\mathbf {E}}[\sup _{t\in [0, \infty )} M^2 (t)] \le 4 {\mathbf {j}}_n\). We then set \(Q_n = \sup _{t\in [0, \infty )}| M (t) | \) and we get almost surely for all \(t,s \in [0, \infty )\), \(R^n_{t+s}\! -\! R^n_t = M(t+s) \! -\! M(t)+ {\overline{M}} (t+s) \! -\! {\overline{M}} (t) \le 2Q_n + {\overline{M}} (t+s) \! -\! {\overline{M}} (t)\). Since for all \(a \in [0, \infty )\), the function \(t \mapsto t\wedge a\) is 1-Lipschitz and since \({\overline{M}} \) is a convex combination of these functions, \({\overline{M}} \) is also 1-Lipschitz: namely, \(| {\overline{M}} (t+s) \! -\! {\overline{M}} (t)| \le s\), which completes the proof of (175). \(\square \)

By (173) and (174) we easily get for all \(t, \varepsilon \in (0, \infty )\)

$$\begin{aligned}&{\mathbf {P}}\big ( \sup _{s\in [0, t]} | \frac{_1}{^{b_n}}\Phi _n (b_n s) \! -\! 2s |> 2\,\varepsilon \big ) \le {\mathbf {P}}\big ( \! \sup _{\quad s\in [0, b_nt]} | R^n_s \! -\! s |> \,b_n\varepsilon /2 \big )+ {\mathbf {P}}\big ( \sup _{\quad s\in [0, b_nt]} {\mathcal {H}}^{\mathtt {w}_n}_s> b_n \varepsilon \big ) \\&\quad \le 4\varepsilon ^{-1} \sqrt{t/b_n}+ \frac{t^2b_n \sigma _2 (\mathtt {w}_n)}{\varepsilon \sigma _1 (\mathtt {w}_n)^2} + {\mathbf {P}}\big ( \sup _{s\in [0, t]} \frac{_{a_n}}{^{b_n}}{\mathcal {H}}^{\mathtt {w}_n}_{b_ns} > a_n \varepsilon \big ) \,. \end{aligned}$$

Thus, by (21) and (39) in Theorem 2.4, we get \(\lim _{n\rightarrow \infty } {\mathbf {P}}\big ( \sup _{s\in [0, t]} | \frac{_1}{^{b_n}}\Phi _n (b_n s) \! -\! 2s | > 2\,\varepsilon \big )\! =\! 0\). This proves that \(\frac{_1}{^{b_n}}\Phi _n (b_n \cdot )\) converges to \(2\mathrm {Id}\) in probability on \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\), where \(\mathrm {Id}\) stands for the identity map on \([0, \infty )\). Then, standard arguments also imply that \(\frac{_1}{^{b_n}}\phi _n (b_n \cdot )\) converges to \(\frac{_1}{^2}\mathrm {Id}\) in probability on \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\). We also note that on any interval \([k, k+1]\) where k is an integer, \({\mathcal {C}}^{\mathtt {w}_{n}}_{t}\) is a linear interpolation between \({\mathcal {C}}^{\mathtt {w}_{n}}_{k}\) and \({\mathcal {C}}^{\mathtt {w}_{n}}_{k+1}\). These convergences combined with Theorem 2.4 imply

$$\begin{aligned} \big ( \frac{_{_1}}{^{^{a_n}}} A^{\mathtt {w}_n}_{b_n \cdot }\, , \frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n \cdot } , \, \frac{_{_{a_n}}}{^{^{b_n}}} {\mathcal {H}}^{\mathtt {w}_n}_{b_n \cdot } \, , \frac{_{_{a_n}}}{^{^{b_n}}} {\mathcal {C}}^{\mathtt {w}_n}_{b_n \cdot } \, , \frac{_{_1}}{^{^{b_n}}} {\varvec{\Pi }}_{ \mathtt {w}_n}\, , \frac{_{_1}}{^{^{b_n}}} \Phi _n ({\varvec{\Pi }}_{ \mathtt {w}_n})\big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! \longrightarrow } \big ( A, Y, {\mathcal {H}}, {\mathcal {H}}_{\cdot /2}, {\varvec{\Pi }},2 {\varvec{\Pi }}\big ), \end{aligned}$$
(176)

weakly on the appropriate space.
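
Let us briefly comment on the fourth coordinate in (176); this is only a heuristic, the precise argument combining the convergence in probability of \(\frac{_1}{^{b_n}}\phi _n (b_n \cdot )\) to \(\frac{_1}{^2}\mathrm {Id}\) with the linear-interpolation remark above. By (171), up to the linear interpolation on the integer intervals \([k, k+1]\),

$$\begin{aligned} \frac{a_n}{b_n} {\mathcal {C}}^{\mathtt {w}_n}_{b_n s} \, = \, \frac{a_n}{b_n} {\mathcal {H}}^{\mathtt {w}_n}_{\phi _n (b_n s)} \, = \, \frac{a_n}{b_n} {\mathcal {H}}^{\mathtt {w}_n}_{b_n \cdot \frac{1}{b_n} \phi _n (b_n s)} \, \approx \, \frac{a_n}{b_n} {\mathcal {H}}^{\mathtt {w}_n}_{b_n s /2} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; {\mathcal {H}}_{s/2} \; , \end{aligned}$$

which accounts for the time-change by \(1/2\) in the limit of the rescaled contour processes.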

We now deal with the excursions of \({\mathcal {C}}^{\mathtt {w}_n}\) above 0, which are the contour processes of the spanning trees \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k }\), \(1 \le k \le {\mathbf {q}}_{\mathtt {w}_n}\), of the \({\mathbf {q}}_{\mathtt {w}_n}\) connected components of \({\varvec{\mathcal {G}}}_{\! \mathtt {w}_n}\). Note that the trees \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k }\) are also the connected components obtained from the tree \({\varvec{\mathcal {T}}}_{\!\! \mathtt {w}_n}\) after removing its root. Recall that \([l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k)\) are the excursion intervals of \({\mathcal {H}}^{\mathtt {w}_n}\) above 0. Namely, \(\bigcup _{1\le k\le {\mathbf {q}}_{\mathtt {w}_n}} [l^{\mathtt {w}_n}_k, r^{\mathtt {w}_n}_k) \! =\! \{ t \in [0, \infty )\! :\! {\mathcal {H}}^{\mathtt {w}_n}_t \! >\! 0 \}\). Recall that the excursion intervals are listed in the decreasing order of their lengths and \(\mathtt {H}^{\mathtt {w}_n}_k (t) = {\mathcal {H}}^{\mathtt {w}_n} ((l^{\mathtt {w}_n}_k + t)\! \wedge r^{\mathtt {w}_n}_k)\), \(t \in [0, \infty )\), is the k-th longest excursion process of \({\mathcal {H}}^{\mathtt {w}_n}\) above 0. Recall the notation \({\varvec{\Pi }}^{\mathtt {w}_n}_k = ((s^{n,k}_p, t^{n,k}_p); 1 \le p \le {\mathbf {p}}_k^n)\) for the sequence of pinching times falling into the k-th longest excursion. Also recall that \({\mathbf {m}}^{\mathtt {w}_n} = \sum _{j\ge 1} w^{_{(n)}}_{^j} \delta _j \) and \({\mathbf {m}}^{\mathtt {w}_n}_k\) is the restriction to \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k}\) of \({\mathbf {m}}^{\mathtt {w}_n}\). Note that \(\big ({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k} , d_{\mathrm {gr}} , \varrho _{k}^{\mathtt {w}_n} , {\mathbf {m}}^{\mathtt {w}_n}_k\big )\) stands for the measured tree encoded by \(\mathtt {H}_k^{\mathtt {w}_n}\) and that \(\big ( {\varvec{\mathcal {G}}}_k^{\mathtt {w}_n}, d_{k}^{\mathtt {w}_n}, \varrho _k^{\mathtt {w}_n}, {\mathbf {m}}_k^{\mathtt {w}_n}\big )\) is the measured graph encoded by \(\mathtt {H}_k^{\mathtt {w}_n}\) and the pinching setup \(({\varvec{\Pi }}_k^{\mathtt {w}_n}, 1)\), which means that \( {\varvec{\mathcal {G}}}_k^{\mathtt {w}_n}\) is isometric to the graph \(G( \mathtt {H}_k^{\mathtt {w}_n}, {\varvec{\Pi }}_k^{\mathtt {w}_n}, 1)\) as defined in (52) and it is the k-th largest (with respect to the measure \({\mathbf {m}}^{\mathtt {w}_n}\)) connected component of \({\varvec{\mathcal {G}}}_{\! \mathtt {w}_n}\). We next set, for all \(k \in \{ 1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}\), \({\overline{l}}^n_k = \Phi _n (l^{\mathtt {w}_n}_k)\), \({\overline{r}}^n_k = \Phi _n (r^{\mathtt {w}_n}_k)\),

$$\begin{aligned} \mathtt {C}^{\mathtt {w}_n}_k (t) = {\mathcal {H}}^{\mathtt {w}_n} \big ( \phi _n ( ({\overline{l}}^n_k + t) \wedge {\overline{r}}^n_k) \big ) = \mathtt {H}^{\mathtt {w}_n}_k \big ( \phi _n ( {\overline{l}}^n_k + t)\! -\! l_k^{\mathtt {w}_n} \big )\;, \end{aligned}$$

and

$$\begin{aligned}{\overline{{\varvec{\Pi }}}}^{\mathtt {w}_n}_k = \big ( (\Phi _n (l^{\mathtt {w}_n}_k + s^{n,k}_p)\! -\! {\overline{l}}^n_k \, , \, \Phi _n (l^{\mathtt {w}_n}_k + t^{n,k}_p)\! -\! {\overline{l}}^n_k)\big )_{1\le p \le {\mathbf {p}}_k^n} .\end{aligned}$$

Then, we easily check the following:

(i):

\(\{ t \in [0, \infty )\! : \! {\mathcal {C}}^{\mathtt {w}_n}_t \! >\! 0 \} \! = \! \bigcup _{1 \le k \le {\mathbf {q}}_{\mathtt {w}_n}} [{\overline{l}}^n_k , {\overline{r}}^n_k)\).

(ii):

\( \mathtt {C}^{\mathtt {w}_n}_k (\cdot ) -1\) is the contour process of \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k}\). We denote by \(\varvec{\nu }^{\mathtt {w}_n}_k\) the measure that the contour process induces on \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k}\): namely, \(\big ({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\!\! k} , d_{\mathrm {gr}} , \varrho _{k}^{\mathtt {w}_n} , \varvec{\nu }^{\mathtt {w}_n}_k)\) is the measured tree encoded by \( \mathtt {C}^{\mathtt {w}_n}_k (\cdot ) -1\).

(iii):

\(\big ( {\varvec{\mathcal {G}}}_k^{\mathtt {w}_n}, d_{k}^{\mathtt {w}_n}, \varrho _k^{\mathtt {w}_n}, \varvec{\nu }_k^{\mathtt {w}_n}\big )\) is isometric to \( G \big ( \mathtt {C}^{\mathtt {w}_n}_k (\cdot ) -1, {\overline{{\varvec{\Pi }}}}^{\mathtt {w}_n}_k, 1)\).

Since \((b_n^{-1} \Phi _n (b_n \cdot ) , b_n^{-1} \phi _n (b_n \cdot )) \! \rightarrow \! (2\mathrm {Id}, \frac{_1}{^2} \mathrm {Id})\) in probability on \({\mathbf {C}}([0, \infty ), {\mathbb {R}})^2\), we easily get from Theorem 2.5 that

$$\begin{aligned} \big (\big ( \frac{_{_{a_n}}}{^{^{b_n}}} \mathtt {C}^{\mathtt {w}_n}_k (b_n \cdot ) , \, \frac{_{_{1}}}{^{^{b_n}}} {\overline{l}}_k^{n} , \, \frac{_{_{1}}}{^{^{b_n}}} {\overline{r}}_k^{n}, \, \frac{_{_{1}}}{^{^{b_n}}}{\overline{{\varvec{\Pi }}}}_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ({\varvec{\mathtt {H}}}_k (\cdot /2), 2l_k, 2r_k, 2{\varvec{\Pi }}_k\big ) \big )_{k\ge 1} \end{aligned}$$
(177)

weakly on \(({\mathbf {C}}([0, \infty ), {\mathbb {R}})\! \times \! [0, \infty )^2 \! \times \! ({\mathbb {R}}^2)^{{\mathbb {N}}^*})^{{\mathbb {N}}^*}\) equipped with the product topology, with obvious notation. Then, by Lemma 2.7 and the same argument as in the proof of (59), we get

$$\begin{aligned} \big (\big ( {\varvec{\mathcal {G}}}_{\! k}^{\mathtt {w}_n} , \frac{_{_{a_n}}}{^{^{b_n}}}d_{k}^{\mathtt {w}_n} , \varrho _k^{\mathtt {w}_n}, \frac{_{_{1}}}{^{^{b_n}}}\varvec{\nu }_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ( {\mathbf {G}}_{k} , \mathrm {d}_{k}, \varrho _{k} , 2{\mathbf {m}}_{k} \big ) \big )_{k\ge 1} \end{aligned}$$
(178)

weakly on \({\mathbb {G}}^{{\mathbb {N}}^*}\) equipped with the product topology. The last step in the proof of (60) consists in comparing the measure \(\varvec{\nu }_k^{\mathtt {w}_n}\) with the counting measure \(\varvec{\mu }_k^{\mathtt {w}_n}\). We will rely on the following:

Lemma 5.8

Let us denote by \(\varvec{\mu }^{\mathtt {w}_n}_{k}\) the counting measure on \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}\). We equip \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}\) with the graph distance and for all non-empty subsets of vertices A we denote by \(A^{(1)}\) the set of vertices at graph-distance at most 1 from A. Then

$$\begin{aligned} \varvec{\nu }^{\mathtt {w}_n}_{k} (A) \le 2\varvec{\mu }^{\mathtt {w}_n}_{k} \big ( A^{(1)} \big ) +1\quad \text {and} \quad 2\varvec{\mu }^{\mathtt {w}_n}_{k} ( A) \le \varvec{\nu }^{\mathtt {w}_n}_{k} \big ( A^{(1)} \big ) +1. \end{aligned}$$
(179)

Proof

Since adding edges only diminishes the graph distance, it is sufficient to prove (179) on \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\! \! k}\) equipped with the graph-distance \(d_{\mathrm {gr}}\). Recall that \(\varrho ^{\mathtt {w}_n}_k\) is the root of \({\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\! \! k}\). To simplify notation we set \({\varvec{\mathcal {T}}}= {\varvec{\mathcal {T}}}^{\mathtt {w}_n}_{\! \! k}\), \(\varrho = \varrho ^{\mathtt {w}_n}_k\), \(\varvec{\nu }= \varvec{\nu }^{\mathtt {w}_n}_{k}\), \(\varvec{\nu }^\prime = \delta _{\varrho }+ \varvec{\nu }\) and \(\varvec{\mu }= \varvec{\mu }^{\mathtt {w}_n}_{k}\). Since the contour process of \({\varvec{\mathcal {T}}}\) crosses twice each edge, we easily get \( \varvec{\nu }^\prime = \delta _{\varrho }+ \sum _{v\in {\varvec{\mathcal {T}}}} \texttt {deg}(v) \delta _{v} = \varvec{\mu }+ \varvec{\mu }\circ f^{-1}\) where f(v) is the parent of \(v\in {\varvec{\mathcal {T}}}\) for \(v\! \ne \! \varrho \) and \(f(\varrho ) = \varrho \). Let \(M = \sum _{v\in {\varvec{\mathcal {T}}}} (\delta _{(v,v)}+ \delta _{(v, f(v))})\) that is a measure on \({\varvec{\mathcal {T}}}\! \times \! {\varvec{\mathcal {T}}}\) such that \(M (A \! \times \! {\varvec{\mathcal {T}}}) = 2\varvec{\mu }(A)\) and \(M ({\varvec{\mathcal {T}}}\! \times \! A) = \varvec{\nu }^\prime (A)\). Then, set \(D = \{ (v,v^\prime ) \in {\varvec{\mathcal {T}}}\! \times \! {\varvec{\mathcal {T}}}\! : \! d_{\mathrm {gr}} (v, v^\prime ) \le 1 \}\). Since \(d_{\mathrm {gr}} (f(v), v) \le 1\), M is supported on D. Next, observe that \((A \! \times \! {\varvec{\mathcal {T}}})\cap D \! \subset \! {\varvec{\mathcal {T}}}\! \times \! A^{(1)}\) and similarly \(D \cap ({\varvec{\mathcal {T}}}\! \times \! A) \! \subset \! A^{(1)} \! \times \! {\varvec{\mathcal {T}}}\), which easily entails (179). \(\square \)

Since \(d^{\mathtt {w}_n}_k\) is the graph-distance on \({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}\), we easily see that on the rescaled space \(({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}, \frac{a_n}{b_n} d^{\mathtt {w}_n}_k)\), (179) implies that \(\tfrac{1}{b_n}\varvec{\nu }^{\mathtt {w}_n}_{k} (A) \le \tfrac{2}{b_n}\varvec{\mu }^{\mathtt {w}_n}_{k} ( A^{(a_n/b_n)} )+\tfrac{1}{b_n}\) and \(\tfrac{2}{b_n}\varvec{\mu }^{\mathtt {w}_n}_{k} (A) \le \tfrac{1}{b_n}\varvec{\nu }^{\mathtt {w}_n}_{k} ( A^{(a_n/b_n)} )+\tfrac{1}{b_n}\), for any subset of vertices A, where \(A^{(a_n/b_n)}\) stands for the \((a_n/b_n)\)-enlargement of A in the rescaled metric. Since \(b_n^{-1} \le a_n/b_n\) (for all sufficiently large n), the standard characterisation of the Prokhorov distance in terms of \(\varepsilon \)-enlargements yields \({d^{\mathrm {Prok}}_{{\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}}} \big ( \frac{1}{b_n} \varvec{\nu }^{\mathtt {w}_n}_{k}, \frac{2}{b_n} \varvec{\mu }^{\mathtt {w}_n}_{k}\big ) \le a_n/b_n\). This combined with (178) entails

$$\begin{aligned}\big (\big ( {\varvec{\mathcal {G}}}_{\! k}^{\mathtt {w}_n} , \frac{_{_{a_n}}}{^{^{b_n}}}d_{k}^{\mathtt {w}_n} , \varrho _k^{\mathtt {w}_n}, \frac{_{_{1}}}{^{^{b_n}}}2\varvec{\mu }_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1} \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; \big (\big ( {\mathbf {G}}_{k} , \mathrm {d}_{k}, \varrho _{k} , 2{\mathbf {m}}_{k} \big ) \big )_{k\ge 1}\end{aligned}$$

weakly on \({\mathbb {G}}^{{\mathbb {N}}^*}\) equipped with the product topology, which easily implies (60).

End of proof of Theorem 2.8 We next make the following additional assumption: \(\sqrt{{\mathbf {j}}_n} / b_n \! \rightarrow \! 0\), and we complete the proof of Theorem 2.8. To that end, it is sufficient to prove that for all fixed \(k\! \ge \! 1\), the probability that \(\varvec{\mu }^{\mathtt {w}_n}_1 ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! 1})\!> \! \ldots \!> \! \varvec{\mu }^{\mathtt {w}_n}_k ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}) \!> \! \max _{j > k } \varvec{\mu }^{\mathtt {w}_n}_j ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! j})\) tends to 1 as \(n\! \rightarrow \! \infty \).

Recall from Lemma 5.7 that \({\mathbf {E}}[ Q^2_n] \le 4{\mathbf {j}}_n\); since \(\sqrt{{\mathbf {j}}_n} / b_n \! \rightarrow \! 0\), Markov’s inequality implies that \(Q_n / b_n \! \rightarrow \! 0\) in probability. Recall that

$$\begin{aligned}(b_n^{-1} \Phi _n (b_n \cdot ) , b_n^{-1} \phi _n (b_n \cdot )) \! \rightarrow \! (2\mathrm {Id}, \frac{_1}{^2} \mathrm {Id})\end{aligned}$$

in probability on \(({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2\). By Slutsky’s theorem, we get a joint convergence of

$$\begin{aligned}(b_n^{-1} Q_n, b_n^{-1} \Phi _n (b_n \cdot ) , b_n^{-1} \phi _n (b_n \cdot ))\end{aligned}$$

with (177). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence

$$\begin{aligned}&\big ( \frac{_{_{1}}}{^{^{b_n}}} Q_n , \frac{_{_{1}}}{^{^{b_n}}} \Phi _n (b_n \cdot ) , \frac{_{_{1}}}{^{^{b_n}}} \phi _n (b_n \cdot ) ; \big (\big ( \frac{_{_{a_n}}}{^{^{b_n}}} \mathtt {C}^{\mathtt {w}_n}_k (b_n \cdot ) , \, \frac{_{_{1}}}{^{^{b_n}}}{\overline{l}}_k^{n} , \, \frac{_{_{1}}}{^{^{b_n}}} {\overline{r}}_k^{n} , \, \frac{_{_{1}}}{^{^{b_n}}}{\overline{{\varvec{\Pi }}}}_k^{\mathtt {w}_n} \big ) \big )_{k\ge 1}\big ) \nonumber \\&\quad \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( 0,2\mathrm {Id}, \frac{_1}{^2} \mathrm {Id}; \big (\big ({\varvec{\mathtt {H}}}_k (\cdot /2), 2l_k, 2r_k, 2{\varvec{\Pi }}_k\big ) \big )_{k\ge 1}\big ) \end{aligned}$$
(180)

holds almost surely on the appropriate space.

Recall the notation \(\zeta _k \!= \! r_k - l_k\), \(\zeta ^{\mathtt {w}_n}_k \!= \! r^{\mathtt {w}_n}_k \! -\! l^{\mathtt {w}_n}_k={\mathbf {m}}^{\mathtt {w}_n}_k ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k})\) and set \({\overline{\zeta }}^n_k \!= \! {\overline{r}}^n_k \! - {\overline{l}}^n_k=\varvec{\nu }^{\mathtt {w}_n}_k ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k})\). First, we easily derive from the argument of the proof of (179) that \(\varvec{\nu }^{\mathtt {w}_n}_k ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}) = 2\varvec{\mu }^{\mathtt {w}_n}_k ({\varvec{\mathcal {G}}}^{\mathtt {w}_n}_{\! k}) +1\). Let \(\sigma _n\) be a permutation of \(\{ 1, \ldots , {\mathbf {q}}_{\mathtt {w}_n}\}\) such that \(({\overline{\zeta }}^n_{\sigma _n (k)})_{1\le k \le {\mathbf {q}}_{\mathtt {w}_n}}\) is nonincreasing. To complete the proof of Theorem 2.8, it is then sufficient to prove that for all \(k\! \ge \! 1\), there exists \(n_k\) such that for all \(n\! \ge \! n_k\), \(\sigma _n (k)\! =\! k\).

To prove that, we then fix \(k\! \ge \! 1\) and we recall that \(\zeta _1 \!> \! \ldots \!> \! \zeta _k\! >\! \zeta _{k+1}\) so it makes sense to fix \(\varepsilon \in (0, \infty )\) such that \( \varepsilon \! < \! \frac{_1}{^3} \min _{1\le j\le k} (\zeta _{j} \! -\! \zeta _{j+1})\). Observe first that (180) implies that for all \(j\ge 1\), \(b_n^{-1}\zeta ^{\mathtt {w}_n}_j\! \rightarrow \zeta _j\) and \(b_n^{-1}{\overline{\zeta }}^n_j\! \rightarrow 2\zeta _j\) almost surely. Therefore, there exists \(n_k \in {\mathbb {N}}\) such that for all \(n\! \ge \! n_k\),

$$\begin{aligned} b_n^{-1} (4Q_n +1)+ \max _{1\le j\le k+1} |b_n^{-1}{\overline{\zeta }}^n_j \! -\! 2\zeta _j| + \max _{1\le j\le k+1} |b_n^{-1}\zeta ^{\mathtt {w}_n}_j \! -\! \zeta _j| \; < \varepsilon \; . \end{aligned}$$
(181)

Then, we fix \(n\! \ge \! n_k\) and for all \(j \in \{ 1, \ldots , k\}\), Lemma 5.7 and (173) imply

$$\begin{aligned} {\overline{\zeta }}^n_{\sigma _n (j)}= & {} {\overline{r}}^{n}_{\sigma _n (j)} - {\overline{l}}^{n}_{\sigma _n (j)} = \Phi _n (r^{\mathtt {w}_n}_{\sigma _n (j)} ) \! -\! \Phi _n (l^{\mathtt {w}_n}_{\sigma _n (j)}) \\= & {} 2R^n (r^{\mathtt {w}_n}_{\sigma _n (j)}) \! -\! 2R^n (l^{\mathtt {w}_n}_{\sigma _n (j)}) - {\mathcal {H}}^{\mathtt {w}_n} (r^{\mathtt {w}_n}_{\sigma _n (j)}) + {\mathcal {H}}^{\mathtt {w}_n} (l^{\mathtt {w}_n}_{\sigma _n (j)}) \\= & {} 2R^n (r^{\mathtt {w}_n}_{\sigma _n (j)}) \! -\! 2R^n (l^{\mathtt {w}_n}_{\sigma _n (j)}) + 1 \overset{\text {by }(175)}{\le } 2 \zeta ^{\mathtt {w}_n}_{\sigma _n (j)} + 4 Q_n + 1 . \end{aligned}$$

Thus, \(2b_n^{-1}\zeta ^{\mathtt {w}_n}_{\sigma _n (j)}\! \ge \! b_n^{-1} {\overline{\zeta }}^n_{\sigma _n (j)} \! -\! \varepsilon \). Moreover,

$$\begin{aligned} b_n^{-1} {\overline{\zeta }}^n_{j}\ge & {} 2\zeta _j -\varepsilon {=} 2 (\zeta _j \! -\! \zeta _{j+1}) + 2\zeta _{j+1} -\varepsilon \ge 6 \varepsilon + (2\zeta _{j+1}+ \varepsilon ) -2\varepsilon \ge 4\varepsilon + b_n^{-1} {\overline{\zeta }}^n_{j+1} , \end{aligned}$$

which implies that \( {\overline{\zeta }}^{n}_1 \!> \! \ldots \! > \! {\overline{\zeta }}^{n}_k\) for all \(n\! \ge \! n_k\). Next, set \(S = \{ {\overline{\zeta }}^{n}_\ell ; 1 \le \ell \le {\mathbf {q}}_{\mathtt {w}_n} \}\); the previous inequality implies that, for all \(j \in \{ 1, \ldots , k\}\), \(\# (S\cap [ {\overline{\zeta }}^{n}_j, \infty )) \! \ge \! j = \# (S \cap [{\overline{\zeta }}^n_{\sigma _n (j)} , \infty )) \). It follows that \( {\overline{\zeta }}^n_{\sigma _n (j)} \! \ge \! {\overline{\zeta }}^n_{j}\), \(j \in \{ 1, \ldots , k\}\). Combined with the previous lower bound for \(2b_n^{-1}\zeta ^{\mathtt {w}_n}_{\sigma _n (j)}\), this implies that for all \(j \in \{ 1, \ldots , k\}\),

$$\begin{aligned} 2b_n^{-1}\zeta ^{\mathtt {w}_n}_{\sigma _n (j)}\! \ge \! b_n^{-1} {\overline{\zeta }}^n_{\sigma _n (j)} \! -\! \varepsilon> b_n^{-1} {\overline{\zeta }}^n_{j} \! -\! \varepsilon >2\zeta _j -4 \varepsilon . \end{aligned}$$

Consequently, \(b_n^{-1}\zeta ^{\mathtt {w}_n}_{\sigma _n (j)} \! >\! \zeta _j \! -\! 2\varepsilon \). This implies that \(\sigma _n (j) \le j\). Indeed, suppose that \(\sigma _n (j) \! \ge \! j+1\); thus \(\zeta ^{\mathtt {w}_n}_{j+1} \! \ge \! \zeta ^{\mathtt {w}_n}_{\sigma _n(j)}\) and the previous inequality combined with (181) would entail \(\zeta _{j+1} \! +\! \varepsilon \!>\! b_n^{-1}\zeta ^{\mathtt {w}_n}_{j+1} \! \ge \! b_n^{-1}\zeta ^{\mathtt {w}_n}_{\sigma _n(j)} \! >\! \zeta _j \! -\! 2\varepsilon \), which would contradict \( \varepsilon \! < \! \frac{_1}{^3} \min _{1\le \ell \le k} (\zeta _{\ell } \! -\! \zeta _{\ell +1})\). Thus, for all \(n\! \ge \! n_k\) and for all \(j \in \{ 1, \ldots , k\}\), \(\sigma _n (j) \le j\); since \(\sigma _n\) is injective, an immediate induction on j then yields \(\sigma _n (j) = j\) for all \(j \in \{ 1, \ldots , k\}\), which completes the proof. \(\square \)

6 Proof of Proposition 5.1

In this section, we prove Proposition 5.1 subject to Proposition 2.1 and Proposition 2.2, whose proofs are later given in Sect. 7.2. The proof of Proposition 5.1 relies upon the representations \(Y = X\circ \theta ^{\mathtt {b}}\), \({\mathcal {H}}= H\circ \theta ^{\mathtt {b}}\) and their discrete counterparts. Note that although the convergence of \((X^{\mathtt {w}_n}, H^{\mathtt {w}_n})\) is provided by Proposition 2.2, their joint convergence with \(\theta ^{\mathtt {b}, \mathtt {w}_{n}}\) in Skorokhod’s topology is a very delicate matter, as X and \(\theta ^{\mathtt {b}}\) share jumps. Therefore, we need to proceed with utmost care.

Let us first recall two general lemmas from Ethier & Kurtz [22] that we use several times.

Lemma 6.1

(Lemma 3.8.2 [22]) For all \(n \in {\mathbb {N}}\), let \(({\mathbf {s}}^n_k)_{k\in {\mathbb {N}}}\) be a nondecreasing \([0, \infty ]\)-valued sequence of r.v. such that \({\mathbf {s}}^n_0 = 0\), a.s. \(\lim _{k\rightarrow \infty } {\mathbf {s}}^n_k = \infty \) and \({\mathbf {s}}^n_k\! < \! {\mathbf {s}}^n_{k+1}\) for all \(k \in {\mathbb {N}}\) such that \({\mathbf {s}}^n_k\! < \! \infty \). Fix \(z \in (0, \infty )\) and set \({\mathbf {k}}_n = \max \{ k \in {\mathbb {N}}: {\mathbf {s}}^n_k \! < \! z \}\). Then

$$\begin{aligned}&\lim _{\eta \rightarrow 0+} \, \sup _{n\in {\mathbb {N}}} \, {\mathbf {P}}\Big ( \min _{\quad 0\le k\le {\mathbf {k}}_n} {\mathbf {s}}^n_{k+1}\! - \! {\mathbf {s}}^n_{k} \!< \eta \Big ) \! =\! 0\Longleftrightarrow \quad \lim _{\eta \rightarrow 0+} \, \sup _{n\in {\mathbb {N}}} \, \sup _{k\in {\mathbb {N}}} \, {\mathbf {P}}\big ( {\mathbf {s}}^n_k \!< \! z \, ; \, {\mathbf {s}}^n_{k+1} \! -\! {\mathbf {s}}^n_{k} \! < \! \eta \big ) = 0 . \end{aligned}$$

Proof

See Lemma 3.8.2 in Ethier & Kurtz [22] (p. 134). Note that Lemma 3.8.2 in [22] only deals with sequences that take finite values but the proof extends immediately to our case. \(\square \)

The previous lemma entails a tightness result for nondecreasing processes, which is a consequence of Proposition 3.8.3 in Ethier & Kurtz [22]. To recall this statement we need to introduce the following notation. Let \(y \in {\mathbf {D}}([0, \infty ), {\mathbb {R}})\), the space of càdlàg functions equipped with Skorokhod’s topology, and let \(z, \eta \in (0, \infty )\). Recall from (121) the notation \(w_z (y, \eta ) \) for the càdlàg modulus of continuity of \(y \in {\mathbf {D}}([0, \infty ), {\mathbb {R}})\). Assume that \(y(\cdot ) \) is nonnegative and nondecreasing; then, for all \(\varepsilon \in (0, \infty )\), we inductively define times \((\tau ^\varepsilon _k (y))_{k\in {\mathbb {N}}}\) by setting

$$\begin{aligned} \tau ^{\varepsilon }_0 (y) = 0 \quad \text {and} \quad \tau _{k+1}^\varepsilon (y) = \inf \big \{ t>\tau ^\varepsilon _k (y) \! : \, y(t)\! -\! y \big ( \tau ^\varepsilon _k (y) \big ) > \varepsilon \big \} , \end{aligned}$$
(182)

with the convention that \(\inf \emptyset = \infty \). Observe that if \(z\! >\! \eta \) and if \(w_z (y, \eta ) \! >\! \varepsilon \), then there exists \(k \! \ge \! 1\) such that \(\tau ^\varepsilon _k (y) \! \le \! z\) and \( \tau _{k}^\varepsilon (y) \! - \! \tau _{k-1}^\varepsilon (y) \! < \! \eta \). Indeed, set \(r = 1+ \max \{ k \in {\mathbb {N}}\! : \! \tau ^\varepsilon _k (y) \! < z \}\). Note that \(z\! >\! \eta \) and \(w_z (y, \eta ) \! >\! \varepsilon \) imply that \(r\! \ge \! 2\); then for all \(i \in \{ 0, \ldots , r\! -\! 1\}\), set \(t_i = \tau ^\varepsilon _i(y)\) and \(t_r = z\). By definition of the \(\tau ^\varepsilon _i(y)\), we get \(\max _{1\le i \le r } \mathtt {osc} (y, [t_{i-1} , t_i ) \, ) \! \le \varepsilon \). Since \(w_z (y, \eta ) \! >\! \varepsilon \), we necessarily get \(\min _{1\le i \le r-1} (t_i\! -\! t_{i-1}) \! < \! \eta \), which is the desired result. This observation combined with Lemma 3.8.2 of [22] (recalled above as Lemma 6.1) immediately entails the following.

Lemma 6.2

For all \(n \in {\mathbb {N}}\), let \((R^n_t)_{t\in [0, \infty )}\) be a càdlàg nonnegative and nondecreasing process. Then, the laws of the \(R^n\) are tight in \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\) if, for all \(t \in [0, \infty )\), the laws of the \(R^n_t\), \(n \in {\mathbb {N}}\), are tight on \({\mathbb {R}}\) and if for all \(z, \varepsilon \in (0, \infty )\) we have

$$\begin{aligned} \lim _{\eta \rightarrow 0+} \, \limsup _{n\in {\mathbb {N}}} \, \sup _{k\in {\mathbb {N}}} \, {\mathbf {P}}\big ( \tau ^\varepsilon _k (R^n) \!< \! z \, ; \, \tau ^\varepsilon _{k+1} (R^n) \! -\! \tau ^\varepsilon _k (R^n) \! < \! \eta \big ) = 0 . \end{aligned}$$
(183)

Proof

See the previous arguments or Lemma 3.8.1 and Proposition 3.8.3 in Ethier & Kurtz [22] (pp. 134-135). \(\square \)

Recall the definition of \(A^{\mathtt {w}_n}\) in (100). We immediately apply Lemma 6.2 in combination with the estimates in Lemma 3.5 to prove the tightness of a rescaled version of \(A^{\mathtt {w}_n}\).

Lemma 6.3

Let \((\alpha , \beta , \kappa , {\mathbf {c}})\) be as in (8). Let \(\psi \) in (9) satisfy \(\int ^\infty d\lambda / \psi (\lambda ) \! <\! \infty \). Let \(a_n , b_n \! \in \! (0, \infty )\) and \(\mathtt {w}_n \! \in \! {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21) and \(\mathbf {(C1)}\)–\(\mathbf {(C3)}\) as in (29) and (30). Then, the laws of \((\frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n t})_{t\in [0,\infty )}\) are tight on \({\mathbf {D}}([0,\infty ), {\mathbb {R}})\).

Proof

We repeatedly use the following estimates on Poisson r.v. N with mean \(r \in (0, \infty )\):

$$\begin{aligned} {\mathbf {E}}\big [ (N\! -\! 1)_+\big ]\! =\! e^{-r} \! -\! 1 + r \quad \text {and} \quad \mathbf {var} \big ( (N\! -\! 1)_+\big )\! =\! r^2 \! -\! (e^{-r} \! -\! 1 + r)(e^{-r} + r) \le r^2 . \end{aligned}$$
(184)
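
For the reader’s convenience, here is a short verification of (184) (it is elementary and not used elsewhere). Writing \((N\! -\! 1)_+ = N\! -\! 1 + \mathbf{1}_{\{ N = 0\}}\) and \((N\! -\! 1)_+^2 = (N\! -\! 1)^2 - \mathbf{1}_{\{ N= 0\}}\), we get

$$\begin{aligned} {\mathbf {E}}\big [ (N\! -\! 1)_+\big ] = r\! -\! 1 + e^{-r} \quad \text {and} \quad {\mathbf {E}}\big [ (N\! -\! 1)_+^2\big ] = r + (r\! -\! 1)^2 - e^{-r} , \end{aligned}$$

so that \(\mathbf {var} \big ( (N\! -\! 1)_+\big ) = r + (r\! -\! 1)^2 - e^{-r} - (r\! -\! 1 + e^{-r})^2 = r^2 - (e^{-r}\! -\! 1 + r)(e^{-r} + r) \le r^2\).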

By the definition (100), we get \( {\mathbf {E}}[A^{\mathtt {w}_n}_t ] = \sum _{j\ge 1} w^{_{(n)}}_{^j} (e^{-tw^{_{(n)}}_{^j} / \sigma _1 (\mathtt {w}_n)} \! -\! 1 + \frac{ tw^{_{(n)}}_{^j}}{\sigma _1 (\mathtt {w}_n)} ) \le \frac{ t^2 \sigma _3 (\mathtt {w}_n)}{2\sigma _1 (\mathtt {w}_n)^2}\). Thus, by \(\mathbf {(C1)}\)–\(\mathbf {(C3)}\) and the Markov inequality, we get

$$\begin{aligned} \limsup _{n\rightarrow \infty } {\mathbf {P}}\Big (\frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n t} \ge x \Big ) \le \frac{_1}{^2} x^{-1}t^2 \kappa \big ( \kappa \sigma _3 ({\mathbf {c}}) + \beta \big ) \underset{x\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } 0. \end{aligned}$$

This shows that for any \(t \in [0, \infty )\), the laws of the \(\frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n t}\) are tight on \({\mathbb {R}}\).

We next prove (183) with \(R^n_t = \frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n t}\), \(t \in [0, \infty )\). To that end, we fix \(z, \varepsilon \in (0, \infty )\) and \(k \in {\mathbb {N}}\), and we set \(T_n\! :=\! \tau ^\varepsilon _k (R^n) \). Then, (118) in Lemma 3.5 with \(a = a_n \varepsilon \), \(T \! =\! b_nT_n\), \(t = b_n \eta \) and \(t_0 = b_n z\) implies the following:

$$\begin{aligned} {\mathbf {P}}\big ( \tau ^\varepsilon _k (R^n) \!< & {} \! z \, ; \, \tau ^\varepsilon _{k+1} (R^n) \! -\! \tau ^\varepsilon _k (R^n) \!< \! \eta \big ) = {\mathbf {P}}\big ( b_nT_n \! < \! b_n z \, ; \, A^{\mathtt {w}_n}_{b_n T_n+ b_n \eta }\! -\! A^{\mathtt {w}_n}_{b_n T_n} \! > \! a_n \varepsilon \big ) \\&\quad \le \, (a_n\varepsilon )^{-1} b_n \eta \big ( b_n z + \frac{_1}{^2} b_n \eta \big ) \frac{\sigma _3 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)^2} \\&\quad \le \,\varepsilon ^{-1} \eta (z + \eta ) \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \frac{b_n \sigma _3 (\mathtt {w}_n)}{a_n^2\sigma _1 (\mathtt {w}_n)} . \end{aligned}$$

Then \(\mathbf {(C1)}\)–\(\mathbf {(C3)}\) entail (183) and Lemma 6.2 completes the proof. \(\square \)

Recall the definition of \(X^{\mathtt {b}, \mathtt {w}}\) in (98) and that of the Poisson processes \(N^{\mathtt {w}}_j (\cdot )\), \(j\! \ge \! 1\) in (99). Recall also the definition of \(X^{\mathtt {b}}\) in (142) and that of the Poisson processes \(N_j (\cdot )\), \(j\! \ge \! 1\).

Lemma 6.4

Under the assumptions of Lemma 6.3, the following convergence

$$\begin{aligned} \big (\big ( \frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b},\mathtt {w}_n}_{b_n t} \big )_{t\in [0, \infty )} , (N^{\mathtt {w}_n}_j(b_n t))_{t\in [0, \infty )} ; \, j\ge 1\, \big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( X^\mathtt {b}, N_j ; \, j\ge 1 \big ) \end{aligned}$$
(185)

holds weakly on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^{\mathbb {N}}\) equipped with the product topology.

Proof

Let \(u \in {\mathbb {R}}\). Note that

$$\begin{aligned}{\mathbf {E}}\big [ \! \exp (i u N^{\mathtt {w}_n}_j(b_n t))\big ] = \exp (- tb_n w^{_{(n)}}_{^j}\! (1\! -\! e^{iu }) / \sigma _1 (\mathtt {w}_n) ) \! \longrightarrow \exp (- t\kappa c_j (1\! -\! e^{iu }) )\end{aligned}$$

by (21) and \(\mathbf {(C3)}\). Thus, for all \(t \in [0, \infty )\), \(N^{\mathtt {w}_n}_j(b_n t)\! \rightarrow \! N_j (t)\) in law. Next, fix \(k\! \ge \! 1\) and set, for \(t \in [0, \infty )\),

$$\begin{aligned} Q^n_t = \frac{_1}{^{a_n}}X^{\mathtt {b}, \mathtt {w}_n}_{b_n t} \! -\! \sum _{1 \le j\le k} a_n^{-1}w^{_{(n)}}_{^j} N^{\mathtt {w}_n}_j (b_nt) \quad \text {and} \quad Q_t = X^{\mathtt {b}}_t \! -\! \sum _{1\le j\le k} c_j N_j (t) \; . \end{aligned}$$

Since we assume that Proposition 2.1 holds true, \(\frac{_1}{^{a_n}}X^{\mathtt {b}, \mathtt {w}_n}_{b_n t} \! \rightarrow \! X^\mathtt {b}_t\) weakly on \({\mathbb {R}}\). Since \(Q^n_t\) (resp. \(Q_t\)) is independent of \((N^{\mathtt {w}_n}_j)_{1\le j\le k}\) (resp. independent of \((N_j)_{1\le j\le k}\)), we easily check

$$\begin{aligned} {\mathbf {E}}\big [ e^{iu Q^n_t }\big ]= & {} {\mathbf {E}}\big [ e^{iu X^{\mathtt {b}, \mathtt {w}_n}_{b_n t}/ a_n}\big ] \big / \!\! \!\! \prod _{1\le j \le k} {\mathbf {E}}\big [ e^{i u \frac{w^{_{(n)}}_{^j}}{a_n} N^{\mathtt {w}_n}_j(b_n t)} \big ]\underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } {\mathbf {E}}\big [ e^{iu X^{\mathtt {b}}_{t}}\big ] \big / \prod _{1\le j \le k} {\mathbf {E}}\big [e^{i u c_j N_j(t)} \big ] \! =\! {\mathbf {E}}\big [ e^{iuQ_t}\big ]. \end{aligned}$$

Thus, \(Q^n_t \! \rightarrow \! Q_t\) weakly on \({\mathbb {R}}\). Since Lévy processes weakly converge in \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\) if and only if unidimensional marginals weakly converge on \({\mathbb {R}}\) (see Lemma B.8 for precise references), we get \(Q^n \! \rightarrow \! Q\) and for all \(j\! \ge \! 1\), \(N^{\mathtt {w}_n}_j(b_n \cdot )\! \rightarrow \! N_j\), weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\).

Since \(Q^n, N^{\mathtt {w}_n}_1, \ldots , N^{\mathtt {w}_n}_k\) are independent Lévy processes, they have a.s. no common jump-times and Lemma B.2 asserts that

$$\begin{aligned}(Q^n_t, N^{\mathtt {w}_n}_1(b_n t), \ldots , N^{\mathtt {w}_n}_k(b_n t))_{t\in [0, \infty )} \! \longrightarrow \! (Q, N_1, \ldots , N_k) \; \, \text {weakly on }{\mathbf {D}}([0, \infty ), {\mathbb {R}}^{k+1} ).\end{aligned}$$

Since \(X^{\mathtt {b}, \mathtt {w}_n}\) is a linear combination of \(Q^n\) and the \((N^{\mathtt {w}_n}_j)_{1\le j\le k}\), we get

$$\begin{aligned}\big ( (\frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b}, \mathtt {w}_n}_{b_n t}, N^{\mathtt {w}_n}_1(b_n t), \ldots , N^{\mathtt {w}_n}_k(b_n t) \big )_{t\in [0, \infty )} \! \longrightarrow \! (X^\mathtt {b}, N_1, \ldots , N_k) \; \, \text {weakly on }{\mathbf {D}}([0, \infty ), {\mathbb {R}}^{k+1} ),\end{aligned}$$

which implies the weaker statement: \((\frac{_1}{^{a_n}}X^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot }, N^{\mathtt {w}_n}_1(b_n \cdot ), \ldots , N^{\mathtt {w}_n}_k(b_n \cdot )) \! \longrightarrow \! (X^\mathtt {b}, N_1, \ldots , N_k)\), weakly on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^{k+1}\) equipped with the product topology. Since it holds true for all k, an elementary result (see Lemma B.7) entails (185). \(\square \)

Recall the definition of \(A^\mathtt {w}\) in (100) and recall the definition of A in (143).

Lemma 6.5

Under the assumptions of Lemma 6.3, we have

$$\begin{aligned} \big (\big ( \frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b},\mathtt {w}_n}_{b_n t} \big )_{t\in [0, \infty )}, \big ( \frac{_{_1}}{^{^{a_n}}} A^{\mathtt {w}_n}_{b_n t}\big )_{t\in [0, \infty )} \big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( X^\mathtt {b} , A \big ) \quad \text {weakly on }({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2. \end{aligned}$$
(186)

Proof

Lemma 6.3 and Lemma 6.4 imply that the laws of \((\frac{_{1}}{^{{a_n}}}A^{\mathtt {w}_n}_{b_n \cdot }, \frac{_{1}}{^{{a_n}}}X^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot } , N^{\mathtt {w}_n}_j (b_n \cdot ) ; j\! \ge \! 1)\) are tight on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^{\mathbb {N}}\) equipped with the product topology. We want to prove that there is a unique limit law: let \((n(p))_{p\in {\mathbb {N}}}\) be an increasing sequence of integers such that

$$\begin{aligned} (\frac{_{1}}{^{{a_{n(p)}}}}A^{\mathtt {w}_{n(p)}}_{b_{n(p)} \cdot }, \frac{_{1}}{^{{a_{n(p)}}}}X^{\mathtt {b},\mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } , N^{\mathtt {w}_{n(p)}}_j (b_{n(p)} \cdot ) ; j\! \ge \! 1) \underset{p\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( A^\prime , X^\mathtt {b} , N_j ; j\ge 1\big ) , \end{aligned}$$
(187)

holds weakly on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^{\mathbb {N}}\). Since \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^{\mathbb {N}}\) equipped with the product topology is a Polish space, Skorokhod’s representation theorem applies and without loss of generality (but with a slight abuse of notation), we can assume that (187) holds true \({\mathbf {P}}\)-almost surely on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^{\mathbb {N}}\).

Recall that \(A_t = \frac{_{_1}}{^{^2}} \kappa \beta t^2 + \sum _{j\ge 1} c_j \big ( N_j (t) \! -\! 1 \big )_+\), \(t \in [0, \infty )\). Then, to prove (186), we claim that it is sufficient to prove that for all \(t \in [0, \infty )\),

$$\begin{aligned} \frac{_{1}}{^{{a_{n(p)}}}}A^{\mathtt {w}_{n(p)}}_{b_{n(p)} t} \! \longrightarrow \! A_t \quad \text {in probability.} \end{aligned}$$
(188)

Indeed, let t be such that \(\Delta A^\prime _t = \Delta A_t = 0\) and let \(q, q^\prime \) be rational numbers such that \(q\!< \! t \! < \! q^\prime \); thus, \(A^{\mathtt {w}_{n(p)}}_{b_{n(p)} q} \le A^{\mathtt {w}_{n(p)}}_{b_{n(p)} t} \le A^{\mathtt {w}_{n(p)}}_{b_{n(p)} q^\prime }\); since \(\Delta A^\prime _t = 0\), we get a.s. \(A^{\mathtt {w}_{n(p)}}_{b_{n(p)} t}/ a_{n(p)} \! \rightarrow \! A^\prime _t\); the convergence in probability (188) applied at the rational times q and \(q^\prime \) then entails that \(A_q \le A^\prime _t \le A_{q^\prime }\); since it holds true for all rational numbers \(q,q^\prime \) such that \(q\!< \! t \! < \! q^\prime \), we get \(A_{t-} \le A^\prime _t \le A_t\), which implies \(A_t \! = \! A^\prime _t\) since \(\Delta A_t = 0\). Thus, a.s. A and \(A^\prime \) coincide on the dense subset \(\{ t \in [0, \infty ): \Delta A^\prime _t = \Delta A_t = 0\}\): it entails that a.s. \(A = A^\prime \) and the law of \((A, X^\mathtt {b}, N_j ; j\ge 1)\) is the unique weak limit of the laws of \((\frac{_{1}}{^{{a_n}}}A^{\mathtt {w}_n}_{b_n \cdot }, \frac{_{1}}{^{{a_n}}}X^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot } , N^{\mathtt {w}_n}_j (b_n \cdot ) ; j\! \ge \! 1)\).

Let us prove (188). To simplify notation let \(\mathtt {v}_n \in {\ell }^{_{\, \downarrow }}_f\) be defined by

$$\begin{aligned} v_j^{(n)}= w_j^{(n)}/ a_n\, ,\quad \text {for }j \in {\mathbb {N}}^*. \end{aligned}$$
(189)

By \(\mathbf {(C3)}\), \(v_j^{(n)} \! \rightarrow \! c_j\); by (21) and \(\mathbf {(C2)}\), \(b_n/ \sigma _1 (\mathtt {v}_n) \! \rightarrow \! \kappa \) and \(\sigma _3 (\mathtt {v}_n)\! \rightarrow \! \sigma _3 ({\mathbf {c}}) + \beta / \kappa \). We next claim that there exists \(j_n \rightarrow \infty \) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\! v^{(n)}_{j_n} = 0, \quad \lim _{ n\rightarrow \infty } \sum _{\quad 1\le j \le j_n} (v_{^j}^{_{(n)}})^3 = \sigma _3 ({\mathbf {c}}) \quad \!\! \text {and} \quad \!\! \lim _{n\rightarrow \infty } \sum _{j > j_n} (v_{^j}^{_{(n)}})^3\! =\! \beta / \kappa . \end{aligned}$$
(190)

Proof of (190). Indeed, suppose first that \(\sup \{ j\! \ge \! 1\! : \! c_j \! >\! 0 \} = \infty \) and set \(j_n \! =\! \sup \big \{ j\ge 1 \! :\! {v_{^j}^{_{(n)}}} \! >\! 0 \; \text {and} \; \sum _{1\le i\le j} (v_{^i}^{_{(n)}})^3 \le \sigma _3 ({\mathbf {c}} )\big \}\), with the convention that \(\sup \emptyset = 0\). Here \(j_n \! \rightarrow \! \infty \), and it is easy to check that it satisfies (190).

Next suppose that \(j_* = \sup \{ j\! \ge \! 1\! : \! c_j \! >\! 0 \} \! < \! \infty \). Clearly \(\sum _{1\le j\le j_*} (v^{_{(n)}}_{^j})^3\! \rightarrow \! \sigma _3 ({\mathbf {c}})\) and \(\sum _{j> j_*} (v^{_{(n)}}_{^j})^3\! \rightarrow \! \beta /\kappa \). Since for all \(j\! >\! j_*\), \(v^{_{(n)}}_{^j}\! \rightarrow \! 0\), it is possible to find a sequence \((j_n)\) that tends to \(\infty \) sufficiently slowly to get \(\sum _{ j_*< j\le j_n} \! (v^{_{(n)}}_{^j})^3 \! \rightarrow \! 0\), which implies (190). \(\square \)

Next, we use (190) to prove (188). To that end, we fix \(t \in [0, \infty )\) and we fix \(k \in {\mathbb {N}}\) that will be specified later; since \(j_n \! \rightarrow \! \infty \), we can assume p is such that \(k\! < \! j_{n(p)}\). To simplify, we set \(\xi ^{p}_j = v^{(n(p))}_j \big (N^{\mathtt {w}_{n(p)}}_j (b_{n(p)} t)\! -\! 1 \big )_+\) and \(\xi _j = c_j \big (N_j (t)\! -\! 1 \big )_+\) and

$$\begin{aligned} D^{k,p}_t= & {} \sum _{1\le j\le k} \!\!\! \xi _j^p \! -\! \xi _j, \; \, R^{k,p}_t = \!\! \! \!\! \sum _{\; k< j \le j_{n(p)}} \!\!\! \!\! \!\! \xi ^p_j - \sum _{j>k} \xi _j , \; \, C^{p}_t = \!\! \! \!\! \!\! \sum _{\quad j>j_{n(p)}} \!\! \!\! \!\! \xi _j^p \! -\! {\mathbf {E}}[\xi ^p_j] \quad \text {and} \; \, d_p(t)\! =\! \frac{_1}{^2}\kappa \beta t^2 \! -\!\! \!\! \!\! \!\! \sum _{\quad j>j_{n(p)}}\!\! \!\! \!\! {\mathbf {E}}[\xi ^p_j]. \end{aligned}$$

Thus, \(A^{\mathtt {w}_{n(p)}} (b_{n(p)} t )/ a_{n(p)} - A_t = D^{k,p}_t + R^{k,p}_t + C^{p}_t - d_p (t)\) and we prove that each term on the right-hand side goes to 0 in probability.

We first show that \(d_p (t) \! \rightarrow \! 0\). Since \(N^{_{\mathtt {w}_{n(p)}}}_{^j} (b_{n(p)} t)\) is a Poisson r.v. with mean \(r_{p,j}\) that is equal to \(v^{_{(n(p))}}_{^j} b_{n(p)} t/\sigma _1 (\mathtt {v}_{n(p)})\), by (184) we get \({\mathbf {E}}\big [ \xi _j^p\big ] = v^{_{(n(p))}}_{^j} \big ( e^{-r_{p,j}}\! -\! 1+ r_{p,j})\). We next use the following elementary inequality, valid for \(y \in [0, \infty )\),

$$\begin{aligned} 0 \le \frac{_1}{^2} y^2 - \big ( e^{-y} \! -\! 1 + y\big ) \le \frac{_1}{^2}y^2 (1\! -\! e^{-y}) \le \frac{_1}{^2} \, y^2 \! \wedge \! y^3 \; , \end{aligned}$$
(191)

that holds true since \(y^{-2} (e^{-y} \! -\! 1 + y) = \int _0^1 \! dv \int _0^v \! dw \, e^{-wy}\), so that \(\frac{_1}{^2} y^2 \! -\! (e^{-y} \! -\! 1 + y) = y^2 \! \int _0^1 \! dv \int _0^v \! dw \, (1\! -\! e^{-wy}) \le \frac{_1}{^2} y^2 (1\! -\! e^{-y})\). Thus,

$$\begin{aligned} 0\le & {} \sum _{\quad j>j_{n(p)}} \!\! \!\! \!\! \frac{_1}{^2}v^{_{(n(p))}}_{^j}\! r_{p,j}^2 \! \! -\! {\mathbf {E}}\big [ \xi _j^p\big ] \le \! \!\! \!\! \!\! \sum _{\quad j>j_{n(p)}} \!\! \!\! \!\! \frac{_1}{^2} v^{_{(n(p))}}_{^j} r_{p,j}^3 \le \frac{_1}{^2} v^{_{(n(p))}}_{^{j_{n(p)}}} \frac{_{(b_{n(p)}t)^3}}{^{\sigma _1 (\mathtt {v}_{n(p)})^3}} \! \!\! \!\! \!\! \sum _{\quad j>j_{n(p)}} \!\! \!\! \!\! (v^{_{(n(p))}}_{^j} )^3 \longrightarrow 0, \end{aligned}$$

by (190). Next, note that \( \sum _{ j>j_{n(p)}}\! v^{_{(n(p))}}_{^j} r_{p,j}^2 \! =\! \big (b_{n(p)}t/ \sigma _1(\mathtt {v}_{n(p)})\big )^2 \sum _{ j>j_{n(p)}} \! (v^{_{(n(p))}}_{^j} )^3\! \longrightarrow \! \kappa \beta t^2\), which implies that \(d_p (t) \! \rightarrow 0\) as \(p\! \rightarrow \! \infty \).

We next consider \(C^p_t\): by (184), \(\mathbf {var} (\xi ^p_j) \le (v^{_{(n(p))}}_{^j})^2 r_{p,j}^2\). Since the \(\xi ^p_j\) are independent, we get

$$\begin{aligned} {\mathbf {E}}\big [ (C^p_t)^2\big ]\! =\! \!\! \! \!\! \!\! \sum _{\quad j>j_{n(p)}} \!\! \!\! \!\! \mathbf {var} (\xi ^p_j) \le v^{_{(n(p))}}_{^{j_{n(p)}}} \frac{_{(b_{n(p)}t)^2}}{^{\sigma _1 (\mathtt {v}_{n(p)})^2}} \! \!\! \!\! \!\! \sum _{\quad j>j_{n(p)}} \!\! \!\! \!\! (v^{_{(n(p))}}_{^j} )^3 \longrightarrow 0 \end{aligned}$$

by (190), which proves that \(C^p_t \! \rightarrow \! 0\) in probability when \(p\! \rightarrow \! \infty \).

We next deal with \(R^{k,p}_t\). By (184), (190) and (191), we first get

$$\begin{aligned} 0\le & {} \sum _{\quad k< j\le j_{n(p)}} \!\! \!\! \!\! {\mathbf {E}}\big [ \xi _j^p\big ] \! \le \! \! \!\! \!\! \!\! \sum _{\quad k< j\le j_{n(p)}} \!\! \!\! \!\! \frac{_1}{^2} v^{_{(n(p))}}_{^j} r_{p,j}^2 = \frac{_1}{^2} \frac{_{(b_{n(p)}t)^2}}{^{\sigma _1 (\mathtt {v}_{n(p)})^2}} \! \!\! \!\! \!\! \!\!\! \sum _{\quad k< j\le j_{n(p)}} \!\! \!\! \!\! (v^{_{(n(p))}}_{^j} )^3\underset{p\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \frac{_1}{^2} (\kappa t)^2 \sum _{ j >k} c_j^3.\nonumber \\ \end{aligned}$$
(192)

Similarly, observe that \({\mathbf {E}}[\xi _j] = c_j \big (e^{-\kappa tc_j} \! -\! 1 + \kappa t c_j \big )\le \frac{_1}{^2} (\kappa t)^2 c_j^3\). This inequality combined with (192) entails that

$$\begin{aligned} \limsup _{p\rightarrow \infty } {\mathbf {E}}\big [ |R^{k,p}_t|\big ] \le (\kappa t)^2 \sum _{ j >k} c_j^3 \; \underset{k\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow }\; 0 . \end{aligned}$$
(193)
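
In more detail (a small intermediate step, which only uses the triangle inequality and the two bounds just obtained): since the \(\xi ^p_j\) and the \(\xi _j\) are nonnegative,

$$\begin{aligned} {\mathbf {E}}\big [ |R^{k,p}_t| \big ] \, \le \!\!\! \sum _{\quad k< j\le j_{n(p)}} \!\!\! {\mathbf {E}}\big [ \xi _j^p\big ] \, + \, \sum _{j>k} {\mathbf {E}}[\xi _j] \, \le \!\!\! \sum _{\quad k< j\le j_{n(p)}} \!\!\! {\mathbf {E}}\big [ \xi _j^p\big ] \, + \, \frac{_1}{^2} (\kappa t)^2 \sum _{ j >k} c_j^3 \; , \end{aligned}$$

and taking \(\limsup _{p\rightarrow \infty }\) and using (192) yields (193).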

Finally, we consider \(D^{k,p}\). Since a.s. t is not a jump-time of \(N_j\), a.s. \(v^{_{n(p)}}_{^j}(N^{\mathtt {w}_{n(p)}}_j (b_{n(p)} t) \! -\! 1)_+ \! \! \rightarrow \! c_j (N_j (t) \! -\! 1)_+ \). Thus, for all \(k \in {\mathbb {N}}\), a.s. \(D^{k,p}_t \! \rightarrow \! 0\). These limits combined with (193) (and with the convergence to 0 in probability of \(C^p_t\) and \(d_p(t)\)) easily imply (188), which completes the proof of the lemma. \(\square \)

Recall the definition of \(Y^\mathtt {w}\) in (100) and that of Y in (143).

Lemma 6.6

Under the assumptions of Lemma 6.3, we have

$$\begin{aligned}&\big (\big ( \frac{_1}{^{a_n}}X^{\mathtt {b},\mathtt {w}_n}_{b_n t}\! , \frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n t} , \frac{_1}{^{a_n}}Y^{\mathtt {w}_n}_{b_n t}\big )\big )_{t\in [0, \infty )}\overset{_{\text {weakly}}}{\underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow }} \big ( (X^\mathtt {b}_t , A_t, Y_t) \big )_{t\in [0, \infty )} \; \, \text {in }{\mathbf {D}}([0, \infty ), {\mathbb {R}}^3).\nonumber \\ \end{aligned}$$
(194)

Proof

Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence in (186) holds true \({\mathbf {P}}\)-almost surely. We first prove that \(( (\frac{_1}{^{a_n}}X^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot }\! , \frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n \cdot }))\! \rightarrow \! ((X^\mathtt {b}, A))\) a.s. in \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^2)\) thanks to Lemma B.1 (iii). To that end, first recall that by definition, the jumps of A (resp. of \(A^{\mathtt {w}_n}\)) are jumps of \(X^{\mathtt {b}}\) (resp. of \(X^{\mathtt {b}, \mathtt {w}_n}\)): namely if \(\Delta A_t \! >\! 0\), then \(\Delta X^{\mathtt {b}}_t = \Delta A_t \). The same holds true for \(A^{\mathtt {w}_n}\) and \(X^{\mathtt {b}, \mathtt {w}_n}\).

Let \(t \in (0, \infty )\). First suppose that \(\Delta A_t \! >\! 0\). Thus, \(\Delta X^\mathtt {b}_t = \Delta A_t\). By Lemma B.1 (i), there exists a sequence of times \(t_n \! \rightarrow \! t\) such that \(\frac{_1}{^{a_n}} \Delta A^{\mathtt {w}_n}_{b_n t_n } \! \rightarrow \! \Delta A_t\). Thus, for all sufficiently large n, \(\frac{_1}{^{a_n}} \Delta A^{\mathtt {w}_n}_{b_n t_n }\!> \! 0\), which entails \(\frac{_1}{^{a_n}} \Delta A^{\mathtt {w}_n}_{b_n t_n }\! =\! \frac{_1}{^{a_n}} \Delta X^{\mathtt {b}, \mathtt {w}_n}_{b_n t_n }\) and we get \( \frac{_1}{^{a_n}} \Delta X^{\mathtt {b}, \mathtt {w}_n}_{b_n t_n }\! \rightarrow \! \Delta A_t = \Delta X^\mathtt {b}_t\). Suppose next that \(\Delta A_t \! =\! 0\); by Lemma B.1 (i), there exists a sequence of times \(t_n \! \rightarrow \! t\) such that \( \frac{_1}{^{a_n}} \Delta X^{\mathtt {b}, \mathtt {w}_n}_{b_n t_n }\! \rightarrow \! \Delta X^\mathtt {b}_t\). Since \(\Delta A_t \! =\! 0\), Lemma B.1 (ii) entails that \(\frac{_1}{^{a_n}} \Delta A^{\mathtt {w}_n}_{b_n t_n } \! \rightarrow \! \Delta A_t \! =\! 0\). In both cases, we have proved that for all \(t \in (0, \infty )\), there exists a sequence of times \(t_n \! \rightarrow \! t\) such that \( \frac{_1}{^{a_n}} \Delta X^{\mathtt {b}, \mathtt {w}_n}_{b_n t_n }\! \rightarrow \! \Delta X^\mathtt {b}_t\) and \(\frac{_1}{^{a_n}} \Delta A^{\mathtt {w}_n}_{b_n t_n } \! \rightarrow \! \Delta A_t\): by Lemma B.1 (iii), it implies that \(( (\frac{_1}{^{a_n}}X^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot }\! , \frac{_1}{^{a_n}}A^{\mathtt {w}_n}_{b_n \cdot }))\! \rightarrow \! ((X^\mathtt {b}, A))\) a.s. in \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^2)\). This entails (194), since the function \((x, a) \in {\mathbb {R}}^2\! \mapsto \! (x, a, x\! -\! a) \in {\mathbb {R}}^3\) is Lipschitz and since \(X^{\mathtt {b}, \mathtt {w}_n}\! -\! A^{\mathtt {w}_n} = Y^{\mathtt {w}_n}\) and \(X^{\mathtt {b}}\! -\! A = Y\). \(\square \)

Recall that \(X^{\mathtt {r}, \mathtt {w}}\) (resp. \(X^\mathtt {r}\)) is an independent copy of \(X^{\mathtt {b}, \mathtt {w}}\) (resp. of \(X^\mathtt {b}\)). Recall the definition of \(\gamma ^{\mathtt {r}, \mathtt {w}}\) (resp. of \(\gamma ^{\mathtt {r}}\)) in (101) (resp. in (131)). Recall that \(I_t^{\mathtt {r}, \mathtt {w}} = \inf _{s\in [0, t] } X_s^{\mathtt {r}, \mathtt {w}}\) and recall the notation \(I_\infty ^{\mathtt {r}, \mathtt {w}} = \lim _{t\rightarrow \infty } I_t^{\mathtt {r}, \mathtt {w}}\). Similarly, recall that \(I_t^{\mathtt {r}} = \inf _{s\in [0, t] } X_s^{\mathtt {r}}\) and recall the notation \(I_\infty ^{\mathtt {r}} = \lim _{t\rightarrow \infty } I_t^{\mathtt {r}}\). Recall the definition of \({\overline{\gamma }}^{\mathtt {r}}\) in (135) in Lemma 4.1. We also set

$$\begin{aligned} {\overline{\gamma }}^{\mathtt {r}, \mathtt {w}}_{x} = \gamma ^{\mathtt {r}, \mathtt {w}}_x \quad \text {if }x\! <\! -I^{\mathtt {r}, \mathtt {w}}_\infty \quad \text {and} \quad {\overline{\gamma }}^{\mathtt {r}, \mathtt {w}}_{x} = \gamma ^{\mathtt {r}, \mathtt {w}} ((-I^{\mathtt {r}, \mathtt {w}}_\infty ) -) \quad \text {if }x\! \ge \! -I^{\mathtt {r}, \mathtt {w}}_\infty . \end{aligned}$$
(195)


Lemma 6.7

Under the assumptions of Lemma 6.3, we have

$$\begin{aligned}&\big ( \big ( \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {r}, \mathtt {w}_n}_{b_n t}\big )_{t\in [0, \infty )} , \big ( \frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n}_{a_n x} \big )_{x\in [0, \infty )}, -\frac{_{_1}}{^{^{a_n}}} I^{\mathtt {r}, \mathtt {w}_n}_{\infty } \big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( X^\mathtt {r}, {\overline{\gamma }}^{\mathtt {r} } , -I^{\mathtt {r}}_{\infty } \big )\nonumber \\ \end{aligned}$$
(196)

weakly on \(({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2\! \times \! [0, \infty ]\).

Proof

Let \({\widetilde{\gamma }}^n\) (resp. \({\widetilde{\gamma }}\)) be a conservative subordinator with Laplace exponent \(a_n \psi _{\mathtt {w}_n}^{-1} (\cdot / b_n)- a_n \varrho _{\mathtt {w}_n}\) (resp. \(\psi ^{-1} (\cdot ) \! - \! \varrho \)). By (33) in Proposition 2.1 , \(a_n \psi _{\mathtt {w}_n}^{-1} (\lambda / b_n)\! -\! a_n \varrho _{\mathtt {w}_n} \! \rightarrow \! \psi ^{-1} (\lambda ) \! - \! \varrho \) for all \(\lambda \in [0, \infty )\), which implies that for all \(x \in [0, \infty )\), \({\widetilde{\gamma }}^n_x \! \rightarrow \! {\widetilde{\gamma }}_x\) weakly on \([0, \infty )\). Since the \({\widetilde{\gamma }}^n\) are Lévy processes, Theorem B.8 entails that \({\widetilde{\gamma }}^n \! \rightarrow \! {\widetilde{\gamma }}\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\). Let \({\mathcal {E}}_n\) (resp. \({\mathcal {E}}\)) be an exponentially distributed r.v. with parameter \(a_n \varrho _{\mathtt {w}_n}\) (resp. \(\varrho \)) that is independent of \({\widetilde{\gamma }}^n\) (resp. of \({\widetilde{\gamma }}\)), with the convention that a.s. \({\mathcal {E}}_n = \infty \) if \(\varrho _{\mathtt {w}_n} = 0\) (resp. a.s. \({\mathcal {E}}= \infty \) if \(\varrho = 0\)). We then get \(({\widetilde{\gamma }}^n, {\mathcal {E}}_n) \! \rightarrow \! ({\widetilde{\gamma }}, {\mathcal {E}})\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\! \times \! [0, \infty ]\). An easy application of Lemma B.4 (i) entails that \(\big (({\widetilde{\gamma }}^n_{x\wedge {\mathcal {E}}_n})_{x\in [0, \infty )}, {\mathcal {E}}_n \big ) \! \rightarrow \! \big (({\widetilde{\gamma }}_{x\wedge {\mathcal {E}}})_{x\in [0, \infty )}, {\mathcal {E}}\big ) \) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\times [0, \infty ]\). By (136), we get \(\big ( \frac{1}{{b_n}} {\overline{\gamma }}^{\mathtt {r}, \mathtt {w}_n}_{a_n \cdot } , -\frac{{1}}{{{a_n}}} I^{\mathtt {r}, \mathtt {w}_n}_{\infty } \big ) \! \rightarrow \! \big ( {\overline{\gamma }}^{\mathtt {r}}_{\cdot } , - I^{\mathtt {r}}_{\infty }\big )\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\times [0, \infty ]\). Under our assumptions, Proposition 2.1 implies that \(\frac{{1}}{{{a_n}}} X^{\mathtt {r}, \mathtt {w}_n}_{b_n \cdot } \! \rightarrow \! X^{\mathtt {r}}_{\cdot }\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\). Then the laws of the processes on the left hand side of (196) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})^2\! \times \! [0, \infty ]\); we only need to prove that the joint law of the processes on the right hand side of (196) is the unique limit law: to that end, let \((n(p))_{p\in {\mathbb {N}}}\) be an increasing sequence of integers such that

$$\begin{aligned}&\big ( \big ( \frac{{_1}}{{^{a_{n(p)}}}} X^{\mathtt {r}, \mathtt {w}_{n(p)}}_{b_{n(p)} t}\big )_{t\in [0, \infty )} , \big ( \frac{{_1}}{{^{b_{n(p)}}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} x} \big )_{x\in [0, \infty )}, -\frac{{_1}}{{^{a_{n(p)}}}} I^{\mathtt {r}, \mathtt {w}_{n(p)}}_{\infty } \big ) \underset{p\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( X^\mathtt {r}, \gamma ^\prime , {\mathcal {E}}^\prime \big ) \,,\nonumber \\ \end{aligned}$$
(197)

where \(( \gamma ^\prime , {\mathcal {E}}^\prime )\) has the same law as \(({\overline{\gamma }}^{\mathtt {r}} , -I^\mathtt {r}_\infty )\). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence in (197) holds \({\mathbf {P}}\)-a.s. and we only need to prove that \(( \gamma ^\prime , {\mathcal {E}}^\prime ) = ({\overline{\gamma }}^{\mathtt {r}} , -I^\mathtt {r}_\infty )\) a.s.

We first prove that a.s. \({\mathcal {E}}^\prime = -I^\mathtt {r}_\infty \). Since \(X^\mathtt {r}\) is a spectrally positive Lévy process, it has no fixed discontinuity. Moreover, \(t\mapsto \inf _{[0, t]} X^\mathtt {r}\) is continuous. Then, by Lemma B.3 (ii), for all \(t \in [0, \infty )\), we get a.s. \(- \inf _{[0, t]} X^\mathtt {r} \le {\mathcal {E}}^\prime \). Letting t tend to \(\infty \), we get \(-I^\mathtt {r}_\infty \le {\mathcal {E}}^\prime \) a.s. Since \({\mathcal {E}}^\prime \) and \(-I^\mathtt {r}_\infty \) have the same law on \([0, \infty ]\), we get \({\mathcal {E}}^\prime = -I^\mathtt {r}_\infty \) a.s.

We next prove that a.s. for all \(x \in [0, -I^\mathtt {r}_\infty )\), \(\gamma ^\prime _x = {\overline{\gamma }}^{\mathtt {r}}_x\). Indeed, fix \(x\! < \! -I^\mathtt {r}_\infty \) such that \(\Delta \gamma ^\mathtt {r}_x = 0\). Then, by Lemma B.3 (iv), we get \(\gamma ^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} x}/b_{n(p)}\! \rightarrow \! \gamma ^\mathtt {r}_x\). Since \(x\! < \! - I^{\mathtt {r}, \mathtt {w}_{n(p)}}_{\infty }\! /a_{n(p)}\) for all sufficiently large p, this shows that \({\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} x}/b_{n(p)}\! \rightarrow \! \gamma ^\mathtt {r}_x = {\overline{\gamma }}^\mathtt {r}_x\). Thus, a.s. for all \(x \in [0, -I^\mathtt {r}_\infty )\) such that \(\Delta \gamma ^\mathtt {r}_x = 0\), we get \(\gamma ^\prime _x = {\overline{\gamma }}^{\mathtt {r}}_x\), which implies the desired result. Note that this completes the proof of the lemma in the critical and subcritical cases.

To avoid trivialities, we now assume that we are in the supercritical cases. Namely, \(\varrho \! >\! 0\) and \(-I^\mathtt {r}_\infty \! < \! \infty \) a.s. To simplify notation, we set

$$\begin{aligned}t_*^p = \tfrac{1}{b_{n(p)}}\gamma ^{\mathtt {r}, \mathtt {w}_{n(p)}} ( (-I^{\mathtt {r}, \mathtt {w}_{n(p)} }_\infty )-) \quad \text {and} \quad t_*\! =\! \gamma ^{\mathtt {r}} ((-I^{\mathtt {r}}_\infty )-) \; .\end{aligned}$$

First note that the proof is complete as soon as we prove that \(t^p_* \! \rightarrow \! t_*\). To prove this limit, we want to use Lemma B.3 (iii). To that end, we first fix \(x\! >\! -I^{\mathtt {r}}_\infty \). Since \(( \gamma ^\prime , {\mathcal {E}}^\prime )\) has the same law as \(({\overline{\gamma }}^{\mathtt {r}} , -I^\mathtt {r}_\infty )\), \(\gamma ^\prime \) is constant on \([{\mathcal {E}}^\prime , \infty )\) and since \({\mathcal {E}}^\prime = -I^\mathtt {r}_\infty \), \(\gamma ^\prime \) is constant on \([-I^\mathtt {r}_\infty , \infty )\), which implies \(\Delta \gamma ^\prime _x = 0\) and thus \({\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} x}/b_{n(p)}\! \rightarrow \! \gamma ^\prime _x\). We next fix \(t\! >\! \gamma ^\prime _x+ t_*\). Thus, there is \(p_0\) such that for all \(p\! \ge \! p_0\), \({\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} x}/b_{n(p)} \! < \! t \) and \(x\! > \! - I^{\mathtt {r}, \mathtt {w}_{n(p)}}_{\infty }\! /a_{n(p)}\), which implies that \(t^p_* = {\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} x}/b_{n(p)}\). Since \(t \! > \! t^p_*\vee t_*\), we get \(t^p_* = \inf \{ s \in [0, t] \! : \! \inf _{r\in [0 ,s] } X^{\mathtt {r}, \mathtt {w}_{n(p)}}_{b_{n(p)} r} = \inf _{r\in [0 ,t] } X^{\mathtt {r}, \mathtt {w}_{n(p)}}_{b_{n(p)} r} \}\) and \(t_* = \inf \{ s \in [0, t] \! : \! \inf _{[0 ,s] } X^{\mathtt {r}} = \inf _{[0 ,t] } X^{\mathtt {r}} \}\). Thus Lemma B.3 (iii) entails that \(t^p_*\! \rightarrow \! t_*\), which completes the proof of the lemma. \(\square \)

Recall the definition of \(\theta ^{\mathtt {b},\mathtt {w}}\) in (101) and that of \({\overline{\gamma }}^{\mathtt {r},\mathtt {w}}\) in (195). We next set, for \(t \in [0, \infty )\),

$$\begin{aligned} {\overline{\theta }}^{\mathtt {b},\mathtt {w}}_t = t+ {\overline{\gamma }}^{\mathtt {r},\mathtt {w}}_{A^\mathtt {w}_t}\; . \end{aligned}$$
(198)

Note that \(T^*_{\mathtt {w}} = \sup \{ t \in [0, \infty )\! : A^{\mathtt {w}}_t \! < \! -I^{\mathtt {r}, \mathtt {w}}_\infty \}\) by definition (103), and that \( {\overline{\theta }}^{\mathtt {b},\mathtt {w}}\) coincides with \(\theta ^{\mathtt {b},\mathtt {w}}\) on \([0, T^{*}_{\mathtt {w}})\).

Lemma 6.8

Under the assumptions of Lemma 6.3, the laws of the processes \((\frac{{1}}{{{b_n}}}\, {\overline{\theta }}^{\, \mathtt {b},\mathtt {w}_n}_{b_n t})_{t\in [0, \infty )}\) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\).

Proof

To simplify notation we set \(R^n_t = \frac{{1}}{{{b_n}}}\, {\overline{\theta }}^{\, \mathtt {b},\mathtt {w}_n}_{b_n t} -t = \frac{{1}}{{{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n} (A^{\mathtt {w}_n}_{b_n t})\); we only need to prove that the \(R^n\) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\). To that end, we use Lemma 6.2. First, observe that for all \(K, z \in (0, \infty )\),

$$\begin{aligned} {\mathbf {P}}(R^n_t> K) = {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n} (A^{\mathtt {w}_n} (b_n t))> K \big )\le {\mathbf {P}}\big (\frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n}_{a_n z}>K \big ) + {\mathbf {P}}\big ( \frac{_{_1}}{^{^{a_n}}}A^{\mathtt {w}_n}_{b_n t} >z \big ) \; . \end{aligned}$$

This easily implies that for fixed t the laws of the \(R^n_t\) are tight on \([0, \infty )\), since this is the case for the laws of \({\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n}_{a_n z}/b_n \) and \(A^{\mathtt {w}_n}_{b_n t}/a_n \) by Lemmas 6.7 and 6.3, respectively.

Next, denote by \({\mathscr {F}}_{\! t}\) the \(\sigma \)-field generated by the r.v. \(N_j^{\mathtt {w}_n} (s)\) and \(\gamma ^{ \mathtt {r}, \mathtt {w}_n} (A^{\mathtt {w}_n}_s \! )\) with \(s \in [0, t]\) and \(j\! \ge \! 1\); note that \(N_j^{\mathtt {w}_n} (t+\cdot )\! -\! N_j^{\mathtt {w}_n} (t)\) are independent of \({\mathscr {F}}_{\! t}\). Fix \(\varepsilon \in (0, \infty )\) and recall the definition of the times \(\tau _k^\varepsilon (R^n)\) in (182): clearly \(b_n\tau _k^\varepsilon (R^n)\) is a \(({\mathscr {F}}_{\! t})\)-stopping time. Next, fix \(k \in {\mathbb {N}}\) and set, for \(x \in [0, \infty )\),

$$\begin{aligned} {\mathbf {g}}(x)\! =\! \frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n} \big (a_n (x+ \frac{_{_1}}{^{^{a_n}}}A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n) ))\big ) - \frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n} (A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n) )) \,. \end{aligned}$$

Set \({\mathbf {u}}_\varepsilon = \inf \{ x \in [0, \infty )\! :\! {\mathbf {g}} (x) \! >\! \varepsilon \}\); thus by (182),

$$\begin{aligned} \tau _{k+1}^\varepsilon (R^n) = \inf \big \{t> \tau _{k}^\varepsilon (R^n) : \frac{_{_1}}{^{^{a_n}}}A^{\mathtt {w}_n} (b_n t) \! -\! \frac{_{_1}}{^{^{a_n}}}A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n) ) > {\mathbf {u}}_\varepsilon \big \} \,.\end{aligned}$$

Fix \(z, \eta \in (0, \infty )\) and set \(q_{n, k} (\eta ) = {\mathbf {P}}\big (\tau ^\varepsilon _k (R^n) \! <\! z \, ; \, \tau ^\varepsilon _{k+1} (R^n) \! -\! \tau ^\varepsilon _k (R^n) \le \eta \big ) \). By (118) in Lemma 3.5 (applied to the \(({\mathscr {F}}_{\! t})\)-stopping time \(T = b_n\tau _k^\varepsilon (R^n)\), to \(t_0 = b_n z\), to \(t = b_n \eta \) and to \(a = a_n x\)), we get the following:

$$\begin{aligned} q_{n, k} (\eta )\le & {} {\mathbf {P}}\big ( b_n\tau ^\varepsilon _k (R^n)< b_n z \, ; A^{\mathtt {w}_n} (b_n\eta + b_n \tau ^\varepsilon _k (R^n) ) \! -\! A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n) )> a_n {\mathbf {u}}_\varepsilon \big ) \nonumber \\\le & {} {\mathbf {P}}\big ( b_n\tau ^\varepsilon _k (R^n) < b_n z \, ; A^{\mathtt {w}_n} (b_n\eta + b_n \tau ^\varepsilon _k (R^n) ) \! -\! A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n) ) > a_n x \big )\nonumber \\&+ {\mathbf {P}}({\mathbf {u}}_\varepsilon \le x) \nonumber \\\le & {} x^{-1} \eta (z + \frac{_{_1}}{^{^2}} \eta ) \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \frac{b_n \sigma _3 (\mathtt {w}_n)}{a_n^2\sigma _1 (\mathtt {w}_n)} + {\mathbf {P}}({\mathbf {u}}_\varepsilon \le x) \nonumber \\\le & {} x^{-1} \eta (z + \frac{_{_1}}{^{^2}}\eta ) \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \frac{b_n \sigma _3 (\mathtt {w}_n)}{a_n^2\sigma _1 (\mathtt {w}_n)} + {\mathbf {P}}\big ( {\mathbf {g}} (x) \! \ge \! \varepsilon \big ). \end{aligned}$$
(199)

Denote by \({\mathscr {G}}^o_x\) the \(\sigma \)-field generated by the processes \((N_j^{\mathtt {w}_n})_{j\ge 1}\) and by \(\gamma ^{\mathtt {r}, \mathtt {w}_n}_{y}\), \(y \in [0, x]\), and set \({\mathscr {G}}_x = {\mathscr {G}}^o_{x+}\). Then, it is easy to see that \(\frac{{1}}{{{a_n}}}A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n) )\) is a \(({\mathscr {G}}_{x})\)-stopping time. By (137) in Lemma 4.1 applied to \(T\! =\! A^{\mathtt {w}_n} (b_n \tau ^\varepsilon _k (R^n))\), we get

$$\begin{aligned} {\mathbf {P}}\big ( {\mathbf {g}} (x) \! \ge \! \varepsilon \big ) \le \frac{1\! -\! \exp \big ( \! - \! x a_n \psi ^{-1}_{\mathtt {w}_n} \big ( \frac{1}{{\varepsilon b_n}} \big ) \big )}{1 - e^{-1}} \; . \end{aligned}$$

This, combined with (199) and (33) in Proposition 2.1, implies that

$$\begin{aligned} \limsup _{n\rightarrow \infty } \sup _{k\in {\mathbb {N}}} q_{n, k} (\eta )\le & {} x^{-1} \eta (z + \eta ) \kappa (\beta + \kappa \sigma _3 ({\mathbf {c}})) + \frac{1\! -\! e^{-x\psi ^{-1}(\varepsilon ^{-1})}}{1-e^{-1}} \underset{\eta \rightarrow 0+}{-\!\!\!-\!\!\! \longrightarrow } \frac{1\! -\! e^{-x\psi ^{-1}(\varepsilon ^{-1})}}{1-e^{-1}}\; \underset{x \rightarrow 0+}{-\!\!\! -\!\!\! \longrightarrow } 0, \end{aligned}$$

which completes the proof by Lemma 6.2. \(\square \)

Recall the definition of \(\theta ^{\mathtt {b}}\) in (145) and that of \({\overline{\gamma }}^{\mathtt {r}}\) in (135) in Lemma 4.1. Then, we define

$$\begin{aligned} \forall t \in [0, \infty ), \quad {\overline{\theta }}^{\mathtt {b}}_t = t+ {\overline{\gamma }}^{\mathtt {r}}_{A_t}\; . \end{aligned}$$
(200)

Recall that \(T^* = \sup \{ t \in [0, \infty )\! : A_t \! < \! -I^{\mathtt {r}}_\infty \}\). Then, note that \( {\overline{\theta }}^{\mathtt {b}}\) coincides with \(\theta ^{\mathtt {b}}\) on \([0, T^{*})\).

Lemma 6.9

Under the assumptions of Lemma 6.3,

$$\begin{aligned}&\big (\big (\frac{_{_1}}{^{^{a_n}}} X^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot }\! , \frac{_{_1}}{^{^{a_n}}} A^{\mathtt {w}_n}_{b_n \cdot } , \frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n \cdot } \big ) , \frac{_{_1}}{^{^{b_n}}}{\overline{\theta }}^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot } , \frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n}_{a_n \cdot } , \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {r}, \mathtt {w}_n}_{b_n \cdot }, -\frac{_{_1}}{^{^{a_n}}}I^{\mathtt {r}, \mathtt {w}_n}_\infty , \frac{_{_1}}{^{^{b_n}}} T^{*}_{\mathtt {w}_n} \big ) \nonumber \\&\quad \! \underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y), {\overline{\theta }}^\mathtt {b} , {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*\big ) \end{aligned}$$
(201)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^3)\! \times \! ({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^3\! \times \! [0, \infty ]^2\) equipped with the product topology.

Proof

Recall the definition of \(T^*_{\mathtt {w}_n}\) in (103). We first prove that \(\frac{_{_1}}{^{^{b_n}}} T^{*}_{\mathtt {w}_n}\! \rightarrow \! T^*\) in law on \([0, \infty ]\). To that end, first observe that from the independence between the blue and red processes, we deduce that \(( \frac{{1}}{{{a_n}}} A^{\mathtt {w}_n}_{b_n \cdot }, -\frac{{1}}{{{a_n}}}I^{\mathtt {r}, \mathtt {w}_n}_\infty )\! \rightarrow \! (A, -I^\mathtt {r}_\infty )\) weakly on \( {\mathbf {D}}([0, \infty ), {\mathbb {R}})\! \times \! [0, \infty ]\). In the (sub)critical cases, namely when \(\alpha \in [0, \infty )\), we have a.s. \(- I^\mathtt {r}_\infty = \infty \), and then clearly \(\frac{_{_1}}{^{^{b_n}}} T^{*}_{\mathtt {w}_n}\! \rightarrow \! T^*\) in law on \([0, \infty ]\). We next suppose \(\alpha \! < \! 0\); then \(- I^\mathtt {r}_\infty \) is exponentially distributed with parameter \(\varrho \! >\! 0\) (which is the largest root of \(\psi \)); in particular, \(- I^\mathtt {r}_\infty \) has a diffuse law, which allows us to apply Proposition 2.11 in Jacod & Shiryaev [27] (Chapter VI, Section 2a, p. 341) on continuity properties of such hitting times; thus, we get that \(\frac{_{_1}}{^{^{b_n}}} T^{*}_{\mathtt {w}_n}\! \rightarrow \! T^*\) in law on \([0, \infty ]\).

By Lemmas 6.6, 6.7 and 6.8, the laws of the r.v. on the left hand side of (201) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^3)\! \times \! ({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^3\! \times \! [0, \infty ]^2\); we only need to prove that the joint law of the processes on the right hand side of (201) is the unique limit law. To this end, note that by the three lemmas mentioned above, the independence between the red and blue processes, and the uniqueness of the limit law of \((\frac{_{_1}}{^{^{b_n}}} T^{*}_{\mathtt {w}_n})\) implied by Jacod & Shiryaev’s proposition, it suffices to consider the situation where \((n(p))_{p\in {\mathbb {N}}}\) is an increasing sequence of integers such that

$$\begin{aligned}&\Big (\big (\frac{_{1}}{^{{a_{n(p)}}}} X^{\mathtt {b},\mathtt {w}_{n(p)}}_{b_{n(p)} \cdot }\! , \frac{_{1}}{^{{a_{n(p)}}}} A^{\mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } , \frac{_{1}}{^{{a_{n(p)}}}} Y^{\mathtt {w}_{n(p)}}_{b_{n(p)} \cdot }\big ) , \nonumber \\&\quad \frac{_{1}}{^{{b_{n(p)}}}}{\overline{\theta }}^{\mathtt {b},\mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } , \frac{_{1}}{^{{b_{n(p)}}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}}_{a_{n(p)} \cdot } , \frac{_{1}}{^{{a_{n(p)}}}} X^{\mathtt {r}, \mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } , -\frac{{_1}}{{^{a_{n(p)}}}}I^{\mathtt {r}, \mathtt {w}_{n(p)}}_\infty , \frac{{_1}}{{^{b_{n(p)}}}} T^{*}_{\mathtt {w}_{n(p)}}\Big ) \nonumber \\&\quad \underset{p\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y), \theta ^\prime , {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^* \big ), \end{aligned}$$
(202)

and then prove that \(\theta ^\prime = {\overline{\theta }}^{_\mathtt {b}}\). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that (202) holds true \({\mathbf {P}}\)-almost surely. We say that a càdlàg process \(L=(L_{t})_{t\in \mathbb R_{+}}\) has no fixed discontinuity if \({\mathbb {P}}(L_{t-}= L_{t})=1\) for all \(t\in {\mathbb {R}}_{+}\). Observe that A has no fixed discontinuity. Therefore, a.s. for all \(q \in {\mathbb {Q}}\cap [0,\infty )\), \(\Delta A_q = 0\), and thus \( A^{{\mathtt {w}_{n(p)}}}_{{b_{n(p)}q}} / a_{n(p)}\! \rightarrow \! A_q\). Since \(\gamma ^\mathtt {r}\) has no fixed discontinuity and is independent of A, the same properties hold for \({\overline{\gamma }}^\mathtt {r}\). Therefore, a.s. for all \(q \in {\mathbb {Q}}\cap [0,\infty )\), \(\Delta {\overline{\gamma }}^\mathtt {r}(A_q) = 0\), which easily entails that \({\overline{\gamma }}^{\mathtt {r},\mathtt {w}_{n(p)}} (A^{\mathtt {w}_{n(p)}} (b_{n(p)} q))/ b_{n(p)} \! \rightarrow \! {\overline{\gamma }}^\mathtt {r}(A_q)\); thus, \({\overline{\theta }}^{\mathtt {b},\mathtt {w}_{n(p)}} (b_{n(p)} q)/ b_{n(p)} \! \rightarrow \! {\overline{\theta }}^\mathtt {b}_q\) for all \(q \in {\mathbb {Q}}\cap [0,\infty )\) a.s. Therefore, \(\theta ^\prime = {\overline{\theta }}^\mathtt {b}\), which completes the proof. \(\square \)

Lemma 6.10

Under the assumptions of Lemma 6.3,

$$\begin{aligned} {\mathscr {Q}}_n (1) := \big (\big (\frac{_{_1}}{^{^{a_n}}} X^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot }\! , \frac{_{_1}}{^{^{a_n}}} A^{\mathtt {w}_n}_{b_n \cdot } ,&\frac{_{_1}}{^{^{a_n}}} Y^{\mathtt {w}_n}_{b_n \cdot } , \frac{_{_1}}{^{^{b_n}}}{\overline{\theta }}^{\mathtt {b},\mathtt {w}_n}_{b_n \cdot } \big ) , \frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n}_{a_n \cdot } , \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {r}, \mathtt {w}_n}_{b_n \cdot }, -\frac{_{_1}}{^{^{a_n}}}I^{\mathtt {r}, \mathtt {w}_n}_\infty , \frac{_{_1}}{^{^{b_n}}} T^{*}_{\mathtt {w}_n} \big ) \nonumber \\ \! \underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow }&\big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b}) , {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*\big ) \end{aligned}$$
(203)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4)\! \times \! ({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2\! \times \! [0, \infty ]^2\) equipped with the product topology.

Proof

Without loss of generality (but with a slight abuse of notation), Skorokhod’s representation theorem allows us to assume that (201) holds \({\mathbf {P}}\)-almost surely. To simplify notation, we next set \(R^n\! =\! \frac{{1}}{{{a_n}}} (X^{\mathtt {b},\mathtt {w}_n}_{b_n\cdot } , A^{\mathtt {w}_n}_{b_n \cdot }, Y^{\mathtt {w}_n}_{b_n \cdot })\) and \(R = (X^\mathtt {b} , A, Y)\). Let us fix \(a \in (0, \infty )\).

We consider several cases. We first suppose that \(\Delta R_a \! \ne \! 0\). By Lemma B.1 (i), there is \(s_n\! \rightarrow \! a\) such that \(R^n_{s_n-}\! \rightarrow \! R_{a-}\), \(R^n_{s_n}\! \rightarrow \! R_{a}\) and thus \(\Delta R^n_{s_n}\! \rightarrow \! \Delta R_{a}\).

  • Let us suppose more specifically that \(\Delta Y_a \! >\! 0\). By definition of Y, we get \(\Delta X^\mathtt {b}_a = \Delta Y_a\) and \(\Delta A_a = 0\). Suppose that \(a \in [0, T^*]\); by Lemma 4.3 (ii), we get \(\Delta \theta ^\mathtt {b}_a = 0\) and thus \(\Delta {\overline{\theta }}^{{\, \mathtt {b}}} (a) = 0\). Note that \(\Delta {\overline{\theta }}^{{\, \mathtt {b}}} (a) = 0\) for all \(a \in (T^*, \infty )\). Consequently, for all \(a \in (0, \infty )\), if \(\Delta Y_a \! >\! 0\), then \(\Delta {\overline{\theta }}^{{\, \mathtt {b}}} (a) = 0\) and Lemma B.1 (ii) entails \(\frac{_{_1}}{^{^{b_n}}} \Delta {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}( b_n s_n)\! \rightarrow \! \Delta {\overline{\theta }}^\mathtt {b}_a = 0\).

  • We next consider the case where \(\Delta R_a \! \ne \! 0\) but \(\Delta Y_a \! =\! 0\); then, by definition of A and Y, we get \(\Delta X^\mathtt {b}_a \! =\! \Delta A_a \! > \! 0\). Since \(\gamma ^\mathtt {r}\), and therefore \({\overline{\gamma }}^{\mathtt {r}}\) is independent of R, it has a.s. no jump at the times \(A_{a-}\) and \(A_{a}\); therefore: \(\frac{_{_1}}{^{^{b_n}}}{\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n} (A^{_{\mathtt {w}_n}}_{^{b_n s_n-}})\! \rightarrow \! {\overline{\gamma }}^{\mathtt {r}} (A_{a-} )\) and \( \frac{_{_1}}{^{^{b_n}}} {\overline{\gamma }}^{\mathtt {r},\mathtt {w}_n} (A^{_{\mathtt {w}_n}}_{^{b_n s_n}}) \rightarrow {\overline{\gamma }}^{\mathtt {r}} (A_{a} )\). This implies that \(\frac{{1}}{{{b_n}}} \Delta {\overline{\theta }}^{{\mathtt {b}, \mathtt {w}_n}}( b_n s_n)\! \rightarrow \! \Delta {\overline{\theta }}^\mathtt {b}_{^a} \! =\! {\overline{\gamma }}^{\mathtt {r}} (A_{a} )\! -\! {\overline{\gamma }}^{\mathtt {r}} (A_{a-} )\).

  • We finally suppose that \(\Delta R_a = 0\); by Lemma B.1 (i), there exists a sequence \(s^\prime _n\! \rightarrow \! a\) such that \(\frac{_{_1}}{^{^{b_n}}} \Delta {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}( b_n s^\prime _n) \! \rightarrow \! \Delta {\overline{\theta }}^{\mathtt {b}}_a \). Since \(\Delta R_a = 0\), Lemma B.1 (ii) entails that \(\Delta R^n_{s^\prime _n}\! \rightarrow \! \Delta R_a\).

Thus, we have proved the following: for all \(a \in (0, \infty )\), there exists a sequence \(s^{\prime \prime }_n \! \rightarrow \! a\) such that \(\frac{{1}}{{{b_n}}} \Delta {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}( b_n s^{\prime \prime }_n )\! \rightarrow \! \Delta {\overline{\theta }}^\mathtt {b}_a\) and \(\Delta R^n_{s^{\prime \prime }_n }\! \rightarrow \! \Delta R_a\). Then, by Lemma B.1 (iii), \((R^n, \frac{{1}}{{b_n}} {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}( b_n \cdot ) )\! \rightarrow \! (R,{\overline{\theta }}^\mathtt {b})\) a.s. on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4)\), which completes the proof. \(\square \)

Recall next that for all \(t \in [0, \infty )\) and all \(n \in {\mathbb {N}}\),

$$\begin{aligned} \Lambda ^{\mathtt {b}, \mathtt {w}_n}_t \!\! = \! \inf \big \{ s \in [0, \infty ) \! : \theta ^{\mathtt {b}, \mathtt {w}_n}_s \! \!>\! t \big \}, \quad \Lambda ^{\mathtt {b}}_t = \inf \big \{ s \in [0, \infty )\! : \theta ^{\mathtt {b}}_s \! > \! t \big \}, \end{aligned}$$
(204)

that \( \Lambda ^{\mathtt {r}, \mathtt {w}_n}_t \! = t\! -\! \Lambda ^{\mathtt {b}, \mathtt {w}_n}_t\) and that \(\Lambda ^{\mathtt {r}}_t = t \! -\! \Lambda ^{\mathtt {b}}_t\).

Lemma 6.11

Recall the notation \({\mathscr {Q}}_n (1)\) in (203). Under the assumptions of Lemma 6.3,

$$\begin{aligned} {\mathscr {Q}}_n (2)&: =&\big ( {\mathscr {Q}}_n (1), \frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot }, \frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {r}, \mathtt {w}_n}_{b_n \cdot } \big )\underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ), {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , - I^\mathtt {r}_\infty , T^*, \Lambda ^{\mathtt {b}}, \Lambda ^{\mathtt {r}} \big )\nonumber \\ \end{aligned}$$
(205)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4)\! \times \! ({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2\! \times \! [0, \infty ]^2\! \times \! ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2\) equipped with the product topology.

Proof

Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence in (203) holds \({\mathbf {P}}\)-almost surely. Since \({\overline{\theta }}^{\mathtt {b}}\) (resp. \({\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}\)) is constant on \([T^*, \infty )\) (resp. on \([T^*_{\mathtt {w}_n}, \infty )\)), we easily derive from (203) that \({\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}(T^*_{\mathtt {w}_n}) /b_n \! \rightarrow \! {\overline{\theta }}^{\mathtt {b}} (T^*)\) a.s. on \([0, \infty ]\).

Next, we take \(t \in (0, \infty )\) distinct from \({\overline{\theta }}^{\mathtt {b}} (T^*)\). Suppose first that \(t \! < \! {\overline{\theta }}^{\mathtt {b}} (T^*)\). Then, for all sufficiently large n, we get \(t \!< \! {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}(T^*_{\mathtt {w}_n})/b_n\) and we can write

$$\begin{aligned} \frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {b}, \mathtt {w}_n}_{b_n t} \!\! = \! \inf \big \{ s \in [0, \infty ) \! : \frac{_{_1}}{^{^{b_n}}} {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}_{b_n s} \! \! >\! t \big \}. \end{aligned}$$

Since \({\overline{\theta }}^{\mathtt {b}}\) is strictly increasing on \([0, T^*)\), standard arguments entail \(\Lambda ^{\mathtt {b}, \mathtt {w}_n} (b_nt )/ b_n \! \rightarrow \! \Lambda ^{\mathtt {b}}_t \).

Suppose next that \(t \! > \! {\overline{\theta }}^{\mathtt {b}} (T^*)\), which is only possible in the supercritical cases. Then, for all sufficiently large n, we get \(t \!> \! {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}(T^*_{\mathtt {w}_n})/b_n\), so that \(\Lambda ^{\mathtt {b}, \mathtt {w}_n}_{b_n t} = T^*_{\mathtt {w}_n}\) and \(\Lambda ^{\mathtt {b}}_{t} = T^*\). Thus, we get \(\Lambda ^{\mathtt {b}, \mathtt {w}_n} (b_nt )/ b_n \! \rightarrow \! \Lambda ^{\mathtt {b}}_t \).

We have proved that \(\Lambda ^{\mathtt {b}, \mathtt {w}_n} (b_nt )/ b_n \! \rightarrow \! \Lambda ^{\mathtt {b}}_t \) for all \(t \in (0, \infty )\) distinct from \({\overline{\theta }}^{\mathtt {b}} (T^*)\). Since \(\Lambda ^{\mathtt {b}}\) is nondecreasing and continuous, a theorem due to Dini (see for instance [35], Theorem 7.13) implies that \(\frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot } \! \rightarrow \! \Lambda ^{\mathtt {b}}\) uniformly on all compact subsets; since \(\Lambda ^{\mathtt {r}, \mathtt {w}_n}_t = t - \Lambda ^{\mathtt {b}, \mathtt {w}_n}_t\) and \(\Lambda ^{\mathtt {r}}_t = t - \Lambda ^{\mathtt {b}}_t\), the same convergence holds for \(\frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {r}, \mathtt {w}_n}_{b_n \cdot }\) towards \(\Lambda ^{\mathtt {r}}\), which completes the proof of (205). \(\square \)

Here is one of the key technical points of the proof; it relies on the estimates of Lemma 3.6.

Lemma 6.12

Under the assumptions of Lemma 6.3, the laws of the processes \((\frac{_{1}}{^{{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n t}))_{t\in [0, \infty )}\) and \((\frac{_{1}}{^{{a_n}}}X^{\mathtt {r},\mathtt {w}_n}(\Lambda ^{\mathtt {r}, \mathtt {w}_n} _{b_n t}))_{t\in [0, \infty )}\) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\).

Proof

Fix \(t \in [0, \infty )\); then for all \(t_0, K \in (0, \infty )\), note that

$$\begin{aligned} {\mathbf {P}}\Big ( \sup _{^{s\in [0, t]}} \frac{_{_{1}}}{^{^{a_n}}}| X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n s}) |>K \Big ) \le {\mathbf {P}}\Big ( \sup _{^{s\in [0, t_0]}} \frac{_{_1}}{^{^{a_n}}}| X^{\mathtt {b},\mathtt {w}_n}_{b_n s} |>K \Big ) + {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n t} > t_0 \big ) . \end{aligned}$$

Then, we deduce from (205) that

$$\begin{aligned} \lim _{K\rightarrow \infty } \limsup _{n\rightarrow \infty }{\mathbf {P}}\Big ( \sup _{^{s\in [0, t]}} \frac{_{_{1}}}{^{^{a_n}}}| X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n s}) |>K \Big ) \le \limsup _{n\rightarrow \infty }{\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}} \Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n t} > t_0 \big ) \underset{t_0\rightarrow \infty }{ -\!\!\! \longrightarrow } 0 \; .\end{aligned}$$

A similar argument shows that \( \lim _{K\rightarrow \infty } \limsup _{n\rightarrow \infty }{\mathbf {P}}\big ( \sup _{{s\in [0, t]}} | X^{\mathtt {r},\mathtt {w}_n}(\Lambda ^{\mathtt {r}, \mathtt {w}_n} _{b_n s}) | \! >\! a_n K \big ) \! =\! 0\).

Next, Proposition 3.2 says that a.s. for all \(n \in {\mathbb {N}}\) and for all \(t \in [0, \infty )\)

$$\begin{aligned} X^{\mathtt {w}_n}_t \! =\! X^{\mathtt {b},\mathtt {w}_n}_{\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{ t}}+ X^{\mathtt {r},\mathtt {w}_n}_{\Lambda ^{\mathtt {r}, \mathtt {w}_n} _{ t}} \; . \end{aligned}$$
(206)

Recall that for all \(y \in {\mathbf {D}}([0, \infty ), {\mathbb {R}})\), \(w_z (y, \eta )\) stands for the \(\eta \)-càdlàg modulus of continuity of \(y(\cdot ) \) on [0, z]. Let \(z_1, z, z_0, \eta , \varepsilon \in (0, \infty )\). Let us consider first the (sub)critical cases. By (122) in Lemma 3.6 (i), we easily get

$$\begin{aligned} {\mathbf {P}}\big (w_{z_1} \big (\frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n \cdot }) , \, \eta \big )> \varepsilon \big )\le & {} {\mathbf {P}}\big (w_{z+\eta } \big ( \frac{_{_1}}{^{^{a_n}}}X^{\mathtt {w}_n}_{b_n \cdot } , \eta \big )>\varepsilon /2 \big ) + {\mathbf {P}}\big (w_{z_0} \big ( \frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot } , \eta \big )>\varepsilon /2 \big ) \\&+ {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} <z_1 \big ) + {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} > z \big )\; . \end{aligned}$$

By Proposition 3.2, \(X^{\mathtt {w}_n}\) has the same law as \(X^{\mathtt {b}, \mathtt {w}_n}\) and \(X^{\mathtt {r}, \mathtt {w}_n}\). Then, by Proposition 2.1, the laws of the processes \(\frac{_1}{^{a_n}}X^{\mathtt {w}_n}_{b_n \cdot }\) (or equivalently of \(\frac{_1}{^{a_n}} X^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot }\)) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\). Consequently,

$$\begin{aligned}&\lim _{\eta \rightarrow 0} \limsup _{n \rightarrow \infty } {\mathbf {P}}\big (w_{z_1} \big (\frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n \cdot }) , \, \eta \big ) > \varepsilon \big ) \nonumber \\&\quad \le \; \limsup _{n \rightarrow \infty } {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} \le z_1 \big ) + \limsup _{n \rightarrow \infty } {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} \ge z \big )\; . \end{aligned}$$
(207)

Recall that in the (sub)critical cases, \({\overline{\theta }}^{\mathtt {b}} = \theta ^\mathtt {b}\). Moreover, since \(\theta ^\mathtt {b}_{t-}=\theta ^{\mathtt {b}}_{t}\) a.s. for all t, (203) easily entails: \(\frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} \! \rightarrow \! \theta ^\mathtt {b}_{z_0}\) weakly on \([0, \infty )\). It first implies: \(\limsup _{n \rightarrow \infty } {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} \! \ge \! z \big ) \le {\mathbf {P}}\big (\theta ^{\mathtt {b}}_{z_0} \! \ge \! z \big )\! \rightarrow \! 0\) as \(z\! \rightarrow \! \infty \) since a.s. \(\theta ^\mathtt {b}_{z_0}\! < \! \infty \) in (sub)critical cases. Similarly, we also get \(\limsup _{n \rightarrow \infty } {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{b_n z_0} \le z_1 \big ) \le {\mathbf {P}}\big ( \theta ^{\mathtt {b}}_{z_0} \le z_1 \big ) \! \rightarrow \! 0\) as \(z_0\! \rightarrow \! \infty \) since a.s. \(\lim _{z_0 \rightarrow \infty } \theta ^\mathtt {b}_{z_0} = \infty \). Then, (207) and the previous arguments imply that the laws of \((\frac{_{1}}{^{{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n t}))_{t\in [0, \infty )}\) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\) in (sub)critical cases.

Let us consider the supercritical cases: Lemma 3.6 (ii) implies that for all \(z_1 \in [0, \infty )\),

$$\begin{aligned} {\mathbf {P}}\big (w_{z_1} \big (\frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n \cdot }) , \, \eta \big )> \varepsilon \big )\le & {} {\mathbf {P}}\big (w_{z+\eta } \big ( \frac{_{_1}}{^{^{a_n}}}X^{\mathtt {w}_n}_{b_n \cdot } , \eta \big )>\varepsilon /2 \big ) + {\mathbf {P}}\big (w_{z_0} \big ( \frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot } ,2 \eta \big ) >\varepsilon /6 \big ) \\&+ {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}T^*_{ \mathtt {w}_n} \ge z_0 \big ) + {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}T^*_{ \mathtt {w}_n} \le 2\eta \big ) + {\mathbf {P}}\big ( \frac{_{_1}}{^{^{b_n}}}\theta ^{\mathtt {b}, \mathtt {w}_n}_{T^*_{\mathtt {w}_n} \! -} \ge z \big )\; . \end{aligned}$$

Then, recall that \(\theta ^{\mathtt {b}, \mathtt {w}_n}_{T^*_{\mathtt {w}_n} \! -} \le {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}_{T^*_{\mathtt {w}_n} }\) and observe that (203) easily entails \( \frac{{1}}{{{b_n}}} {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}_{T^*_{\mathtt {w}_n} }\! \rightarrow \! {\overline{\theta }}^{\mathtt {b}}_{T^* }\) weakly on \([0, \infty )\). By (203) again, \(\frac{{1}}{{{b_n}}}T^*_{ \mathtt {w}_n}\! \rightarrow \! T^*\), weakly on \([0, \infty )\). Consequently,

$$\begin{aligned} \lim _{\eta \rightarrow 0} \limsup _{n \rightarrow \infty } {\mathbf {P}}\big (w_{z_1} \big (\frac{_{_1}}{^{^{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n \cdot }) , \, \eta \big ) > \varepsilon \big ) \le {\mathbf {P}}\big (T^*\! \ge z_0 \big ) + {\mathbf {P}}\big ( {\overline{\theta }}^{\mathtt {b}}_{T^* } \! \ge z \big ) \underset{z, z_0\, \rightarrow \infty }{ -\!\!\! \longrightarrow } 0\; ,\end{aligned}$$

by the fact that in the supercritical cases \(T^*\! <\! \infty \) a.s. Thus, the laws of \((\frac{_{1}}{^{{a_n}}}X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{b_n t}))_{t\in [0, \infty )}\) are tight in supercritical cases.

We derive a similar result for the red processes by an analogous (but simpler) argument based on Lemma 3.6 (iii); we leave the details to the reader. \(\square \)

Using the relationship (206) and its continuous counterpart (148), we prove the next lemma:

Lemma 6.13

Recall the definition of \({\mathscr {Q}}_n (2)\) in (205). Under the assumptions of Lemma 6.3, we have

$$\begin{aligned} {\mathscr {Q}}_n (3):= & {} \big ( {\mathscr {Q}}_n (2), \frac{_{_1}}{^{^{a_n}}} \big ( X^{\mathtt {b}, \mathtt {w}_n}_{\Lambda ^{\mathtt {b}, \mathtt {w}_n } _{b_n \cdot }} , X^{\mathtt {r}, \mathtt {w}_n}_{\Lambda ^{\mathtt {r}, \mathtt {w}_n }_{b_n \cdot }} , X^{\mathtt {w}_n }_{b_n \cdot } \big ) \big ) \nonumber \\&\underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ), {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*, \Lambda ^{\mathtt {b}}, \Lambda ^{\mathtt {r}}, ( X^{\mathtt {b}}_{\Lambda ^{\mathtt {b}}}, X^{\mathtt {r}}_{\Lambda ^{\mathtt {r}}} , X) \big ),\nonumber \\ \end{aligned}$$
(208)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4) \! \times \! ({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2 \! \times \! [0, \infty ]^2 \! \times \! ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2\! \times \! {\mathbf {D}}([0, \infty ), {\mathbb {R}}^3)\) equipped with the product-topology.

Proof

We first prove the following

$$\begin{aligned} {\mathscr {Q}}^\prime _n (3):= & {} \big ( {\mathscr {Q}}_n (2), \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {b}, \mathtt {w}_n}_{\Lambda ^{\mathtt {b}, \mathtt {w}_n } _{b_n \cdot }} , \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {r}, \mathtt {w}_n}_{\Lambda ^{\mathtt {r}, \mathtt {w}_n }_{b_n \cdot }}\big ) \nonumber \\&\underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ), {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*, \Lambda ^{\mathtt {b}}, \Lambda ^{\mathtt {r}}, X^{\mathtt {b}}_{\Lambda ^{\mathtt {b}}}, X^{\mathtt {r}}_{\Lambda ^{\mathtt {r}}} \big ),\nonumber \\ \end{aligned}$$
(209)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4)\! \times \! {\mathbf {D}}([0, \infty ), {\mathbb {R}})^2\! \times \![0, \infty ]^2\! \times \! {\mathbf {C}}([0, \infty ), {\mathbb {R}})^2\! \times \! {\mathbf {D}}([0, \infty ), {\mathbb {R}})^2\) equipped with the product-topology. Note that the laws of \({\mathscr {Q}}^\prime _n (3)\) are tight thanks to (205) and Lemma 6.12. We only need to prove that the joint law of the processes on the right hand side of (209) is the unique limit law: to that end, let \((n(p))_{p\in {\mathbb {N}}}\) be an increasing sequence of integers such that

$$\begin{aligned} {\mathscr {Q}}^\prime _{n(p)} (3) \underset{p\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ), {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*, \Lambda ^{\mathtt {b}}, \Lambda ^{\mathtt {r}}, Q^\mathtt {b} , Q^\mathtt {r} \big ) \end{aligned}$$
(210)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4)\! \times \! {\mathbf {D}}([0, \infty ), {\mathbb {R}})^2\! \times \![0, \infty ]^2\! \times \! {\mathbf {C}}([0, \infty ), {\mathbb {R}})^2\! \times \! {\mathbf {D}}([0, \infty ), {\mathbb {R}})^2\) equipped with the product topology. Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence in (210) holds \({\mathbf {P}}\)-a.s. and we only need to prove that \(Q^\mathtt {b} = X^\mathtt {b} \! \circ \! \Lambda ^\mathtt {b}\) and \(Q^\mathtt {r} = X^\mathtt {r} \! \circ \! \Lambda ^\mathtt {r}\).

We first prove that \(Q^\mathtt {b} = X^\mathtt {b} \! \circ \! \Lambda ^\mathtt {b}\). Note that \(\{ t \in [0,\infty )\! : \! (\Delta X^\mathtt {b}) (\Lambda ^{\mathtt {b}}_t) \! >\! 0 \}\) is, in general, not countable (it contains all the red intervals starting with a jump), so we have to proceed with care. To that end, we first set \(S_1 = \big \{ t \in [0, \infty ): \Delta Y(\Lambda ^\mathtt {b}_t) \!> \! 0 \big \}\), which is a countable set of times (indeed, by Lemma 4.3 (ii), for all \(a \in [0, T^*]\), \(\Delta Y_a \! >\! 0\) implies \(\Delta \theta ^\mathtt {b}_a \! =\! 0\) and by Lemma 4.3 (i), there exists a unique time \(t \in [0, \infty )\) such that \(\Lambda ^\mathtt {b}_t = a\)). We also set \(S_2 = \{ \theta ^\mathtt {b}_{T^*-} \} \cup \{ \theta ^\mathtt {b}_{a-} , \theta ^\mathtt {b}_{a}; a \in [0, T^*)\! : \! \Delta \theta ^\mathtt {b}_{a} \! >\! 0\}\) and \(S = S_1 \cup S_2\). Then S is countable. We then consider several cases.

We first fix \(t \in (0, T^*) \backslash S\) and we assume that \((\Delta X^\mathtt {b}) ( \Lambda ^\mathtt {b}_t) = 0\). Then, by Lemma B.1 (ii), \(X^{\mathtt {b}, \mathtt {w}_{n(p)}} \big ( \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} t) \big )/ a_{n(p)} \! \rightarrow \! X^\mathtt {b} ( \Lambda ^\mathtt {b}_t)\), since \(\Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}}(b_{n(p)} t)/ b_{n(p)} \! \rightarrow \! \Lambda ^\mathtt {b}_t\).

We next assume that \(t \in (0, T^*) \backslash S\) and that \((\Delta X^\mathtt {b}) ( \Lambda ^\mathtt {b}_t)\! > \! 0\). Since \(t \! \notin \! S_1\), \(\Delta Y (\Lambda ^\mathtt {b}_t) = 0\), and thus \(\Delta X^\mathtt {b} ( \Lambda ^\mathtt {b}_t) = \Delta A ( \Lambda ^\mathtt {b}_t)\! > \! 0\), by definition of A and Y. We then set \(a = \Lambda ^\mathtt {b}_t\) and we necessarily get \(a\! < \! T^*\), \(\Delta \theta ^\mathtt {b}_a \! >0\) and \( t \in [\theta ^{\mathtt {b}}_{a-}, \theta ^{\mathtt {b}}_{a}]\). Since \(t\! \notin \! S_2\), we then get \( t \in (\theta ^{\mathtt {b}}_{a-}, \theta ^{\mathtt {b}}_{a})\). To simplify the notation, we set

$$\begin{aligned}R^p = \left( \frac{{_1}}{{^{a_{n(p)}}}} X^{\mathtt {b}, \mathtt {w}_{n(p)}}_{b_{n(p)} \cdot }, \frac{{_1}}{{^{a_{n(p)}}}} A^{ \mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } \, , \frac{{_1}}{{^{a_{n(p)}}}} Y^{ \mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } \, , \frac{{_1}}{{^{b_{n(p)}}}} {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_{n(p)}}_{b_{n(p)} \cdot } \right) \quad \text {and} \quad R= (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ) \; .\end{aligned}$$

By (210), \(R^p \! \rightarrow \! R\) a.s. on \({\mathbf {D}}([0, \infty ) , {\mathbb {R}}^4)\). Since \(a\! <\! T^*\), \(\Delta \theta ^\mathtt {b}_a = \Delta {\overline{\theta }}^\mathtt {b}_a \! >0\) and a is a jump-time of R. By Lemma B.1 (i), there is a sequence \(s_p \! \rightarrow \! a\) such that \((R^p_{s_p-} , R^p_{s_p}) \! \rightarrow \! (R_{a-} , R_{a})\): in particular, we get \( X^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p)/ a_{n(p)} \! \rightarrow \! X^\mathtt {b}_a = X^\mathtt {b} (\Lambda ^{\mathtt {b}}_t)\). It also implies that \(\theta ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p-)/ b_{n(p)} = {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p-)/ b_{n(p)}\! \rightarrow \! {\overline{\theta }}^\mathtt {b}_{a-} = \theta ^\mathtt {b}_{a-}\) and \(\theta ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p)/ b_{n(p)} = {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p)/ b_{n(p)}\! \rightarrow \! {\overline{\theta }}^\mathtt {b}_{a} = \theta ^\mathtt {b}_{a}\); thus, for all sufficiently large p, we get

$$\begin{aligned}\frac{{_1}}{{^{b_{n(p)}}}} \theta ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p-)< t < \frac{{_1}}{{^{b_{n(p)}}}} \theta ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} s_p) \quad \text {and thus} \quad \frac{{_1}}{{^{b_{n(p)}}}} \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}}_{b_{n(p)} t}= s_p , \end{aligned}$$

which implies that \(X^{\mathtt {b}, \mathtt {w}_{n(p)}} \big ( \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} t )\big ) / a_{n(p)} \! \rightarrow \! X^\mathtt {b}_a = X^\mathtt {b} (\Lambda ^{\mathtt {b}}_t)\).

Thus, we have proved a.s. for all \(t \in (0, T^*) \backslash S\) that \(X^{\mathtt {b}, \mathtt {w}_{n(p)}}\! \big ( \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} t )\big ) / a_{n(p)} \rightarrow \! X^\mathtt {b} (\Lambda ^{\mathtt {b}}_t)\). Since S is countable, it easily implies that for all \(t \in [0, T^*)\), \(Q^\mathtt {b}_t = X^\mathtt {b}(\Lambda ^\mathtt {b}_t)\). In (sub)critical cases, it simply means that \(Q^\mathtt {b} = X^\mathtt {b}_{\Lambda ^\mathtt {b}}\).

We now complete the proof that \(Q^\mathtt {b} = X^\mathtt {b}_{\Lambda ^\mathtt {b}}\) in the supercritical cases. To that end, we first observe the following. Let \(t_1, t_2 \in (T^*, \infty )\) be distinct times such that \(\Delta Q^\mathtt {b}_{t_{1}} = \Delta Q^\mathtt {b}_{t_{2}} = 0\). By Lemma B.1 (ii), \(X^{\mathtt {b}, \mathtt {w}_{n(p)}}\! \big ( \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} t_{i} )\big ) / a_{n(p)} \! \rightarrow \!Q^\mathtt {b}_{t_i}\) for \(i \in \{1, 2\}\). Then, by (210), we get \(t_i \! >\! T_{\mathtt {w}_{n(p)}}^* /b_{n(p)}\) for all sufficiently large p which implies \(X^{\mathtt {b}, \mathtt {w}_{n(p)}}\! \big ( \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} t_{1} )\big ) = X^{\mathtt {b}, \mathtt {w}_{n(p)}}\! \big ( \Lambda ^{\mathtt {b}, \mathtt {w}_{n(p)}} (b_{n(p)} t_{2} )\big )\). Consequently, we get \(Q^\mathtt {b}_{t_1} = Q^\mathtt {b}_{t_2}\). This argument easily implies that for all \(t \in [T^*, \infty )\), \(Q^\mathtt {b}_t = Q^\mathtt {b}_{T^*}\). Thus, to complete the proof that \(Q^\mathtt {b} = X^\mathtt {b}_{\Lambda ^\mathtt {b}}\) in the supercritical cases, we only need to prove that \(X^{\mathtt {b}, \mathtt {w}_{n(p)}} (T^*_{\mathtt {w}_{n(p)}})/ a_{n(p)} \! \rightarrow \! X^\mathtt {b} (T^*)\). If \(\Delta X^\mathtt {b}(T^*) = 0\), then it is a consequence of (210) and of Lemma B.1 (ii).

Therefore, it remains to address the case where \(\Delta X^\mathtt {b}(T^*)\! >\! 0\). In this case, we clearly get \(\Delta \theta ^\mathtt {b}(T^*) = \infty \); by Lemma 4.3 (ii) with \(a = T^*\), we get \(\Delta Y(T^*) = 0\) and therefore \(\Delta X^\mathtt {b}(T^*)\!= \! \Delta A(T^*) \! >\! 0\) by definition of Y and A.

We first claim that it is sufficient to prove \(A^{\mathtt {w}_{n(p)}}(T^*_{\mathtt {w}_{n(p)}})/a_{n(p)}\! \rightarrow \! A_{T^*}\). Indeed, suppose it holds true; since \(\Delta Y(T^*) = 0\), Lemma B.1 (ii) and (210) imply that \(Y^{\mathtt {w}_{n(p)}}(T^*_{\mathtt {w}_{n(p)}})/a_{n(p)}\! \rightarrow \! Y_{T^*}\); and it is sufficient to recall that \(X^{\mathtt {b} , \mathtt {w}_{n(p)}} = A^{\mathtt {w}_{n(p)}} + Y^{\mathtt {w}_{n(p)}}\).

Thus, we assume that we are in the supercritical cases and that \(\Delta X^\mathtt {b}(T^*)\! >\! 0\), and we want to prove that \(A^{\mathtt {w}_{n(p)}}(T^*_{\mathtt {w}_{n(p)}})/a_{n(p)}\! \rightarrow \! A_{T^*}\). By Lemma B.1 (i), there exists \(t_p \! \rightarrow \! T^*\) such that \(A^{\mathtt {w}_{n(p)}}(b_{n(p)} t_p-)/a_{n(p)}\! \rightarrow \! A_{T^*\! -}\) and \(A^{\mathtt {w}_{n(p)}}(b_{n(p)} t_p)/a_{n(p)}\! \rightarrow \! A_{T^*}\). Suppose that \(t_p \!> \! T^*_{\mathtt {w}_{n(p)}}/b_{n(p)}\) for infinitely many p; by the definition (103) of \(T^*_{\mathtt {w}_n}\), this implies that \(A^{\mathtt {w}_{n(p)}}(b_{n(p)} t_p-)\! \ge \! -I^{\mathtt {r}, \mathtt {w}_{n(p)}}_\infty \) for infinitely many p and (210) implies \(A_{T^*-} \! \ge \! -I^{\mathtt {r}}_\infty \); since \(T^* = \sup \{ t \in [0, \infty ): A_t \! < \! -I^{\mathtt {r}}_\infty \}\), we get \(A_{T^*-} = -I^{\mathtt {r}}_\infty \); however, \(-I^{\mathtt {r}}_\infty \) is an exponentially distributed r.v. that is independent of A which a.s. implies that \(-I^{\mathtt {r}}_\infty \notin \{ A_{a-}; a \in (0, \infty ) \}\). This proves that a.s. \(t_p \le T^*_{\mathtt {w}_{n(p)}}/b_{n(p)}\) for all sufficiently large p. Then, Lemma B.1 (iv) implies that \(A^{\mathtt {w}_{n(p)}}(T^*_{\mathtt {w}_{n(p)}})/a_{n(p)}\! \rightarrow \! A_{T^*}\). As observed previously, it completes the proof of \(X^{\mathtt {b}, \mathtt {w}_{n(p)}} (T^*_{\mathtt {w}_{n(p)}})/ a_{n(p)} \! \rightarrow \! X^\mathtt {b} (T^*)\) and thus that of \(Q^\mathtt {b} = X^\mathtt {b}_{\Lambda ^{\mathtt {b}}}\) in the supercritical cases.

We next prove that \(Q^\mathtt {r} = X^\mathtt {r}_{\Lambda ^\mathtt {r}}\): to that end, we set \(S_3 = \{ t \in [0, \infty ) \! : \! (\Delta X^\mathtt {r}) (\Lambda ^\mathtt {r}_t) \! >\! 0 \}\). Lemma 4.3 (iv) entails that a.s. \(S_3\) is countable and by Lemma B.1 (ii), a.s. for all \(t \in [0, \infty )\backslash S_3\), we get \(X^{\mathtt {r}, \mathtt {w}_{n(p)}} \! \big ( \Lambda ^{\mathtt {r}, \mathtt {w}_{n(p)}} (b_{n(p)} t)\big )/ a_{n(p)} \! \rightarrow \! X^\mathtt {r} (\Lambda ^\mathtt {r}_t)\); this easily entails that a.s. \(Q^\mathtt {r} = X^\mathtt {r} \circ \Lambda ^\mathtt {r}\), which completes the proof of (209).

We now prove (208): without loss of generality (but with a slight abuse of notation), Skorokhod’s representation theorem allows us to assume that (209) holds \({\mathbf {P}}\)-a.s. By Lemma 4.3 (v), a.s. for all \(t \in [0, \infty )\), \(\Delta Q^\mathtt {b}_t \Delta Q^\mathtt {r}_t = 0\), and Lemma B.1 (iii) entails that

$$\begin{aligned} \big ( \big ( \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {b}, \mathtt {w}_n}_{\Lambda ^{\mathtt {b}, \mathtt {w}_n } _{b_nt }} , \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {r}, \mathtt {w}_n}_{\Lambda ^{\mathtt {r}, \mathtt {w}_n }_{b_n t}} \big )\big )_{t\in [0, \infty )} \underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (Q^\mathtt {b}_t ,Q^\mathtt {r}_t )\big )_{t\in [0, \infty )} \quad \text {a.s. on }{\mathbf {D}}([0, \infty ), {\mathbb {R}}^2).\end{aligned}$$

This implies (208), since \(X^{\mathtt {w}_n}_t \! =\! X^{\mathtt {b},\mathtt {w}_n}(\Lambda ^{\mathtt {b}, \mathtt {w}_n} _{ t}) + X^{\mathtt {r},\mathtt {w}_n}( \Lambda ^{\mathtt {r}, \mathtt {w}_n} _{ t})\) and \(X_t \! =\! X^{\mathtt {b}} (\Lambda ^{\mathtt {b}} _{ t})+ X^{\mathtt {r}}(\Lambda ^{\mathtt {r} } _{ t})\). \(\square \)

Recall the definition of the height process \(H^{\mathtt {w}_n}\) associated with \(X^{\mathtt {w}_n}\) in (114). Recall the definition of \((H_t)_{t\in [0, \infty )}\) in (138), which is the height process associated with X: H is a continuous process and (138) implies that H is an adapted measurable functional of X. Then, recall the definition of the offspring distribution \(\mu _{\mathtt {w}_n}\) in (85) and denote by \((Z^{\mathtt {w}_n}_k)_{k\in {\mathbb {N}}}\) a Galton–Watson branching process with initial state \(Z^{\mathtt {w}_n}_0 = \lfloor a_n \rfloor \) and offspring distribution \(\mu _{\mathtt {w}_n}\); recall Assumption \(\mathbf {(C4)}\) in (34): there exists \(\delta \in (0, \infty ) \) such that \(\liminf _{n\rightarrow \infty } {\mathbf {P}}( Z^{_{\mathtt {w}_n}}_{^{\lfloor b_n \delta /a_n \rfloor }} = 0 ) \! >\! 0\).
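As a side remark, the offspring law \(\mu _{\mathtt {w}_n}\) (whose explicit form is recalled at the beginning of Sect. 7.1) is a Poisson mixture: \(\mu _{\mathtt {w}_n}(k) = \sum _{j \ge 1} \nu _{\mathtt {w}_n}(j)\, e^{-w^{_{(n)}}_{^j}} (w^{_{(n)}}_{^j})^{k}/k!\), i.e. the law of a \(\mathrm {Poisson}(w^{_{(n)}}_{^J})\) random variable where J has law \(\nu _{\mathtt {w}_n}\); this makes \(Z^{\mathtt {w}_n}\) straightforward to simulate. The following minimal Python sketch uses this representation; the inputs w, z0 and generations are hypothetical placeholders and the code is purely illustrative.

```python
import numpy as np

def simulate_Z(w, z0, generations, seed=None):
    """Minimal sketch: Galton-Watson process with offspring law mu_w.

    mu_w(k) = sigma_1(w)^{-1} * sum_j w_j^{k+1} e^{-w_j} / k!  is the law of
    Poisson(w_J) with J ~ nu_w, so each individual draws a type J ~ nu_w and
    then a Poisson(w_J) number of children.  Purely illustrative.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w, dtype=float)
    p = w / w.sum()                      # the law nu_w of the types
    trajectory = [int(z0)]
    z = int(z0)
    for _ in range(generations):
        if z > 0:
            types = rng.choice(len(w), size=z, p=p)   # one type per individual
            z = int(rng.poisson(w[types]).sum())      # Poisson(w_J) children each
        trajectory.append(z)
    return trajectory
```

Assumption \(\mathbf {(C4)}\) can then be explored empirically by estimating \({\mathbf {P}}( Z^{\mathtt {w}_n}_{\lfloor b_n \delta /a_n \rfloor } = 0 )\) over many runs, although such simulations play no role in the proofs.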

Lemma 6.14

Recall \({\mathscr {Q}}_n (3)\) in (208). Under \(\mathbf {(C4)}\) and the assumptions of Lemma 6.3,

$$\begin{aligned} {\mathscr {Q}}_n (4):= & {} \big ( {\mathscr {Q}}_n (3), \frac{_{_{a_n}}}{^{^{b_n}}} H^{\mathtt {w}_n}_{b_n \cdot } , \frac{_{_{a_n}}}{^{^{b_n}}}H^{\mathtt {w}_n} \! \! \circ \! {\overline{\theta }}^{\mathtt {b}, \mathtt {w}_n}_{b_n \cdot } \big ) \nonumber \\&\underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } \big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ), {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*, \Lambda ^{\mathtt {b}}, \Lambda ^{\mathtt {r}}, ( X^{\mathtt {b}}_{\Lambda ^{\mathtt {b}}}, X^{\mathtt {r}}_{\Lambda ^{\mathtt {r}}} , X) , H, H \! \! \circ \! {\overline{\theta }}^\mathtt {b} \big ),\nonumber \\ \end{aligned}$$
(211)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}^4) \! \times \! ({\mathbf {D}}([0, \infty ), {\mathbb {R}}))^2\! \times \! [0, \infty ]^2 \! \times \! ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2 \! \times \! {\mathbf {D}}([0, \infty ), {\mathbb {R}}^3)\! \times \! ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2 \) equipped with the product topology.

Proof

We first prove that

$$\begin{aligned} {\mathscr {Q}}^\prime _n (4)= & {} \big ( {\mathscr {Q}}_n (3), \frac{_{_{a_n}}}{^{^{b_n}}} H^{\mathtt {w}_n}_{b_n \cdot } \big ) \nonumber \\&\underset{n\rightarrow \infty }{ -\!\!\! -\!\!\! \longrightarrow } {\mathscr {Q}}^\prime (4)= \big ( (X^\mathtt {b} , A, Y, {\overline{\theta }}^\mathtt {b} ), {\overline{\gamma }}^{\mathtt {r} } , X^\mathtt {r} , -I^\mathtt {r}_\infty , T^*, \Lambda ^{\mathtt {b}}, \Lambda ^{\mathtt {r}}, ( X^{\mathtt {b}}_{\Lambda ^{\mathtt {b}}}, X^{\mathtt {r}}_{\Lambda ^{\mathtt {r}}} , X) , H \big ),\nonumber \\ \end{aligned}$$
(212)

weakly on the appropriate product-space. By Proposition 2.2, the laws of the processes \( \frac{{{a_n}}}{{{b_n}}} H^{\mathtt {w}_n}_{b_n \cdot } \) are tight on \({\mathbf {C}}([0, \infty ), {\mathbb {R}}) \). Then, the laws of \({\mathscr {Q}}^\prime _n (4)\) are tight thanks to (208). We only need to prove that the law of \({\mathscr {Q}}^\prime (4)\) is the unique limit law, which is an easy consequence of (208), of the joint convergence (35) in Proposition 2.2 and of the fact that H is an adapted measurable deterministic functional of X.

To complete the proof of the lemma, we use a general (deterministic) result on Skorokhod’s convergence for the composition of functions that is recalled in Theorem B.5 (see Appendix B.1). Without loss of generality (but with a slight abuse of notation), Skorokhod’s representation theorem allows us to assume that (212) holds \({\mathbf {P}}\)-a.s.: since \( \frac{{{a_n}}}{{{b_n}}} H^{\mathtt {w}_n}_{b_n \cdot }\! \rightarrow \! H\) a.s. on \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\), since \( \frac{_{{1}}}{^{{b_n}}} {\overline{\theta }}^{_{\mathtt {b}, \mathtt {w}_n}}_{^{b_n \cdot } }\! \rightarrow \! {\overline{\theta }}^{_{\mathtt {b}}}\) a.s. on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\) and since \(H \! \circ \! {\overline{\theta }}^{_{\mathtt {b}}}\) is a.s. continuous by (152) in Theorem 4.7, Theorem B.5 (i) applies and asserts that \(\frac{{{a_n}}}{{{b_n}}}H^{\mathtt {w}_n} \circ {\overline{\theta }}^{_{\mathtt {b}, \mathtt {w}_n}}_{^{b_n \cdot } } \! \rightarrow \! H \circ {\overline{\theta }}^{_{\mathtt {b}}}\) in \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\), which completes the proof of the lemma. \(\square \)

End of the proof of Proposition 5.1 Recall the definition of \({\mathcal {H}}^{\mathtt {w}_n}\) (that is the height process associated with \(Y^{\mathtt {w}_n}\)) in (112). By Lemma 3.4, we have \({\mathcal {H}}^{\mathtt {w}_n}_t \! =\! H^{\mathtt {w}_n}( \theta ^{\mathtt {b}, \mathtt {w}_n}_t)\) for all \(t \in [0, T^*_{\mathtt {w}_n})\). On the other hand, recall the existence and the properties of \({\mathcal {H}}\) as stated in Theorem 4.7. In particular, note that \({\mathcal {H}}_t = H (\theta ^\mathtt {b}_t) \) for all \(t \in [0, T^*)\). First observe that in (sub)critical cases, the convergence (153) in Proposition 5.1 is an immediate consequence of (211) in Lemma 6.14. Thus, we only need to focus on the supercritical cases.

To simplify notation, we denote by \((Y^{{(n)}}, A^{(n)}, {\mathcal {H}}^{(n)})\) the rescaled processes on the left hand side of (153) and we also set \((Y^{(\infty )}, A^{(\infty )}, {\mathcal {H}}^{(\infty )}) = (Y, A, {\mathcal {H}})\). We fix \(t \in (0, \infty )\), a bounded continuous function \(F\! : \! {\mathbf {D}}([0, \infty ), {\mathbb {R}})^2 \! \times \! {\mathbf {C}}([0, \infty ), {\mathbb {R}})\! \rightarrow \! {\mathbb {R}}\), and for all \(n \in {\mathbb {N}}\cup \{ \infty \}\), we set \(u_n = {\mathbf {E}}\big [ F\big ( Y^{_{(n)}}_{^{\cdot \wedge t}}, A^{_{(n)}}_{^{\cdot \wedge t}}, {\mathcal {H}}^{_{(n)}}_{^{\cdot \wedge t}} \big ) \big ]\). Clearly, we only need to prove that \(u_n \! \rightarrow \! u_\infty \). To that end, we introduce, for all \(K \in (0, \infty )\), a continuous function \(\phi _K \! : \! [0, \infty ) \! \rightarrow \! [0, 1]\) such that \(\mathbf{1}_{[0, K]} \le \phi _K (\cdot ) \le \mathbf{1}_{[0, K+1]}\) and we set \(u_n (K) = {\mathbf {E}}\big [ F\big ( Y^{_{(n)}}_{^{\cdot \wedge t}}, A^{_{(n)}}_{^{\cdot \wedge t}}, {\mathcal {H}}^{_{(n)}}_{^{\cdot \wedge t}} \big ) \phi _K \big ( A^{_{(n)}}_{^t} \big ) \big ]\), for all \(n \in {\mathbb {N}}\! \cup \! \{ \infty \}\). We first observe that \(0 \le \! u_n \! -\! u_n (K) \le \Vert F \Vert _\infty {\mathbf {P}}\big ( A^{_{(n)}}_{^t} \! \ge \! K\big )\). Since \(A^{(n)}_t\! \rightarrow \! A_t\), standard arguments imply \(\limsup _{n\rightarrow \infty } |u_n \! -\! u_n (K) | \le \Vert F \Vert _\infty {\mathbf {P}}\big ( A_t \! \ge \! K\big )\). Next, recall that Theorem 4.7 asserts that \({\mathcal {H}}\) is a functional of \((Y, A)\); then recall also that \(-I^{\mathtt {r}}_\infty \) (resp. \(-I^{\mathtt {r}, \mathtt {w}_n}_\infty /a_n\)) is an exponentially distributed r.v. independent of \((Y, A)\) and thus independent of \((Y, A, {\mathcal {H}})\) (resp. independent of \((Y^{(n)}, A^{(n)}, {\mathcal {H}}^{(n)})\)) and whose parameter is \(\varrho ^{(\infty )}\! :=\! \varrho \) (resp. \(\varrho ^{(n)}\! := \! a_n\varrho _{\mathtt {w}_n}\)). We set \({\overline{{\mathcal {H}}}}^{_{(n)}}_{^{\cdot } } = \frac{{_{a_n}}}{{^{b_n}}}H^{\mathtt {w}_n} \circ {\overline{\theta }}^{_{\mathtt {b}, \mathtt {w}_n}}_{^{b_n \cdot }} \). Then, for all \(n \in {\mathbb {N}}\cup \{ \infty \}\), we have

$$\begin{aligned} u_n (K) = {\mathbf {E}}\Big [ e^{\varrho ^{_{(n)}}\! A^{_{(n)}}_{^t}}\! F\big ( Y^{_{(n)}}_{^{\cdot \wedge t}}, A^{_{(n)}}_{^{\cdot \wedge t}}, {\overline{{\mathcal {H}}}}^{_{(n)}}_{^{\cdot \wedge t}} \big ) \phi _K \big ( A^{_{(n)}}_{^t} \big ) \mathbf{1}_{\! \big \{ A^{{(n)}}_{t} < -I^{\mathtt {r}, \mathtt {w}_n}_\infty /a_n \big \} }\Big ] , \end{aligned}$$
(213)

where the right-hand side is bounded thanks to the term \(\phi _{K}\). Note that the identity (213) holds since the events \(\big \{ A^{{(n)}}_{t} \! <\! -I^{\mathtt {r}, \mathtt {w}_n}_\infty /a_n \big \} \) and \(\{ T^*_{\mathtt {w}_n}/ b_n >t \}\) coincide a.s. and, on this event, \({\overline{{\mathcal {H}}}}^{_{(n)}}_{^s } = {\mathcal {H}}^{_{(n)}}_{^s}\) for all \(s \in [0, t]\).
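For the reader’s convenience, let us spell out the cancellation behind (213): on the event \(\{ T^*_{\mathtt {w}_n}/ b_n >t \}\) one may replace \({\overline{{\mathcal {H}}}}^{_{(n)}}_{^{\cdot \wedge t}}\) by \({\mathcal {H}}^{_{(n)}}_{^{\cdot \wedge t}}\) inside F, and since \(-I^{\mathtt {r}, \mathtt {w}_n}_\infty /a_n\) is exponentially distributed with parameter \(\varrho ^{(n)}\), independent of \((Y^{(n)}, A^{(n)}, {\mathcal {H}}^{(n)})\), and since \(A^{(n)}_t \ge 0\), conditioning on \((Y^{(n)}, A^{(n)}, {\mathcal {H}}^{(n)})\) gives

$$\begin{aligned} {\mathbf {E}}\Big [ \mathbf{1}_{\big \{ A^{{(n)}}_{t} < -I^{\mathtt {r}, \mathtt {w}_n}_\infty /a_n \big \} } \, \Big | \, Y^{(n)}, A^{(n)}, {\mathcal {H}}^{(n)} \Big ] = e^{-\varrho ^{_{(n)}}\! A^{_{(n)}}_{^t}} \quad \text {a.s.,} \end{aligned}$$

which cancels the factor \(e^{\varrho ^{_{(n)}}\! A^{_{(n)}}_{^t}}\) in (213) and returns \(u_n (K)\).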

Next, recall that Proposition 2.1 (iv) asserts that \(\lim _{n\rightarrow \infty } \varrho ^{(n)} = \varrho ^{(\infty )}\). Since \({\mathbf {P}}(A_t = -I^\mathtt {r}_\infty ) = 0\), the joint convergence (211) in Lemma 6.14 combined with (213) entails that \(u_n(K) \! \rightarrow \! u_\infty (K)\) by dominated convergence. Since \(|u_\infty \! -\! u_n| \le |u_\infty \! -\! u_\infty (K)|+|u_\infty (K)\! -\! u_n(K)|+|u_n(K) \! -\! u_n|\), we get \( \limsup _{n\rightarrow \infty } |u_\infty \! -\! u_n| \le 2 \Vert F\Vert _\infty {\mathbf {P}}\big ( A_t \! \ge \! K\big ) \rightarrow 0 \) as K tends to \(\infty \). This completes the proof of (153) in supercritical cases and it also completes the proof of Proposition 5.1. \(\square \)

7 Proof of the limit theorems for the Markovian processes

The aim of this section is to prove Propositions 2.1–2.3. We will proceed as follows. In Sect. 7.1, we will address a slightly more general situation: we will temporarily remove the requirement that \(a_{n}b_{n}/\sigma _{1}(\mathtt {w}_{n})\rightarrow \kappa \). In that case, we will see that the Lévy measure \(\pi \) in the limit can take a more general form than the one in (28); in particular, \(\pi \) is not necessarily purely atomic. With this requirement back in place, we then show in Sect. 7.2 that the only possible limits are of the form (28). This then allows us to prove the aforementioned propositions in the rest of the section.

7.1 Convergence of the Markovian queueing system: the general case

We say that an \({\mathbb {R}}\)-valued spectrally positive Lévy process \((R_t)_{t\in [0, \infty )}\) with initial value \(R_0 = 0\) is integrable if for at least one \(t \in (0, \infty )\) we have \({\mathbf {E}}[|R_t|] \! < \! \infty \). This implies that \({\mathbf {E}}[|R_t|] \! < \! \infty \) for all \(t \in (0, \infty )\). In Sect. B.2.1, we recall that there is a one-to-one correspondence between the laws of \({\mathbb {R}}\)-valued spectrally positive Lévy processes \((R_t)_{t\in [0, \infty )}\) with initial value \(R_0 = 0\) that are integrable and the triplets \((\alpha , \beta , \pi )\) where \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\) and \(\pi \) is a Borel measure on \((0, \infty )\) satisfying \(\int _{(0, \infty )} \pi (dr) \, (r\! \wedge \! r^2) \! < \! \infty \). More precisely, the correspondence is given by the Laplace exponent of spectrally positive Lévy processes: for all \(t, \lambda \in [0, \infty )\),

$$\begin{aligned} {\mathbf {E}}\big [ e^{-\lambda R_t}\big ]= & {} e^{t\psi _{\alpha , \beta , \pi } (\lambda )}, \; \text {where} \quad \psi _{\alpha , \beta , \pi } (\lambda ) = \alpha \lambda + \frac{_{1}}{^{2}} \beta \lambda ^2 + \int _{(0, \infty )} (e^{-\lambda r}\! -\! 1+ \lambda r) \, \pi (dr).\nonumber \\ \end{aligned}$$
(214)

The main result used to obtain the convergence of branching processes is a theorem due to Grimvall [24], that is recalled in Theorem B.11: it states the convergence of rescaled Galton–Watson processes to Continuous State Branching Processes (CSBP for short). We say that a process \((Z_t)_{t\in [0, \infty )}\) is an integrable CSBP if it is a \([0, \infty )\)-valued Feller Markov process obtained from spectrally positive Lévy processes via Lamperti’s time-change which further satisfies \(\mathbf{E}[Z_{t}]<\infty \) for all \(t\in [0, \infty )\). The law of such a CSBP is completely characterised by the Laplace exponent of its associated Lévy process that is usually called the branching mechanism of the CSBP, which is necessarily of the form (214): see Sect. B.2.2 for a brief account on CSBP.

Let \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\). Let us recall the notation \(\nu _{\mathtt {w}_n}= \sigma _1 (\mathtt {w}_n)^{-1} \sum _{j\ge 1} w^{_{(n)}}_{^j} \delta _{j} \) and \(\mu _{\mathtt {w}_n} (k) \! =\! \sigma _1 (\mathtt {w}_n)^{-1} \sum _{j\ge 1} (w^{_{(n)}}_{^j})^{k+1} \exp (-w^{_{(n)}}_{^j})/k!\), for all \(k \in {\mathbb {N}}\). Recall the definition in Sect. 3.2 of the Markovian LIFO-queueing system associated with the set of weights \(\mathtt {w}_n\): clients arrive at unit rate; each client has a type that is a positive integer; the amount of service required by a client of type j is \(w^{_{(n)}}_j\); the types are i.i.d. with law \(\nu _{\mathtt {w}_n}\). If one denotes by \(\tau ^{n}_k\) the time of arrival of the k-th client in the queue and by \({\mathtt {J}}^n_k\) his type, then the queueing system is entirely characterised by \( {\mathscr {X}}_{\mathtt {w}_n} \! = \sum _{k\ge 1} \delta _{(\tau ^n_k , {\mathtt {J}}^n_k)}\) that is a Poisson point measure on \([0, \infty ) \! \times \! {\mathbb {N}}^*\) with intensity \(\ell \otimes \nu _{\mathtt {w}_n}\), where \(\ell \) stands for the Lebesgue measure on \([0, \infty )\). Next, for all \(j \in {\mathbb {N}}^*\) and all \(t \in [0, \infty )\), we introduce the following:

$$\begin{aligned} N^{\mathtt {w}_n}_j (t)= & {} \sum _{k\ge 1} \mathbf{1}_{\{ \tau _k^n \le t \, ; \, {\mathtt {J}}^n_k = j \}} \quad \text {and} \quad X^{\mathtt {w}_n}_t = -t + \sum _{k\ge 1} w^{_{(n)}}_{{\mathtt {J}}^n_k}\mathbf{1}_{[0, t]} (\tau ^n_k) = -t + \sum _{j\ge 1} w^{_{(n)}}_j N^{\mathtt {w}_n}_j (t) .\nonumber \\ \end{aligned}$$
(215)

Observe that \((N^{\mathtt {w}_n}_{^j})_{j\ge 1}\) are independent homogeneous Poisson processes with rates \(w^{_{(n)}}_j \! / \sigma _1 (\mathtt {w}_n)\) and \(X^{\mathtt {w}_n}\) is a càdlàg spectrally positive Lévy process.
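
For illustration only, the queueing mechanism just described can be sampled directly; the following Python sketch (with function and variable names of our own choosing) draws the unit-rate Poisson arrivals, the i.i.d. types with law \(\nu _{\mathtt {w}_n}\), and returns the load process \(X^{\mathtt {w}_n}\) of (215).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_load_process(w, T):
    """Sample the Markovian LIFO queue of Sect. 3.2 on [0, T] and return X^w, cf. (215).

    w : 1-d array of positive weights (the sequence w_n); T : time horizon.
    Clients arrive at unit rate, types are i.i.d. with law nu_w, and a client
    of type j requires an amount of service w[j].
    """
    w = np.asarray(w, dtype=float)
    sigma1 = w.sum()
    n_arrivals = rng.poisson(T)                        # unit-rate Poisson arrivals on [0, T]
    taus = np.sort(rng.uniform(0.0, T, size=n_arrivals))
    types = rng.choice(len(w), size=n_arrivals, p=w / sigma1)

    def X(t):
        # X^w_t = -t + total workload brought by the clients arrived by time t
        return -t + w[types[taus <= t]].sum()

    return taus, w[types], X

# Example: taus, jumps, X = simulate_load_process([3.0, 2.0, 1.0, 1.0], T=10.0)
```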

Let \(a_n, b_n \in (0, \infty )\), \(n \in {\mathbb {N}}\) be two sequences that satisfy the following conditions.

$$\begin{aligned} a_n \; \, \text {and} \; \, \frac{b_n}{a_n} \; \, \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! \longrightarrow }\, \infty , \quad \frac{b_n}{a^2_n} \, \; \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! \longrightarrow }\,\; \beta _0 \in [0, \infty ), \quad \text {and} \quad \sup _{n\in {\mathbb {N}}} \frac{w^{_{(n)}}_{^1} }{a_n} \! < \! \infty . \end{aligned}$$
(216)

Remark 7.1

It is important to note that these assumptions are weaker than (21): namely, we temporarily do not assume that \(\frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)}\! \rightarrow \! \kappa \in (0, \infty )\), which explains why the possible limits in the theorem below are more general. \(\square \)

Theorem 7.1

Let \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\) and \(a_n , b_n \in (0, \infty )\), \(n \in {\mathbb {N}}\), satisfy (216). Recall the definition of \(X^{\mathtt {w}_n}_t \! \) in (215); recall the definition of \(\mu _{\mathtt {w}_n}\) in (85) and let \((Z^{_{(n)}}_k)_{k\in \,{\mathbb {N}}}\) be a Galton–Watson process with offspring distribution \(\mu _{\mathtt {w}_n}\) and initial state \(Z^{_{(n)}}_0\! =\! \lfloor a_n \rfloor \). Then, the following convergences are equivalent.

\(\mathrm {(I)}\)    \(\big ( \frac{_1}{^{a_n}} Z^{_{(n)}}_{\lfloor b_n t /a_n \rfloor } \big )_{t\in [0, \infty )} \! \longrightarrow \! (Z_t)_{t\in [0, \infty )}\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\).

\(\mathrm {(II)}\)    \(\big ( \frac{_1}{^{a_n}} X^{\mathtt {w}_n}_{ b_n t } \big )_{t\in [0, \infty )} \! \longrightarrow \! (X_t)_{t\in [0, \infty )}\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\).

If \(\mathrm {(I)}\) or \(\mathrm {(II)}\) holds true, then Z is necessarily an integrable CSBP and X is an integrable \((\alpha , \beta , \pi )\)-spectrally positive Lévy process (as defined at the beginning of Sect. 7.1) whose Laplace exponent is the same as the branching mechanism of Z. Here \((\alpha , \beta , \pi )\) necessarily satisfies

$$\begin{aligned} \beta \ge \beta _0 \quad \text {and} \quad \exists \, r_0 \in (0, \infty ) \; \text {such that} \; \pi ((r_0, \infty )) = 0\; , \end{aligned}$$
(217)

which implies \(\int _{(0, \infty )} r^2 \, \pi (dr) \! < \! \infty \). Moreover, \( \mathrm {(I)}\! \Leftrightarrow \! \mathrm {(II)}\! \Leftrightarrow \!\mathrm {(IIIabc)}\! \Leftrightarrow \! \big ( \mathrm {(IIIa)} \& \mathrm {(IV)} )\) where

\({\mathrm {(IIIa)}}\)   \(\displaystyle \frac{b_n}{a_n} \Big ( 1\! -\! \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big )\; \longrightarrow \; \alpha .\)

\({\mathrm {(IIIb)}}\)   \(\displaystyle \frac{b_n}{(a_n)^2} \frac{\sigma _3 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \; \longrightarrow \; \beta + \int _{(0, \infty )} \!\! \! r^2 \, \pi (dr) \).

\({\mathrm {(IIIc)}}\) \(\displaystyle \; \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \sum _{j\ge 1} \frac{w_j^{(n)}}{a_n} f \big ( w_j^{(n)}\! / a_n \big ) \longrightarrow \!\! \int _{(0, \infty )} \!\! \!\! \! f(r) \, \pi (dr), \, \) for all continuous bounded \(f\! :\! [0, \infty ) \! \rightarrow \! {\mathbb {R}}\) vanishing in a neighbourhood of 0.

\({\mathrm {(IV)}}\)    \(\displaystyle \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \sum _{j\ge 1} \frac{w_j^{(n)}}{a_n} \big ( e^{-\lambda w^{(n)}_j /a_n}\!\! -\! 1 + \lambda \, w^{(n)}_j \!\! /a_n \big ) \longrightarrow \psi _{\alpha , \beta , \pi } (\lambda ) - \alpha \lambda ,\; \) for all \(\lambda \in (0, \infty )\), where \(\psi _{\alpha , \beta , \pi }\) is defined by (214).
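
For a concrete family of weights, the criteria of Theorem 7.1 can be checked numerically. The following Python sketch (purely illustrative; the function name and the default value of \(\lambda \) are our own choices) evaluates the prelimit quantities appearing on the left-hand sides of \(\mathrm {(IIIa)}\), \(\mathrm {(IIIb)}\) and \(\mathrm {(IV)}\).

```python
import numpy as np

def theorem_7_1_quantities(w, a_n, b_n, lam=1.0):
    """Left-hand sides of (IIIa), (IIIb) and (IV) in Theorem 7.1 (a sketch).

    w : the finite weight sequence w_n; a_n, b_n : scaling constants;
    lam : the argument lambda used in (IV).
    """
    w = np.asarray(w, dtype=float)
    s1, s2, s3 = ((w ** p).sum() for p in (1, 2, 3))   # sigma_1, sigma_2, sigma_3
    lhs_IIIa = (b_n / a_n) * (1.0 - s2 / s1)
    lhs_IIIb = (b_n / a_n ** 2) * (s3 / s1)
    x = w / a_n
    lhs_IV = (a_n * b_n / s1) * np.sum(x * (np.exp(-lam * x) - 1.0 + lam * x))
    return lhs_IIIa, lhs_IIIb, lhs_IV
```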

Remark 7.2

We recall that \(\beta _{0}\) is the limit of the ratio \(b_{n}/a_{n}^{2}\). Combining this with (217), we see that if \(b_{n}\asymp a_{n}^{2}\), then the limit process X necessarily has a non-zero Brownian component. The converse is not true in general: it is possible to have \(\beta _{0}=0<\beta \); see the construction in the proof of Propositions 2.1 and 2.2.

Proof

We easily check that \((X^{\mathtt {w}_n}_{ b_n t } /a_n)_{t\in [0, \infty )}\) is an \((\alpha _n, \beta _n, \pi _n)\)-spectrally positive Lévy process where

$$\begin{aligned} \alpha _n \! =\! \frac{b_n}{a_n} \Big ( 1\! -\! \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big ), \quad \beta _n = 0 \quad \text {and} \quad \pi _n= \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \sum _{j\ge 1} \frac{w_j^{(n)}}{a_n} \, \delta _{w^{(n)}_j \! / a_n} .\end{aligned}$$

We immediately see that \(\beta _n +\! \int r^2 \pi _n (dr) = b_n \sigma _3 (\mathtt {w}_n)/ a_n^2\sigma _1 (\mathtt {w}_n)\). Then, Theorem B.9 implies that \(\mathrm {(II)} \! \Leftrightarrow \! \mathrm {(IIIabc)}\). We then apply Lemma A.3 to \(\Delta ^n_k = (X^{\mathtt {w}_n}_{{k}}\! -\!X^{\mathtt {w}_n}_{{k-1}})/a_n\) and \(q_n = \lfloor b_n \rfloor \): it shows that the weak limit \(X^{\mathtt {w}_n}_{\lfloor b_n \rfloor }/a_n \! \rightarrow \! X_1\) is equivalent to the convergence of the Laplace exponents \(\psi _{\alpha _n, \beta _n , \pi _n} (\lambda ) \! \rightarrow \psi _{\alpha , \beta , \pi } (\lambda )\), for all \(\lambda \in [0, \infty )\). Then note that the left hand side of \(\mathrm {(IV)}\) is \(\psi _{\alpha _n, \beta _n, \pi _n} (\lambda )\! -\! \alpha _n \lambda \). This shows that \( \mathrm {(II)} \! \Leftrightarrow \! \big ( \mathrm {(IIIa)} \& \mathrm {(IV)} )\).

It remains to prove that \(\beta \! \ge \! \beta _0\) and that \(\mathrm {(I)} \! \Leftrightarrow \! \mathrm {(IIIabc)}\). Let \((\zeta ^n_k)_{k\in {\mathbb {N}}}\) be a sequence of i.i.d. random variables with law \(\mu _{\mathtt {w}_n}\) as defined in (85). By Theorem B.11, \(\mathrm {(I)} \) is equivalent to the weak convergence on \({\mathbb {R}}\) of the r.v. \(R_n\! := \! a_n^{-1} \sum _{1\le k\le \lfloor b_n \rfloor } \big (\zeta ^n_k \! -\! 1 \big )\). We next apply Lemma A.3 to \(\Delta ^n_k:=a_n^{-1} (\zeta ^n_k-1)\) and \(q_n = \lfloor b_n \rfloor \), which implies that \(\mathrm {(I)}\) is equivalent to

$$\begin{aligned} \exists \, \psi \in {\mathbf {C}}([0, \infty ), {\mathbb {R}}): \quad \psi (0) = 0 \quad \text {and} \quad \forall \lambda \in [0, \infty ), \; L_n (\lambda )\! :=\! {\mathbf {E}}\big [ e^{-\lambda R_n} \big ] \underset{^{n\rightarrow \infty }}{-\!\! \! -\!\! \!\longrightarrow } e^{\psi (\lambda )} . \end{aligned}$$
(218)

We next compute \(L_n (\lambda )\) more precisely. To that end, let \((W^n_k)_{k\in {\mathbb {N}}}\) be an i.i.d. sequence of r.v. with the same law as \(w^{_{(n)}}_{\mathtt {J^n_1}}\), where \(\mathtt {J}^n_1\) has law \(\nu _{\mathtt {w}_n}\). Namely, \( {\mathbf {E}}[ f (W^n_k)] = \sigma _1 (\mathtt {w}_n)^{-1} \sum _{j\ge 1} w^{_{(n)}}_{^j} f ( w^{_{(n)}}_{^j})\) for any nonnegative measurable function \(f\). Note that for all \(k \in {\mathbb {N}}\), \(\mu _{\mathtt {w}_n} (k) = {\mathbf {E}}[\, (W^n_1)^k e^{-W^n_1} \! / k !\, ]\), which implies that

$$\begin{aligned} L_n (\lambda )= e^{\lambda \lfloor b_n\rfloor /a_n } \big ( {\mathbf {E}}\big [ e^{-\lambda \zeta ^n_1/ a_n}\big ]\big )^{\lfloor b_n\rfloor }=e^{\lambda \lfloor b_n\rfloor /a_n } \big ( {\mathbf {E}}\big [ \exp \big ( \! -\! W_1^n \big (1-e^{-\lambda /a_n} \big ) \big )\big ]\big )^{\lfloor b_n\rfloor } . \end{aligned}$$
(219)

We next set \(S^n_1 = a_n^{-1} \sum _{1\le k\le \lfloor b_n \rfloor } \big (W^n_k \! -\! 1 \big )\) and \({\mathcal {L}}_n (\lambda )\! = \! {\mathbf {E}}[ \exp (-\lambda S^n_1)]\). By (219), we get for all \(\lambda \in [0, \infty )\),

$$\begin{aligned}{\mathcal {L}}_n \big ( a_n \big (1-e^{-\lambda /a_n} \big ) \big )= L_n (\lambda )\, \exp \big ( \, \lfloor b_n\rfloor \big (1\! -\! e^{-\lambda /a_n} \big ) \! -\! \lambda \lfloor b_n\rfloor /a_n \big )\,. \end{aligned}$$

Since \(\lfloor b_n\rfloor \big (1\! -\! e^{-\lambda /a_n} \big ) \! -\! \lambda \lfloor b_n\rfloor /a_n + \frac{_1}{^2}b_na_n^{-2} \lambda ^2\!=\! {\mathcal {O}} (b_na_n^{-3}) \! \rightarrow \! 0\) and since \(b_{n}/a_{n}^{2}\rightarrow \beta _{0}\), (218) is equivalent to

$$\begin{aligned} \exists \psi _0 \in {\mathbf {C}}([0, \infty ), {\mathbb {R}}): \quad \psi _0 (0) = 0 \quad \text {and} \quad \forall \lambda \in [0, \infty ), \; \lim _{n\rightarrow \infty } {\mathcal {L}}_n (\lambda ) = e^{\psi _0 (\lambda )} , \end{aligned}$$
(220)

and if (218) or (220) holds true, then \(\psi (\lambda ) = \psi _0 (\lambda ) + \frac{_1}{^2} \beta _0 \lambda ^2\), for all \(\lambda \in [0, \infty )\).

Next, by Lemma A.3 applied to \(\Delta ^n_k:=a_n^{-1} (W^n_k-1)\), we see that (220) is equivalent to the weak convergence \(S^{{n}}_1\! \rightarrow \! S_1\) in \({\mathbb {R}}\), and Theorem B.10 asserts that this is equivalent to the conditions \((\textit{Rw3abc})\) there with \(\xi ^n_1 = W^n_1 \! -\! 1\). Namely, there exists a triplet \((\alpha ^*, \beta ^*, \pi ^*)\) with \(\alpha ^* \in {\mathbb {R}}\), \(\beta ^* \in [0, \infty )\) such that there exists \(r_0 \in (0, \infty )\) satisfying \(\pi ^* ([r_0, \infty )) = 0\) and that the following holds true:

$$\begin{aligned} \frac{b_n}{a_n} {\mathbf {E}}[\xi ^n_1]&= \frac{b_n}{a_n} \Big ( \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} - 1 \Big )\! \rightarrow -\alpha ^*,\\ \frac{b_n}{a_n^2} \mathbf {var}( \xi ^n_1)&= \frac{b_n}{a_n^2} \frac{\sigma _3 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} - \frac{b_n}{a_n^2} \Big (\frac{\sigma _2 (\mathtt {w}_n) }{\sigma _1 (\mathtt {w}_n)}\Big )^2 \!\!\! \rightarrow \! \beta ^* + \! \! \int _{(0, \infty )} \! \!\!\! \!\!\! \!\!\! r^2 \pi ^* (dr) , \\ b_n {\mathbf {E}}\big [ f\big (\xi ^n_1/a_n \big )\big ]&= \frac{a_n b_n}{\sigma _1 (w_n)} \sum _{j\ge 1} \frac{w_j^{(n)}}{a_n} f \Big ( \frac{w_{j}^{{(n)}}\! - 1}{ a_n} \Big ) \rightarrow \int _{(0, \infty )} \!\! \! f(r) \, \pi ^* (dr), \; \end{aligned}$$

for all continuous bounded \(f\! :\! [0, \infty ) \! \rightarrow \! {\mathbb {R}}\) vanishing in a neighbourhood of 0. It is easy to see that these conditions are equivalent to \({\mathrm {(IIIabc)}}\) with \(\alpha = \alpha ^*\), \(\beta = \beta _0+ \beta ^*\) and \(\pi = \pi ^*\). This completes the proof of the theorem. \(\square \)

Next, as recalled in Sect. 3.2, the Markovian \(\mathtt {w}_n\)-LIFO queueing system governed by \({\mathscr {X}}_{\mathtt {w}_n}\) induces a Galton–Watson forest \({\mathbf {T}}_{\! \mathtt {w}_n}\) with offspring distribution \(\mu _{\mathtt {w}_n}\): informally, the clients are the vertices of \({\mathbf {T}}_{\! \mathtt {w}_n}\) and the server is the root (or the ancestor); the j-th client to enter the queue is a child of the i-th one if the j-th client enters when the i-th client is served; among siblings, the clients are ordered according to their time of arrival. We denote by \(H^{\mathtt {w}_n}_t\) the number of clients waiting in the line right after time t; in (114), recall how \(H^{\mathtt {w}_n}\) is derived from \(X^{\mathtt {w}_n}\): for all \(s \le t\), if one sets \(I^{\mathtt {w}_n, s}_t= \inf _{r\in [s, t]}X^{\mathtt {w}_n}_r \), then, \(H^{\mathtt {w}_n}_t \! = \# \{ s \in [0, t]\, : \;I^{\mathtt {w}_n, s-}_{t} \! <\! I^{\mathtt {w}_n, s}_{t} \}\). As recalled in Sect. 3.2, \(X^{\mathtt {w}_n}\) and \(H^{\mathtt {w}_n}\) are close to the Lukasiewicz path and the contour process of \({\mathbf {T}}_{\! \mathtt {w}_n}\). Therefore, the convergence results for Lukasiewicz paths and contour processes of Galton–Watson trees in Le Gall & D. [19] (see Theorem B.12, Sect. B.2.3) allow us to prove the following theorem.
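
As an illustration of the above characterisation of \(H^{\mathtt {w}_n}\), the following Python sketch (ours; the function name is an illustrative choice) computes the number of waiting clients at a fixed time directly from the arrival times and workloads. It uses the fact that, between arrivals, \(X^{\mathtt {w}_n}\) decreases at unit rate, so that the infimum of the path over an interval \([s, t]\) is attained either at a left limit just before a later arrival or at time \(t\). It can be applied, for instance, to the output of the simulate_load_process sketch given after (215).

```python
import numpy as np

def waiting_clients(taus, jumps, t):
    """Compute H^w_t = #{ s in [0, t] : I^{w, s-}_t < I^{w, s}_t } (a sketch).

    taus : increasing arrival times; jumps : the corresponding workloads.
    A client arrived at time tau_k is still in the line at time t iff the left
    limit X_{tau_k-} lies strictly below the infimum of X over [tau_k, t].
    """
    taus = np.asarray(taus, dtype=float)
    jumps = np.asarray(jumps, dtype=float)
    keep = taus <= t
    taus, jumps = taus[keep], jumps[keep]
    # Left limits X_{tau_k-} and the terminal value X_t of the load process.
    x_left = -taus + np.concatenate(([0.0], np.cumsum(jumps)[:-1]))
    x_t = -t + jumps.sum()
    # Backward scan: future_inf holds the infimum of X over [tau_k, t].
    count, future_inf = 0, x_t
    for k in range(len(taus) - 1, -1, -1):
        if x_left[k] < future_inf:
            count += 1
        future_inf = min(future_inf, x_left[k])
    return count
```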

Theorem 7.2

Let X be an integrable \((\alpha , \beta , \pi )\)-spectrally positive Lévy process, as defined at the beginning of Sect. 7.1. Assume that (217) holds and that \(\int ^\infty \! dz / \psi _{\alpha , \beta , \pi } (z) \! <\! \infty \), where \(\psi _{\alpha , \beta , \pi }\) is given by (214). Let \((H_t)_{t\in [0, \infty )}\) be the continuous height process derived from X as defined by (138).

Let \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\) and \(a_n , b_n \in (0, \infty )\), \(n \in {\mathbb {N}}\), satisfy (216). Let \((Z^{_{(n)}}_{k})_{k\in \,{\mathbb {N}}}\) be a Galton–Watson process with offspring distribution \(\mu _{\mathtt {w}_n}\) (defined by (85)), and initial state \(Z^{_{(n)}}_0\! =\! \lfloor a_n \rfloor \). Assume that the three conditions \(\mathrm {(IIIabc)}\) in Theorem 7.1 hold true and assume that there exists a \(\delta \in (0, \infty )\) such that

$$\begin{aligned} \liminf _{n\rightarrow \infty } {\mathbf {P}}\big ( Z^{_{(n)}}_{\lfloor b_n \delta /a_n \rfloor } = 0 \big ) >0 \; . \end{aligned}$$
(221)

Then, the following joint convergence holds true:

$$\begin{aligned} \Big ( (\frac{_{1}}{^{{a_n}}} X^{\mathtt {w}_n}_{b_n t } )_{t\in [0, \infty )} , (\frac{_{a_n}}{^{b_n}} H^{\mathtt {w}_n}_{b_n t })_{t\in [0, \infty )} \Big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! -\!\!\! \longrightarrow } \; (X, H) \end{aligned}$$
(222)

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}) \times {\mathbf {C}}([0, \infty ), {\mathbb {R}})\), equipped with the product topology. Furthermore, for \(t \in [0, \infty )\),

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbf {P}}\big ( Z^{_{(n)}}_{\lfloor b_n t /a_n \rfloor } = 0 \big ) = e^{-v(t)} \quad \text {where} \quad \int _{v(t)}^\infty \! \frac{dz}{\psi _{\alpha , \beta , \pi } (z)}= t . \end{aligned}$$
(223)

Proof

Recall the definition of the Lukasiewicz path \(V^{{\mathbf {T}}_{\mathtt {w}_n}}\) associated with the GW(\(\mu _{\mathtt {w}_n}\))-forest \({\mathbf {T}}_{\mathtt {w}_n}\) in (75) (Sect. 3.1). Recall the definition of its height process \(\mathtt {Hght}^{{\mathbf {T}}_{\mathtt {w}_n}}\) in (77) and recall that \(C^{{\mathbf {T}}_{\mathtt {w}_n}}\) stands for the contour process of \({\mathbf {T}}_{\! \mathtt {w}_n}\). We first assume that \(\mathrm {(IIIabc)}\) in Theorem 7.1 and that (221) hold true. Then, Theorem B.12 applies with \(\mu _n \! := \! \mu _{\mathtt {w}_n}\). In consequence, the joint convergence (265) holds true and we get (223).

Recall that \((\tau ^n_k)_{k\ge 1}\) are the arrival-times of the clients in the queue governed by \(X^{\mathtt {w}_n}\) and recall the notation \(N^{\mathtt {w}_n}(t) = \sum _{k\ge 1} \mathbf{1}_{[0, t]} (\tau ^n_k)\) in (87) that is a homogeneous Poisson process with unit rate. Then, by Lemma B.6 (see Appendix B.1) the joint convergence (265) entails the following:

$$\begin{aligned} {\mathscr {Q}}_n (5)= \big ( \frac{_{1}}{{^{a_n}}} V^{\! {{\mathbf {T}}_{\! \mathtt {w}_n}}}\! (N^{\mathtt {w}_n}_{b_n \cdot }) , \, \frac{_{{a_n}}}{^{{b_n}}} \mathtt {Hght}^{\! {{\mathbf {T}}_{\! \mathtt {w}_n}}}\! (N^{\mathtt {w}_n}_{b_n \cdot }) , \, \frac{_{{a_n}}}{^{{b_n}}} C^{{{\mathbf {T}}_{\! \mathtt {w}_n}}}_{b_n \cdot } \big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! \longrightarrow } \big ( X, H , (H_{t/2})_{t\in [0, \infty )} \big )\end{aligned}$$

weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}}) \times ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2\) equipped with the product topology. Here X is an integrable \((\alpha , \beta , \pi )\)-spectrally positive Lévy process (as defined at the beginning of Sect. 7.1) and H is the height process derived from X by (138). By Theorem 7.1, the laws of the processes \(\frac{_1}{^{a_n}}X^{\mathtt {w}_n}_{b_n \cdot }\) are tight in \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\). Thus, if one sets \({\mathscr {Q}}_n (6) = (\frac{_1}{^{a_n}}X^{\mathtt {w}_n}_{b_n \cdot }, {\mathscr {Q}}_n (5))\), then the laws of the \({\mathscr {Q}}_n (6)\) are tight on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})^2 \times ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2\). Thus, to prove the weak convergence \({\mathscr {Q}}_n (6)\! \rightarrow \! (X, X, H , H_{\cdot /2})\! :=\!{\mathscr {Q}} (6)\), we only need to prove that the law of \({\mathscr {Q}} (6)\) is the unique limit law: to that end, let \((n(p))_{p\in {\mathbb {N}}}\) be an increasing sequence of integers such that

$$\begin{aligned} {\mathscr {Q}}_{n(p)} (6) \! \underset{p\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \big ( X^\prime \! , X, H , H_{\cdot /2} \big ). \end{aligned}$$
(224)

Actually, we only have to prove that \(X^\prime = X\). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that (224) holds \({\mathbf {P}}\)-almost surely. We next use (88) in Lemma 3.1: fix \(t, \varepsilon , y \in (0, \infty )\), set \(I^{\mathtt {w}_n}_t = \inf _{s\in [0, t]} X^{\mathtt {w}_n}_s \); by applying (88) at time \(b_n t\), with \(a = a_n \varepsilon \) and \(x = a_n y\), we get the following:

$$\begin{aligned} {\mathbf {P}}\big ( \big | \frac{_{_1}}{^{^{a_n}}}V^{{\mathbf {T}}_{\! \mathtt {w}_n}}_{N^{\mathtt {w}_n} (b_n t)} \! - \frac{_{_1}}{^{^{a_n}}} X^{\mathtt {w}_n}_{b_n t} \big | \!> \! 2\varepsilon \big )\le & {} 1 \! \wedge \! \frac{_{4y}}{^{\varepsilon ^2 a_n}} + {\mathbf {P}}\big ( \! \! -\! \frac{_{_1}}{^{^{a_n}}} I^{\mathtt {w}_n}_{b_n t} \! >\! y) + {\mathbf {E}}\bigg [ 1 \wedge \frac{\frac{{1}}{{{a_n}}} (X^{\mathtt {w}_n}_{b_n t} \! -\! I^{\mathtt {w}_n}_{b_n t})}{^{\varepsilon ^2 a_n} } \bigg ] . \end{aligned}$$

By Lemma B.3 (ii), \(\frac{{1}}{{{a_{n(p)} }}} (X^{\mathtt {w}_{n(p)}}_{b_{n(p)} t} \! -\! I^{\mathtt {w}_{n(p)}}_{b_{n(p)} t}) \! \rightarrow \! X^\prime _t \! -\! I^\prime _t\) and \(\frac{{1}}{{{a_{n(p)}}}} I^{\mathtt {w}_{n(p)}}_{b_{n(p)} t} \! \rightarrow \! I^\prime _t\) almost surely, where we have set \(I^\prime _t = \inf _{s\in [0, t]} X^\prime _s\). Thus, for all \(\varepsilon \in (0, \infty )\),

$$\begin{aligned} \limsup _{p\rightarrow \infty } {\mathbf {P}}\big ( \big | \frac{_{1}}{^{{a_{n(p)}}}}V^{{\mathbf {T}}_{\! \mathtt {w}_{n(p)}}}_{N^{\mathtt {w}_{n(p)}} (b_{n(p)} t)} \! - \frac{_{1}}{^{{a_{n(p)}}}} X^{\mathtt {w}_{n(p)}}_{b_{n(p)} t} \big | \!> \! 2\varepsilon \big ) \le {\mathbf {P}}\big ( \!\! -\! I^\prime _t \! >\! y/2) \; \underset{y\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } 0. \end{aligned}$$

Compared with (224), this implies that for all \(t \in [0, \infty )\) a.s. \(X^\prime _t = X_t\) and thus, a.s. \(X^\prime = X\).

We have proved that \({\mathscr {Q}}_n (6)\! \rightarrow \! (X, X, H , H_{\cdot /2})\! =\! {\mathscr {Q}} (6)\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})^2 \times ({\mathbf {C}}([0, \infty ), {\mathbb {R}}))^2\). Without loss of generality (but with a slight abuse of notation), by Skorokhod’s representation theorem we can assume that the convergence holds true \({\mathbf {P}}\)-almost surely. In (94) and in (95) recall that

$$\begin{aligned} M^{\mathtt {w}_n}(t) = 2N^{\mathtt {w}_n} (t) \! -\! H^{\mathtt {w}_n}_t, \quad C^{{\mathbf {T}}_{\! \mathtt {w}_n}}_{M^{\mathtt {w}_n} (t)} = H^{\mathtt {w}_n}_t \quad \text {and} \quad \sup _{s\in [0, t]} H^{\mathtt {w}_n}_s \le 1+ \sup _{s\in [0, t]} \mathtt {Hght}^{{\mathbf {T}}_{\! \mathtt {w}_n}}_{N^{\mathtt {w}_n} (s)} . \end{aligned}$$

Then, we fix \(t, \varepsilon \in (0, \infty )\), and we apply (96) at time \(b_n t\), with \(a = b_n \varepsilon \) to get

$$\begin{aligned} {\mathbf {P}}\big (\! \sup _{s\in [0, t]} \! \! |\frac{_1}{^{b_n}}M^{\mathtt {w}_n}_{b_n s} \! -\! 2s |> 2\varepsilon \big ) \le 1\! \wedge \! \frac{_{16t}}{^{\varepsilon ^2 b_n}} + {\mathbf {P}}\Big (\frac{_{a_n}}{^{b_n}}+ \!\! \sup _{\; \, s\in [0, t]} \!\! \frac{_{a_n}}{^{b_n}}\mathtt {Hght}^{{\mathbf {T}}_{\! \mathtt {w}}}_{N^\mathtt {w}(b_n s)} \! > \varepsilon a_n \Big ) . \end{aligned}$$

Since \(\frac{_{a_n}}{^{b_n}}\mathtt {Hght}^{{\mathbf {T}}_{\! \mathtt {w}}} (N^\mathtt {w}(b_n \cdot )) \! \rightarrow \! H\) a.s. in \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\), it easily entails that \(\frac{_1}{^{b_n}}M^{{\mathtt {w}_n}}_{{b_n \cdot }}\) tends in probability to twice the identity map on \([0, \infty )\) in \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\). Since \(H^{\mathtt {w}_n}_t = C^{{\mathbf {T}}_{\! \mathtt {w}_n}} (M^{\mathtt {w}_n} (t))\), and since \(C^{{\mathbf {T}}_{\! \mathtt {w}_n}} (b_n \cdot ) \! \rightarrow \! H (\cdot /2)\) a.s. in \({\mathbf {C}}([0, \infty ), {\mathbb {R}})\), Lemma B.6 easily entails the joint convergence (222), which completes the proof. \(\square \)
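
The extinction-time relation (223) can also be evaluated numerically. The following sketch (an illustration under our own naming, not part of the proof) computes v(t) by combining quadrature with root bracketing; it assumes, as in the statement of Theorem 7.2, that \(\int ^\infty \! dz/ \psi _{\alpha , \beta , \pi } (z) \! <\! \infty \), and additionally that \(\psi _{\alpha , \beta , \pi }\) is positive on \((0, \infty )\) so that the integrand below stays positive.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def v_of_t(t, psi, max_doublings=60):
    """Solve int_{v(t)}^infty dz / psi(z) = t for v(t), cf. (223) (a sketch).

    psi : the branching mechanism psi_{alpha, beta, pi}, assumed positive on
    (0, infty) with a finite tail integral; t : a positive time.
    """
    def tail(v):
        return quad(lambda z: 1.0 / psi(z), v, np.inf, limit=200)[0]

    lo = hi = 1.0
    for _ in range(max_doublings):     # push hi up until tail(hi) <= t
        if tail(hi) <= t:
            break
        hi *= 2.0
    for _ in range(max_doublings):     # push lo down until tail(lo) >= t
        if tail(lo) >= t:
            break
        lo /= 2.0
    return brentq(lambda v: tail(v) - t, lo, hi)

# Example with psi(z) = z^2, for which v(t) = 1 / t:
# v_of_t(0.4, lambda z: z * z)  # approximately 2.5
```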

As explained right after Theorem 2.3.1 in Le Gall & D. [19] (see Chapter 2, pp. 54–55), Assumption (221) is actually a necessary condition for the height process to converge. However, it is not always easy to check this condition in practice. The following proposition provides a handy way of doing so.

Proposition 7.3

Let X be an integrable \((\alpha , \beta , \pi )\)-spectrally positive Lévy process, as defined at the beginning of Sect. 7.1. Assume that \((\alpha , \beta , \pi )\) satisfies (217) and that \(\int ^\infty dz / \psi _{\alpha , \beta , \pi } (z) \! <\! \infty \), where \(\psi _{\alpha , \beta , \pi }\) is given by (214). Let H be the continuous height process derived from X by (138). Let \(\mathtt {w}_n \in \! {\ell }^{_{\, \downarrow }}_f\) and \(a_n , b_n \in (0, \infty )\), \(n \in {\mathbb {N}}\), satisfy (216). We recall the definition of \(X^{\mathtt {w}_n}\) in (215) and denote by \(\psi _n\) the Laplace exponent of \((\frac{1}{a_n} X^{\mathtt {w}_n}_{b_n t })_{t\in [0, \infty )}\). Namely, for all \(\lambda \in [0, \infty )\),

$$\begin{aligned} \psi _n (\lambda ) = \frac{b_n}{a_n} \Big ( 1\! -\! \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big ) \lambda + \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \sum _{j\ge 1} \frac{w_j^{(n)}}{a_n} \big ( e^{-\lambda w^{(n)}_j /a_n}\!\! -\! 1 + \lambda \, w^{(n)}_j \!\! /a_n \big )\; . \end{aligned}$$
(225)

We assume that the three conditions \(\mathrm {(IIIabc)}\) in Theorem 7.1 hold true. Then, (221) in Theorem 7.2 holds true when

$$\begin{aligned} \lim _{y\rightarrow \infty } \limsup _{n\rightarrow \infty } \int _y^{a_n} \!\! \frac{d\lambda }{\psi _n (\lambda )} = 0 \; . \end{aligned}$$
(226)

Proof

We first prove a lemma that compares the total height of Galton–Watson trees with i.i.d. exponentially distributed edge-lengths and the total height of their discrete skeleton. More precisely, let \(\rho \in (0, \infty )\) and let \(\mu \) be an offspring distribution such that \(\mu (0) \! >\! 0\) and whose generating function is denoted by \(g_\mu (r) = \sum _{l\in {\mathbb {N}}} \mu (l) r^l \). Note that \(g_{\mu } ([0, 1])\! \subset \! [0, 1]\); let \(g_\mu ^{\circ k}\) be the k-th iterate of \(g_\mu \), with the convention that \(g_\mu ^{\circ 0} (r) = r\), \(r \in [0, 1]\). Let \(\tau \! :\! \Omega \! \rightarrow \! {\mathbb {T}}\) be a random tree whose distribution is characterised as follows.

  • The number of children of the ancestor (namely the r.v. \(k_\varnothing (\tau )\)) is a Poisson r.v. with mean \(\rho \);

  • For all \(l\! \ge \! 1\), under \({\mathbf {P}}(\, \cdot \, | \, k_\varnothing (\tau ) = l)\), the l subtrees \(\theta _{[1]} \tau , \ldots , \theta _{[l]} \tau \) stemming from the ancestor \(\varnothing \) are independent Galton–Watson trees with offspring distribution \(\mu \).

We next denote by \(Z_k\) the number of vertices of \(\tau \) that are situated at height \(k+1\): namely, \(Z_k = \# \{ u \in \tau \! : |u| = k+1\}\) (see Sect. 3.1 for the notation on trees). Then, \((Z_k)_{k\in {\mathbb {N}}}\) is a Galton–Watson process whose initial value \(Z_0\) is distributed as a Poisson r.v. with mean \(\rho \). We denote by \(\Gamma (\tau )\) the total height of \(\tau \): namely, \(\Gamma (\tau ) = \max _{u\in \tau } |u|\) is the maximal graph-distance from the root \(\varnothing \). Note that if \(\mu \) is supercritical, then \(\Gamma (\tau )\) may be infinite. Observe that \(\Gamma (\tau )\! =\! \min \{ k \in {\mathbb {N}}: Z_k = 0 \}\). Thus,

$$\begin{aligned} {\mathbf {P}}\big ( \Gamma (\tau ) < k+1 \big ) = {\mathbf {P}}(Z_k = 0) = \exp \big (\! - \rho \big ( 1\! -\! g_\mu ^{\circ k} (0) \big ) \big ) \; . \end{aligned}$$
(227)
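
The identity (227) is straightforward to evaluate: the following short Python sketch (with names of our choosing) iterates the generating function \(g_\mu \) and returns \({\mathbf {P}}(\Gamma (\tau ) < k+1)\); the Poisson offspring law in the commented example is an arbitrary illustrative choice.

```python
import math

def height_cdf(rho, g_mu, k):
    """Return P(Gamma(tau) < k + 1) = exp(-rho * (1 - g_mu^{o k}(0))), cf. (227).

    rho  : mean of the Poisson number of subtrees stemming from the ancestor;
    g_mu : generating function of the offspring law mu;
    k    : number of iterations of g_mu, starting from 0.
    """
    r = 0.0
    for _ in range(k):
        r = g_mu(r)
    return math.exp(-rho * (1.0 - r))

# Example with a Poisson(1.2) offspring law:
# height_cdf(2.0, lambda r: math.exp(1.2 * (r - 1.0)), 50)
```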

We next equip each individual u of the family tree \(\tau \) with an independent lifetime e(u) that is distributed as follows.

  • The lifetime \(e(\varnothing )\) of \(\varnothing \) is 0.

  • Conditional on \(\tau \), the r.v. e(u), \(u \in \tau \backslash \{ \varnothing \}\), are independent and exponentially distributed with parameter \(q \in (0, \infty )\).

Within our notation, the genealogical order on \(\tau \) is defined as follows: a vertex \(v \in \tau \) is an ancestor of \(u \in \tau \), which is denoted as \(v \preceq u\), if there exists \(v^\prime \in {\mathbb {U}}\) such that \(u = v *v^\prime \); \(\preceq \) is a partial order on \(\tau \). For all \(u \in \tau \!\setminus \!\{\varnothing \}\), we denote by \(\zeta (u) \! =\! \sum _{\varnothing \preceq v \preceq u}e(v)\), the time of death of u; then \(\zeta (\overleftarrow{u})\) is the time of birth of u, where \(\overleftarrow{u}\) stands for the parent of u. For all \(t \in [0, \infty )\), we next set \(\mathtt {Z}_t = \sum _{u\in \tau \backslash \{ \varnothing \}} \mathbf{1}_{[\zeta (\overleftarrow{u}), \zeta (u))} (t)\). Then \((\mathtt {Z}_t)_{t\in [0, \infty )}\) is a continuous-time Galton–Watson process (or a Harris process) with offspring distribution \(\mu \), with time parameter q and with Poisson\((\rho )\)-initial distribution. We denote by \(\Gamma = \max _{u\in \tau } \zeta (u) \) the extinction time of the population; then \(\Gamma = \max \{ t \in [0, \infty ) \! : \mathtt {Z}_t \! \ne \! 0\}\). Standard results on continuous-time GW-processes imply the following. For all \(t \in (0, \infty )\),

$$\begin{aligned} {\mathbf {P}}\big ( \Gamma \! <\! t \big ) = {\mathbf {P}}(\mathtt {Z}_t = 0 )= e^{-\rho r(t)}, \quad \text {where} \quad \int _{r(t)}^{1} \frac{dr}{g_\mu (1\! -\! r) \! -\! 1+r} = qt \; . \end{aligned}$$
(228)

For a formal proof, see for instance Athreya & Ney [5], Chapter III, Section 3, Equation (7) p. 106 and Section 4, Equation (1) p. 107.

We next compare \(\Gamma (\tau )\) and \(\Gamma \). To that end, we introduce \((\mathrm {e}_n)_{n\ge 1}\), a sequence of i.i.d. exponentially distributed r.v. with mean 1, and we set, for each \(\varepsilon \! \in \! (0, 1)\),

$$\begin{aligned} \delta (\varepsilon ) \, = \, \sup _{n\ge 1} \, {\mathbf {P}}\big ( n^{-1} (\mathrm {e}_1+ \cdots + \mathrm {e}_n) \notin (\varepsilon , \varepsilon ^{-1}) \big ) . \end{aligned}$$
(229)

The Law of Large Numbers easily implies that \( \delta (\varepsilon ) \! \rightarrow \! 0\) as \(\varepsilon \! \rightarrow \! 0\). Note that \(\mathtt {Z}_0 = Z_0\) and a.s. \(\Gamma (\tau )\!< \! \infty \) if and only if \(\Gamma \! <\! \infty \). We argue on the event \(\{\Gamma (\tau ) \! < \! \infty \}\): we first assume that \(Z_0 \! \ne \! 0\); let \(u^* \in \tau \backslash \{ \varnothing \}\) be the first vertex in the lexicographical order such that \(|u^*| = \Gamma (\tau )\); since \(\zeta (u^*) \le \Gamma \) and since conditional on \(\tau \), \(\zeta (u^*)\) is the sum of \(|u^*|\) (conditionally) independent exponential r.v. with parameter q, we get for all \(t \in (0, \infty )\),

$$\begin{aligned} {\mathbf {P}}\big (\Gamma \! < \! t \, ; \, \mathtt {Z}_0 \! \ne \! 0\big ) \le \sum _{n\ge 1} {\mathbf {P}}\big ( \, \Gamma (\tau ) = n\! \, ;\, Z_0 \! \ne \! 0 \, \big ) \, {\mathbf {P}}\big (\mathrm {e}_1+ \cdots + \mathrm {e}_n \! \le \! qt\big ) \; . \end{aligned}$$

Then, let \(\varepsilon \! \in \! (0, 1)\) and observe that \( {\mathbf {P}}\big ( \mathrm {e}_1+ \cdots + \mathrm {e}_n \! \le \! qt\big ) \le \delta (\varepsilon ) + \mathbf{1}_{\{ n \le qt / \varepsilon \}}\). Consequently,

$$\begin{aligned} {\mathbf {P}}\big ( \Gamma \! < \! t \, ;\, \mathtt {Z}_0 \! \ne \! 0\big ) \le \delta (\varepsilon ) + {\mathbf {P}}\big ( \Gamma (\tau ) \le \lfloor q t/\varepsilon \rfloor \, ;\, Z_0 \! \ne \! 0 \big ) . \end{aligned}$$

If \(\mathtt {Z}_0 = Z_0 = 0\), \(\Gamma = \Gamma (\tau ) = 0\), which implies that

$$\begin{aligned} {\mathbf {P}}\big ( \Gamma \! < \! t \big ) \le \delta (\varepsilon ) + {\mathbf {P}}\big (\Gamma (\tau ) \le \lfloor q t/\varepsilon \rfloor \big ) . \end{aligned}$$

Thus by (228) and (227), we have proved the following lemma: \(\square \)

Lemma 7.4

Let \(\rho , q \in (0, \infty )\) and let \(\mu \) be an offspring distribution such that \(\mu (0) \! >\! 0\) and whose generating function is denoted by \(g_\mu \); denote by \(g_\mu ^{\circ k}\) the k-th iterate of \(g_\mu \) with the convention \(g_\mu ^{\circ 0} (r) = r\), \(r \in [0, 1]\). Let \(t \in (0, \infty )\). Recall the definition of r(t) in (228). Let \(\varepsilon \! \in \! (0, 1)\). Recall the definition of \(\delta (\varepsilon )\) in (229). Then, for all \(t \! \in \! (0, \infty )\),

$$\begin{aligned} e^{- \rho r(t)} -\! \delta (\varepsilon ) \, \le \, \exp \big (\! -\! \rho \big ( 1\! -\! g_\mu ^{\circ \lfloor tq/\varepsilon \rfloor } (0) \, \big ) \big ). \end{aligned}$$
(230)

We are now ready to prove Proposition 7.3. Recall the definition of the offspring distribution \(\mu _{\mathtt {w}_n}\) in (85). We apply Lemma 7.4 with \(\mu = \mu _{\mathtt {w}_n}\), \(\rho = a_n\), \(q = b_n /a_n\) and we denote by \(r_n (t)\) the solution of (228) with \(g_{\mu }\) replaced by \(g_{\mu _{\mathtt {w}_n}}\). The change of variable \(\lambda = a_n r\) then implies that \(r_n (t)\) satisfies

$$\begin{aligned} \int _{a_n r_n(t)}^{a_n} \, \frac{d\lambda }{b_n \big ( g_{\mu _{\mathtt {w}_n}} \big ( 1\! -\! \frac{\lambda }{{a_n}}\big ) \! -\! 1+ \frac{\lambda }{{a_n}} \big ) } \, = t\,. \end{aligned}$$
(231)

Next, it is easy to check from (85) that \(b_n \big ( g_{\mu _{\mathtt {w}_n}} (1\! -\! \frac{\lambda }{{a_n}}) \! -\! 1+ \frac{\lambda }{a_n} \big ) \! =\! \psi _n (\lambda )\), where \(\psi _n\) is defined in (225). Then, Lemma 7.4 asserts for all \(t \in (0, \infty )\) and for all \(\varepsilon \in (0, 1)\), that

$$\begin{aligned} e^{- a_n r_n(t)} \! - \delta (\varepsilon ) \, \le \, \exp \big (\! -\! a_n \big ( 1\! -\! g_{\mu _{\mathtt {w}_n}}^{\circ \lfloor tb_n/a_n\varepsilon \rfloor }\! (0) \, \big ) \big ) \quad \text {where} \quad \int _{a_n r_n (t)}^{a_n} \frac{d\lambda }{\psi _n (\lambda )}= t \; . \end{aligned}$$
(232)
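
For the reader's convenience, we spell out the computation behind the identity \(b_n \big ( g_{\mu _{\mathtt {w}_n}} (1\! -\! \frac{\lambda }{{a_n}}) \! -\! 1+ \frac{\lambda }{a_n} \big ) \! =\! \psi _n (\lambda )\) used above; it only relies on (85) and on the normalisation \(\sigma _1 (\mathtt {w}_n)^{-1} \sum _{j\ge 1} w^{_{(n)}}_{j} = 1\). Indeed, (85) gives

$$\begin{aligned} g_{\mu _{\mathtt {w}_n}} (r) = \frac{1}{\sigma _1 (\mathtt {w}_n)} \sum _{j\ge 1} w^{_{(n)}}_{j}\, e^{-w^{_{(n)}}_{j} (1-r)}, \quad r \in [0, 1], \end{aligned}$$

so that, taking \(r = 1\! -\! \lambda /a_n\) with \(\lambda \in [0, a_n]\) and writing \(f(x) = e^{-x} \! -\! 1 + x\),

$$\begin{aligned} b_n \Big ( g_{\mu _{\mathtt {w}_n}} \big ( 1\! -\! \tfrac{\lambda }{a_n}\big ) \! -\! 1+ \tfrac{\lambda }{a_n} \Big ) = \frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)} \sum _{j\ge 1} \frac{w^{(n)}_j}{a_n}\, f \big ( \lambda w^{(n)}_j / a_n \big ) + \frac{b_n}{a_n} \Big ( 1\! -\! \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big ) \lambda = \psi _n (\lambda ) \; . \end{aligned}$$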

Next, fix \(t \in (0, \infty )\) and set \(C\! :=\! \limsup _{n\rightarrow \infty } a_n r_n (t) \in [0, \infty ]\). Suppose that \(C = \infty \). Then, there is an increasing sequence of integers \((n_k)_{k\in {\mathbb {N}}}\) such that \(\lim _{k\rightarrow \infty } a_{n_k} r_{n_k} (t) = \infty \). Let \(y \in (0, \infty )\); then, for all sufficiently large k, we have \(a_{n_k} r_{n_k} (t) \! \ge \! y\), which entails that

$$\begin{aligned} t = \int _{a_{n_k} r_{n_k} (t)}^{a_{n_k}} \frac{d\lambda }{\psi _{n_k} (\lambda )} \le \int _{y}^{a_{n_k}} \frac{d\lambda }{\psi _{n_k} (\lambda )} .\end{aligned}$$

Thus, for all \(y \in (0, \infty )\), \(t \le \limsup _{n\rightarrow \infty } \int _y^{a_n} d\lambda / \psi _n (\lambda ) \), which contradicts Assumption (226). This proves that \(C\! < \! \infty \). Since \(\lim _{\varepsilon \rightarrow 0} \delta (\varepsilon ) = 0\), we can choose \(\varepsilon \) such that \(\delta (\varepsilon ) \! < \! \frac{_1}{^2}e^{-C}\); then, we set \(\delta = t/ \varepsilon \) and (232) implies that

$$\begin{aligned} \limsup _{n\rightarrow \infty } a_n \Big ( 1\! -\! g_{\mu _{\mathtt {w}_n}}^{\circ \lfloor \delta b_n/a_n \rfloor }\! (0)\Big ) < \infty \; . \end{aligned}$$
(233)

Recall that \((Z^{_{(n)}}_{k})_{k\in {\mathbb {N}}}\) stands for a Galton–Watson branching process with offspring distribution \(\mu _{\mathtt {w}_n}\) such that \(Z^{_{(n)}}_{0} = \lfloor a_n \rfloor \). Then, \( {\mathbf {P}}\big ( Z^{_{(n)}}_{\lfloor \delta b_n / a_n \rfloor } \! =\! 0\big )\! =\! \big ( g_{\mu _{\mathtt {w}_n}}^{\circ \lfloor \delta b_n/a_n \rfloor }\! (0)\big )^{ \lfloor a_n \rfloor } \) and (233) easily implies that \(\liminf _{n\rightarrow \infty } {\mathbf {P}}\big ( Z^{_{(n)}}_{\lfloor \delta b_n / a_n \rfloor }\! \! =\! 0\big ) \! >\! 0 \), which completes the proof of Proposition 7.3. \(\square \)

7.2 Proof of Propositions 2.1 and 2.2

In this section we shall assume that the sequences \((a_n)\) and \((b_n)\) satisfy (216) and \(\frac{a_n b_n}{\sigma _1 (\mathtt {w}_n)}\! \rightarrow \! \kappa \), where \(\kappa \in (0, \infty ) \). This dramatically restricts the possible limit triplets \((\alpha , \beta , \pi )\). To see this, we first prove the following lemma:

Lemma 7.5

For all \(n\in {\mathbb {N}}\), let \(\mathtt {v}_n = (v^{_{(n)}}_{j})_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_f\) and set \(\phi _n (\lambda ) \! =\! \sum _{j\ge 1} v^{_{(n)}}_{j} \big ( e^{-\lambda v^{_{(n)}}_{j}} \!\! -\! 1 +\lambda v^{_{(n)}}_{j} \big )\), for all \(\lambda \in [0, \infty )\). Then, the following assertions are equivalent:

(L):

For all \(\lambda \in [0, \infty )\), there exists \(\phi (\lambda ) \in [0, \infty )\) such that \(\lim _{n\rightarrow \infty } \phi _n (\lambda ) = \phi (\lambda )\).

(S):

There are \({\mathbf {c}} \in {\ell }^{_{\, \downarrow }}_3\) and \(\beta ^\prime \in [0, \infty )\) such that, for \(j \! \in \! {\mathbb {N}}^*\),

$$\begin{aligned}\lim _{n\rightarrow \infty }v^{_{(n)}}_{j} = c_j \quad \text {and} \quad \lim _{n\rightarrow \infty } \big ( \sigma _3 (\mathtt {v}_n) \! -\! \sigma _3 ({\mathbf {c}}) \big ) = \beta ^\prime . \end{aligned}$$

Moreover, if (L) or (S) holds true, then \(\phi \) in (L) is given by, for \(\lambda \in [0, \infty )\),

$$\begin{aligned} \phi (\lambda )= \frac{_1}{^2} \beta ^\prime \lambda ^2 + \sum _{j\ge 1} c_j \big ( e^{-\lambda c_j} -1 +\lambda c_j \big )\; . \end{aligned}$$
(234)

Proof

We first prove \((S) \! \Rightarrow \! (L)\). For all \(x \in [0, \infty )\), we set \(f(x)\! =\! e^{-x} \! -\! 1+x\). Elementary arguments entail that, for \(x \in [0, \infty )\),

$$\begin{aligned} 0 \le \frac{_{1}}{^{2}} x^2 \! - \! f(x) \le \frac{_{_1}}{^{2}} x^2 (1\! -\! e^{-x}) \,. \end{aligned}$$
(235)

We set \(\eta (x) \! =\! \sup _{y\in [0, x]} y^{-2}| \frac{_1}{^2}y^2 \! -\! f(y) |\); thus, \(\eta (x) \le \frac{1}{2} (1\! -\! e^{-x}) \le 1 \! \wedge \! x\) and \(\eta (x) \! \downarrow \! 0\) as \(x\! \downarrow \! 0\). Then, fix \(\lambda \in [0, \infty )\) and define \(\phi (\lambda )\) by (234); fix \(j_0 \ge 2\) and observe the following:

$$\begin{aligned} \phi _n (\lambda )\! -\! \phi (\lambda )= & {} \sum _{1\le j\le j_0} \!\!\! \Big ( v^{_{(n)}}_{^j} \! f(\lambda v^{_{(n)}}_{^j} ) \! -\! c_j f(\lambda c_j)\Big ) + \frac{_{_1}}{^{^2}}\lambda ^2 \Big (\sigma _3 (\mathtt {v}_n) \! -\! \sigma _3 ({\mathbf {c}}) \! -\! \beta ^\prime + \sum _{1\le j\le j_0} \!\!\! \big ( c^3_j\! -\! (v^{_{(n)}}_{^j} )^3 \big ) \Big ) \\&+ \sum _{j>j_0} \!\! \Big ( v^{_{(n)}}_{^j} f(\lambda v^{_{(n)}}_{^j} ) \! -\! \frac{_{_1}}{^{^2}}\lambda ^2 (v^{_{(n)}}_{^j})^3 \Big ) +\! \sum _{j>j_0} \!\! \Big ( \frac{_{_1}}{^{^2}}\lambda ^2 c^3_j \! -\! c_j f(\lambda c_j) \Big ) . \end{aligned}$$

Then, note that

$$\begin{aligned} \sum _{j>j_0} \!\! \big | v^{_{(n)}}_{^j} f(\lambda v^{_{(n)}}_{^j} ) \! -\! \frac{_{_1}}{^{^2}}\lambda ^2 (v^{_{(n)}}_{^j})^3 \big | \le \lambda ^2 \eta \big (\lambda v^{_{(n)}}_{^{j_0}} \big ) \sigma _3 (\mathtt {v}_n) \; .\end{aligned}$$

Similarly, \(\sum _{j>j_0} \big | \frac{_{_1}}{^{^2}}\lambda ^2 c^3_j - c_j f(\lambda c_j) \big | \le \lambda ^{2}\eta \big (\lambda c_{j_0} \big ) \sigma _3 ({\mathbf {c}})\). Thus, by assumption,

$$\begin{aligned} \limsup _{n\rightarrow \infty } \big | \phi _n (\lambda )-\phi (\lambda ) \big | \le (\beta ^\prime + 2\sigma _3 ({\mathbf {c}})) \lambda ^{2}\eta \big (\lambda c_{j_0} \big ) \underset{^{j_0\rightarrow \infty }}{-\!\!\! -\!\! \! -\!\! \! \longrightarrow } \; \; 0 \; , \end{aligned}$$

since \(c_{j_0} \rightarrow 0\) as \(j_0\rightarrow \infty \). This proves (L) and (234).

Conversely, we assume (L). Note that \(v^{_{(n)}}_{1} \! f(v^{_{(n)}}_{1} ) \le \phi _n (1) \). Thus, \(x_0\! :=\! \sup _{n\in {\mathbb {N}}} v^{_{(n)}}_{1} \! < \! \infty \). By (235), for all \(y \in [0, x]\), \(f(y) \! \ge \! \frac{_1}{^2}e^{-x}y^{2}\), which implies \(\sigma _3 (\mathtt {v}_n) \le 2 e^{x_0} \sup _{n\in {\mathbb {N}}} \phi _n (1)\! =:\! z_0\). Consequently, for all \(n \in {\mathbb {N}}\), \((\sigma _3 (\mathtt {v}_n), \mathtt {v}_n)\) belongs to the compact space \([0, z_0] \! \times \! [0, x_0]^{{\mathbb {N}}^*}\) equipped with the product topology. Let \((q_n)_{n\in {\mathbb {N}}}\) be an increasing sequence of integers such that \(\lim _{n\rightarrow \infty } \sigma _3 (\mathtt {v}_{q_n})\! =\! a\) for some \(a\in [0, z_{0}]\) and such that for all \(j \! \ge \! 1\), \(\lim _{n\rightarrow \infty } v^{_{(q_n)}}_{j} \! =\! c^\prime _j\) for certain \(c^{\prime }_{j}\in [0, x_{0}]\). By Fatou’s Lemma, \(\sigma _3 ({\mathbf {c}}^\prime ) \le a\) and we then set \(\beta ^\prime \! =\! a\! -\! \sigma _3 ({\mathbf {c}}^\prime )\). By applying \((S) \Rightarrow (L)\) to \((\mathtt {v}_{q_n})_{n\in {\mathbb {N}}}\), we get \(\phi (\lambda ) \! =\! \frac{_1}{^2} \beta ^\prime \lambda ^2 + \sum _{j\ge 1} c^\prime _j \big ( \exp (-\lambda c^\prime _j) \! -\! 1 +\lambda c^\prime _j \big )\), for all \(\lambda \in [0, \infty )\). We easily show that it characterises \(\beta ^\prime \) and \({\mathbf {c}}^\prime \). Thus, \(((\sigma _3 (\mathtt {v}_n), \mathtt {v}_n))_{n\in {\mathbb {N}}}\), has a unique limit point in \([0, z_0] \! \times \! [0, x_0]^{{\mathbb {N}}^*}\), which easily entails (S). \(\square \)

Recall the definition of \(X^{\mathtt {w}_n}_{\, } \! \) in (215).

Lemma 7.6

Let \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\) and \(a_n , b_n \in (0, \infty )\), \(n \in {\mathbb {N}}\), satisfy (21). Then the following assertions hold true:

  1. (i)

    Let us suppose that \(\mathrm {(II)}\) in Theorem 7.1 holds true; namely, \(\frac{1}{{a_n}} X^{\mathtt {w}_n}_{ b_n \cdot } \! \longrightarrow \! X\) weakly on \({\mathbf {D}}([0, \infty ), {\mathbb {R}})\). Then, X is an integrable \((\alpha , \beta , \pi )\)-spectrally positive Lévy process (as defined at the beginning of Sect. 7.1) and \((\alpha , \beta , \pi )\) is necessarily such that

    $$\begin{aligned} \beta \! \ge \! \beta _0\;, \quad \text {there exists }\, {\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3 \text { such that } \, \; \pi = \sum _{j\ge 1} \kappa c_j \delta _{c_j} \end{aligned}$$
    (236)

    and the following statements hold true:

    $$\begin{aligned}&\mathbf {(C1):} \quad \frac{b_n}{a_n} \Big ( 1-\frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big )\; \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \alpha \qquad \mathbf {(C2):} \quad \frac{b_n}{a^2_n}\! \cdot \! \frac{\sigma _3 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \; \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \beta + \kappa \sigma _3 ({\mathbf {c}}) \, , \\&\mathbf {(C3):} \quad \forall j \in {\mathbb {N}}^*, \quad \frac{w^{(n)}_j}{a_n } \; \underset{^{n\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; c_j \, . \end{aligned}$$
  2. (ii)

    Conversely, \((\mathbf {C1})\)–\((\mathbf {C3})\) are equivalent to \(\mathrm {(II)}\) in Theorem 7.1; they are also equivalent to \(\mathrm {(I)}\), to \(\mathrm {(IIIabc)}\) and to \( (\mathrm {(IIIa)} \, \& \, \mathrm {(IV)})\).

Proof

To simplify notation, we set \(\kappa _n = a_n b_n / \sigma _1 (\mathtt {w}_n)\). By the last point of (21), \(\kappa _n \! \rightarrow \! \kappa \in (0, \infty )\). We also set \(v^{_{(n)}}_j = w^{_{(n)}}_j/a_n\) for all \(j\! \ge \! 1\). We first prove (i). Suppose that Theorem 7.1 \(\mathrm {(II)}\) holds, which first implies that \(\beta \! \ge \! \beta _0\); then recall that Theorem 7.1 \(\mathrm {(II)}\) is equivalent to \( ( (\mathbf {C1})\, \& \, \mathrm {(IV)})\) and that Theorem 7.1 \(\mathrm {(IV)}\) can be rewritten as follows: for all \(\lambda \in [0, \infty )\),

$$\begin{aligned}\kappa _n \sum _{j\ge 1} v^{_{(n)}}_{j} \big ( e^{-\lambda v^{_{(n)}}_{j}} \!\! -\! 1 +\lambda v^{_{(n)}}_{j} \big ) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! \longrightarrow } \psi _{\alpha , \beta , \pi } (\lambda )-\alpha \lambda \; .\end{aligned}$$

This entails Condition (L) in Lemma 7.5 with \(\phi (\lambda ) = ( \psi _{\alpha , \beta , \pi } (\lambda )-\alpha \lambda ) / \kappa \). Lemma 7.5 then implies that there are \({\mathbf {c}} \in {\ell }^{_{\, \downarrow }}_3\) and \(\beta ^\prime \in [0, \infty )\) such that for all \(j \! \in \! {\mathbb {N}}^*\), \(\lim _{n\rightarrow \infty }v^{_{(n)}}_{j} = c_j \) and \( \lim _{n\rightarrow \infty } \sigma _3 (\mathtt {v}_n) \! -\! \sigma _3 ({\mathbf {c}}) = \beta ^\prime \) and that

$$\begin{aligned} \frac{_1}{^2} \kappa ^{-1}\beta \lambda ^2 + \kappa ^{-1} \!\! \int _{(0, \infty )} (e^{-\lambda r} \! \! -\! 1 + \lambda r) \, \pi (dr)= & {} \frac{\psi _{\alpha , \beta , \pi } (\lambda )\! -\! \alpha \lambda }{ \kappa } = \phi (\lambda ) = \frac{_1}{^2} \beta ^\prime \lambda ^2+ \sum _{j\ge 1} c_j \big ( e^{-\lambda c_j} \! -\! 1 +\lambda c_j \big )\; . \end{aligned}$$

This easily entails that \(\kappa \beta ^\prime = \beta \), \(\pi = \sum _{j\ge 1} \kappa c_j \delta _{c_j}\) and we easily get \((\mathbf {C2})\) and \((\mathbf {C3})\).

We next prove (ii): we assume that \(\beta \! \ge \! \beta _0\) and that \(\pi = \sum _{j\ge 1} \kappa c_j \delta _{c_j}\), where \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\). Then observe that \((\mathbf {C1})\) is \(\mathrm {(IIIa)}\) in Theorem 7.1, that \((\mathbf {C2})\) is \(\mathrm {(IIIb)}\) in Theorem 7.1; moreover, \((\mathbf {C3})\) easily entails \(\mathrm {(IIIc)}\) in Theorem 7.1. Then (ii) follows from Theorem 7.1. This completes the proof of the lemma. \(\square \)

Proof of Propositions 2.1 and 2.2. We note that Lemma 7.6 combined with Theorem 7.1 implies Proposition 2.1 (i), (ii) and (iii).

Let us prove Proposition 2.1 (iv). Since \(\psi _{\mathtt {w}_n}\) is a convex function, the convergence (32) in Proposition 2.1 (iii) is uniform in \(\lambda \) on all compact subsets of \([0, \infty )\), which easily entails the convergence of the inverses: \(\lim _{n\rightarrow \infty } a_n \psi ^{-1}_{\mathtt {w}_n} (\lambda / b_n) = \psi ^{-1} (\lambda )\).

Next observe that Lemma 7.6 combined with Theorem 7.2 implies Proposition 2.2.

It remains to prove Proposition 2.1 (v). Let \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\), and \({\mathbf {c}} = (c_j)_{j\ge 1} \in {\ell }^{_{\, \downarrow }}_3\). Let us show that there are sequences \(a_n , b_n \in (0, \infty ) \), \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), that satisfy (21) with \(\beta _0 \in [0, \beta ]\) and \(\mathbf {(C1)}\), \(\mathbf {(C2)}\) and \(\mathbf {(C3)}\) and \(\sqrt{{\mathbf {j}}_n}/ b_n \! \rightarrow \! 0\) where we recall the notation \({\mathbf {j}}_n = \max \{ j\! \ge \! 1\,: \! w^{_{(n)}}_{^j} \! >\! 0 \}\). To that end, first let \((\rho _n)_{n\in {\mathbb {N}}}\) be a sequence of positive integers such that \(\rho _n \le n\), \(\lim _{n \rightarrow \infty } \rho _n = \infty \) and \(\sum _{1\le j \le \rho _n} c_j + c_j^2 \le n \), for all \(n \! \ge \! c_1+c_1^2\). To construct the sequence \((\mathtt {w}_{n})_{n\in {\mathbb {N}}}\) which will have the desirable limits, let us start with the following definition:

$$\begin{aligned} q_{^j}^{_{(n)}}=\left\{ \begin{array}{l@{\quad }l} c_j &{} \text {if} \quad j \! \in \! \big \{ 1, \ldots , \rho _n \! \big \} , \\ ((\beta \! -\! \beta _0)/ \kappa )^{\frac{1}{3}}n^{-1} &{} \text {if} \quad j \! \in \! \big \{\rho _n +1 , \ldots , \rho _n + n^3 \big \}, \\ u_n &{} \text {if} \quad j \! \in \! \big \{ \rho _n +n^3 +1 , \ldots , \rho _n + n^3+n^8 \big \}, \\ \; \; \; 0 &{} \text {if} \quad j > \rho _n + n^3 + n^8, \end{array}\right. \end{aligned}$$
(237)

where \(u_n = n^{-3}\) if \(\beta _0 = 0\) and \(u_n = (\beta _0/ \kappa )^{\frac{1}{3}}n^{-8/3}\) if \( \beta _0 \! >\! 0\). We denote by \(\mathtt {v}_n \! =\! (v^{_{(n)}}_{^j})_{j\ge 1}\) the nonincreasing rearrangement of \(\mathtt {q}_n \! =\! (q^{_{(n)}}_{^j})_{j\ge 1}\). Thus, we get \(\sigma _p (\mathtt {v}_n) = \sigma _p (\mathtt {q}_n)\) for any \(p\! \in \! (0, \infty )\) and we observe the following:

$$\begin{aligned} \kappa \sigma _1 (\mathtt {v}_n)\sim & {} \left\{ \begin{array}{ll} \kappa n^5 \!\!\! &{}\quad \text {if }\beta _0 = 0, \\ \kappa ^{\frac{2}{3}} \beta _0^{\frac{1}{3}} n^{\frac{16}{3}} \!\!\! &{}\quad \text {if }\beta _0 \!> \! 0, \end{array}\right. \nonumber \\ \kappa \sigma _2 (\mathtt {v}_n)\sim & {} \left\{ \begin{array}{l@{\quad }l} \kappa n^2 &{} \text {if }\beta _0 = 0, \\ \kappa ^{\frac{1}{3}} \beta _0^{\frac{2}{3}} n^{\frac{8}{3}} &{} \text {if }\beta _0 \! > \! 0, \end{array}\right. \quad \text {and} \quad \kappa \sigma _3 (\mathtt {v}_n) \sim \kappa \sigma _3 ({\mathbf {c}}) + \beta \; . \end{aligned}$$
(238)

We next set

$$\begin{aligned} b_n = \kappa \sigma _1 (\mathtt {v}_n), \quad a_n = \frac{\kappa \sigma _1 (\mathtt {v}_n) }{\kappa \sigma _2 (\mathtt {v}_n) +\alpha } \quad \text {and} \quad w^{_{(n)}}_{^j}= a_n v^{_{(n)}}_{^j}, \;\text { for }j\! \ge \! 1 . \end{aligned}$$
(239)

We then see that \(a_nb_n/\sigma _1 (\mathtt {w}_n) = \kappa \) and that \(\sup _{n\in {\mathbb {N}}} w^{_{(n)}}_{^1}/a_n \! < \! \infty \). Moreover, we get

$$\begin{aligned}&\frac{b_n}{a_n} \Big (1- \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} \Big )= \alpha , \quad \lim _{n\rightarrow \infty } \frac{{b_n}}{{a^2_n}}\! \cdot \! \frac{{\sigma _3 (\mathtt {w}_n)}}{{\sigma _1 (\mathtt {w}_n)}} = \beta + \kappa \sigma _3 ({\mathbf {c}}) \quad \text {and} \quad \lim _{n\rightarrow \infty } \frac{w^{_{(n)}}_{^j}}{a_n}= c_j, \text { for all }j\in {\mathbb {N}}^{*}, \end{aligned}$$

which are the limits \((\mathbf {C1})\), \((\mathbf {C2})\) and \((\mathbf {C3})\). It is easy to derive from (238) and from (239) that \(a_n\) and \(b_n /a_n\) tend to \(\infty \) and that \(b_n / a_n^2\) tends to \(\beta _0\). Moreover, since \({{\mathbf {j}}_n}\! \le \! n^8+n^3 + n\), it is also easy to check that \(\sqrt{{\mathbf {j}}_n}/b_n \! \rightarrow \! 0\). This completes the proof of Proposition 2.1 (v). \(\square \)
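
A direct numerical implementation of the construction (237)–(239) may also help the reader check the limits \((\mathbf {C1})\)–\((\mathbf {C3})\) on examples. The following Python sketch is only an illustration of that construction: the names are ours, \(\rho _n\) is taken to be \(\min (n, \mathrm {length}({\mathbf {c}}))\) for simplicity (an admissible choice once n is large enough), and n must be kept small since the sequence has on the order of \(n^8\) nonzero terms.

```python
import numpy as np

def build_weights(c, alpha, beta, beta0, kappa, n):
    """Sketch of the construction (237)-(239): returns (w_n, a_n, b_n).

    c     : finite nonincreasing array standing for the sequence c in l3;
    alpha : any real; 0 <= beta0 <= beta; kappa > 0; n : a (small) integer.
    """
    rho_n = min(n, len(c))                     # simple admissible choice of rho_n
    blocks = [np.asarray(c, dtype=float)[:rho_n],
              np.full(n ** 3, ((beta - beta0) / kappa) ** (1.0 / 3.0) / n),
              np.full(n ** 8, n ** -3.0 if beta0 == 0
                      else (beta0 / kappa) ** (1.0 / 3.0) * n ** (-8.0 / 3.0))]
    v = np.sort(np.concatenate(blocks))[::-1]  # nonincreasing rearrangement, cf. the proof
    s1, s2 = v.sum(), (v ** 2).sum()
    b_n = kappa * s1                           # cf. (239)
    a_n = kappa * s1 / (kappa * s2 + alpha)
    return a_n * v, a_n, b_n                   # w^{(n)}_j = a_n * v^{(n)}_j

# Example (n kept small): w, a_n, b_n = build_weights([2.0, 1.0, 0.5], 1.0, 0.5, 0.0, 1.0, 6)
```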

7.3 Proof of Proposition 2.3

7.3.1 Proof of Proposition 2.3 (i)

Fix \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}}\! =\! (c_j)_{j\ge 1} \! \in \! {\ell }^{_{\, \downarrow }}_3\). For all \(\lambda \! \in \! [0, \infty )\), set \(\psi (\lambda ) = \alpha \lambda + \frac{_1}{^2} \beta \lambda ^2 + \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j} \! -\! 1 + \lambda c_j \big )\) and we assume that \(\int ^\infty \! d\lambda / \psi (\lambda ) \! < \! \infty \). Let \(a_n , b_n \in (0, \infty )\) and \(\mathtt {w}_n \in {\ell }^{_{\, \downarrow }}_f\), \(n \in {\mathbb {N}}\), satisfy (21), \((\mathbf {C1})\), \((\mathbf {C2})\) and \((\mathbf {C3})\) (as recalled in Lemma 7.6). Recall the definition of \(X^{\mathtt {w}_n}\) in (215). Recall the definition of \(\psi _n\) in (225) that is the Laplace exponent of \(\frac{1}{a_n} X^{\mathtt {w}_n}_{b_n \cdot }\). To simplify notation, set \(\alpha _n = \frac{b_n}{a_n} ( 1\! -\! \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n)} )\). It remains to prove the last point of Proposition 2.3 (i): assume that \(\beta _0 \! >\! 0\) in (21); let \(V_n\! : \! \Omega \! \rightarrow [0, \infty )\) be a r.v. with law \(\sigma _1 (\mathtt {w}_n)^{-1}\sum _{j\ge 1} w^{_{(n)}}_j \delta _{ w^{_{(n)}}_j / a_n}\). First, observe the following:

$$\begin{aligned} {\mathbf {E}}[V_n] = \frac{\sigma _2 (\mathtt {w}_n)}{a_n \sigma _1 (\mathtt {w}_n)} = \frac{1}{a_n} \Big (1 \! -\! \alpha _n \frac{a_n}{b_n} \Big ) \quad \text {and} \quad \psi _n (\lambda ) \! -\! \alpha _n \lambda = b_n {\mathbf {E}}\big [ f\big ( \lambda V_n\big )\big ] , \end{aligned}$$

where we recall that \(f (x) = e^{-x}\! -\! 1 + x\). Since f is convex, Jensen's inequality yields \(\psi _n (\lambda )\! -\! \alpha _n \lambda \! \ge \! b_n f (\lambda {\mathbf {E}}[V_n])\). Moreover, (235) implies \(f(\lambda {\mathbf {E}}[V_n]) \ge \frac{1}{2}(\lambda {\mathbf {E}}[V_n])^2 \exp (-\lambda {\mathbf {E}}[V_n])\). Since \({\mathbf {E}}[V_n] \! \sim \! 1/a_n\), since \(\alpha _n \! \rightarrow \! \alpha \) by \((\mathbf {C1})\) and since \(b_n /a_n^2\! \rightarrow \! \beta _0 \! >\! 0\), there is \(n_1 \in {\mathbb {N}}\) such that for all \(n\! \ge \! n_1\), we get \(1/2 \le a_n {\mathbf {E}}[V_n] \le 2\), \(\alpha _n \! \ge \! -2(\alpha )_-\) and \(b_n /a_n^2 \! \ge \! \beta _0 /2\). Thus, for all \(n\! \ge \! n_1\) and for all \(\lambda \in [0, a_n]\),

$$\begin{aligned} \quad \psi _n (\lambda ) \ge -2(\alpha )_- \lambda + \frac{_1}{^{16e^2}} \beta _0 \lambda ^2 \; , \end{aligned}$$

which clearly implies (38). This completes the proof of Proposition 2.3 (i). \(\square \)

7.3.2 Proof of Proposition 2.3 (ii)

Let us mention that, here, we closely follow the counterexample given in Le Gall & D. [19], p. 55. Fix \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}}\! =\! (c_j)_{j\ge 1} \! \in \! {\ell }^{_{\, \downarrow }}_3\). For all \(\lambda \! \in \! [0, \infty )\), set \(\psi (\lambda ) = \alpha \lambda + \frac{_1}{^2} \beta \lambda ^2 + \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j} \! -\! 1 + \lambda c_j \big )\); assume that \(\int ^\infty \! d\lambda / \psi (\lambda ) \! < \! \infty \). For all positive integers n, we next define \({\mathbf {c}}_n\! =\! (c^{_{(n)}}_j)_{j\ge 1}\) by setting

$$\begin{aligned} c^{_{(n)}}_j = c_j \; \, \text {if }j \le n, \quad c^{_{(n)}}_j = (\beta / (\kappa n))^{\frac{1}{3}} \; \, \text {if }n\! <\! j \le 2n \quad \text {and} \quad c^{_{(n)}}_j = 0 \;\, \text {if }j\! >\! {2n}. \end{aligned}$$

We also set \(\psi _n (\lambda ) = \alpha \lambda + \sum _{j\ge 1} \kappa c^{_{(n)}}_j \big ( \exp (-\lambda c^{_{(n)}}_j) \! -\! 1 + \lambda c^{_{(n)}}_j \big )\), \(\lambda \in [0, \infty )\). Let \((U_t^n)_{t\in [0, \infty )}\) be a CSBP with branching mechanism \(\psi _n\) and with initial state \(U^n_0 = 1\). As \(\lambda \! \rightarrow \! \infty \), observe that \(\psi _n (\lambda ) \! \sim \! (\alpha + \kappa \sigma _2 ({\mathbf {c}}_n) )\lambda \). Thus, \(\int ^\infty \! d\lambda / \psi _n (\lambda ) \! =\! \infty \); by standard results on CSBP (recalled in Sect. B.2.2 in the Appendix), it follows that, for all \(n\in {\mathbb {N}}\) and \(t\! \in \! [0, \infty )\),

$$\begin{aligned} {\mathbf {P}}\big ( U^n_t \! >\! 0\big ) = 1 \; . \end{aligned}$$
(240)

Let \(Z = (Z_t)_{t\in [0, \infty )}\) stand for a CSBP with branching mechanism \(\psi \) and with initial state \(Z_0 = 1\). Observe that for all \(\lambda \in [0, \infty )\), \(\lim _{n\rightarrow \infty } \psi _n (\lambda ) = \psi (\lambda )\). Standard results on CSBP (see Helland [25], Theorem 6.1, p. 96) yield

$$\begin{aligned} U^n \underset{^{n \rightarrow \infty }}{ -\!\! \!-\!\! \! -\!\! \!\longrightarrow } Z \; \, \text {weakly on }{\mathbf {D}}([0, \infty ) , {\mathbb {R}}). \end{aligned}$$
(241)

Next, for each \(n\in {\mathbb {N}}\), we construct a sequence of Galton–Watson processes \((Z^{(n, p)})_{p\ge 1}\) to approximate \(U^{n}\). To that end, note that by Proposition 2.1 (iv) there exist sequences \(\mathtt {w}_{n, p} = (w^{_{(n,p)}}_{^j})_{j\ge 1} \! \in \! {\ell }^{_{\, \downarrow }}_f\) and \(a_{n, p} , b_{n, p} \! \in \! (0, \infty )\), \(p \! \in \! {\mathbb {N}}\), such that

$$\begin{aligned}&\frac{a_{n, p} b_{n, p}}{\sigma _1 (w_{n, p})} \rightarrow \kappa , \quad a_{n, p} , \frac{b_{n, p}}{a_{n, p}} \; \text {and} \; \frac{a^2_{n, p}}{b_{n, p}} \underset{^{p\rightarrow \infty }}{-\!\! \! -\!\! \!\longrightarrow } \infty , \quad \frac{b_{n, p}}{a_{n,p}} \Big ( 1-\frac{\sigma _2 (\mathtt {w}_{n , p})}{\sigma _1 (\mathtt {w}_{n, p})} \Big )\; \underset{^{p\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \alpha \end{aligned}$$
(242)
$$\begin{aligned}&\frac{b_{n, p}}{a^2_{n , p}}\! \cdot \! \frac{\sigma _3 (\mathtt {w}_{n, p})}{\sigma _1 (\mathtt {w}_{n, p})} \; \underset{{p\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \kappa \sigma _3 ({\mathbf {c}}_n) \quad \text {and} \quad \forall j \in {\mathbb {N}}^*, \quad \frac{w^{_{(n, p)}}_j}{a_{n, p} } \; \underset{{p\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; c^{_{(n)}}_j , \end{aligned}$$
(243)

and the following weak limit holds true on \({\mathbf {D}}([0, \infty ) , {\mathbb {R}})\):

$$\begin{aligned} \Big ( \frac{_1}{^{a_{n, p}}} Z^{_{(n, p)}}_{\lfloor b_{n, p} t/a_{n, p}\rfloor } \Big )_{t\in [0, \infty )}\underset{^{p\rightarrow \infty }}{-\!\!\! -\!\! \! -\!\! \!\longrightarrow } (U^n_t)_{t\in [0, \infty )} \;, \end{aligned}$$
(244)

where \( (Z^{_{(n, p)}}_{^k})_{k\in {\mathbb {N}}}\) is a Galton–Watson branching process with \( Z^{_{(n, p)}}_{^0} = \lfloor a_{n ,p} \rfloor \) and with offspring distribution \(\mu _{\mathtt {w}_{n, p}}\) as in (85). We also have \(\lim _{p\rightarrow \infty }\sqrt{{\mathbf {j}}_{n,p}}/ b_{n,p} = 0\), where \({\mathbf {j}}_{n,p} = \max \{ j \! \ge \! 1 \! :\! w^{_{(n,p)}}_{^j} \! >\! 0 \}\). By the Portmanteau theorem, for all \(t\! \in [0, \infty )\), \(\liminf _{p\rightarrow \infty } {\mathbf {P}}\big (Z^{_{(n, p)}}_{\lfloor b_{n, p} t/a_{n, p}\rfloor } \!>\! 0 \big ) \! \ge \! {\mathbf {P}}(U^n_t \! >\! 0)=1\), by (240). Thus, there exists \(p_n \! \in \! {\mathbb {N}}\) such that for \(p \! \ge \! p_n\),

$$\begin{aligned} {\mathbf {P}}\big (Z^{_{(n, p)}}_{\lfloor b_{n, p} n/a_{n, p}\rfloor } \! >\! 0 \big ) \ge 1-2^{-n} \; . \end{aligned}$$
(245)

Without loss of generality we can furthermore assume that \(\sqrt{{\mathbf {j}}_{n,p_n}}/ b_{n,p_n} \le 2^{-n}\) and

$$\begin{aligned}&a_{n, p_n}, \; \frac{b_{n, p_n}}{a_{n, p_n}} \; \text { and} \; \frac{a^2_{n, p_n}}{b_{n, p_n}} \ge 2^n , \quad \Big | \frac{b_{n, p_n}}{a_{n,p_n}} \Big ( 1-\frac{\sigma _2 (\mathtt {w}_{n , p_n})}{\sigma _1 (\mathtt {w}_{n, p_n})} \Big ) - \alpha \Big | \le 2^{-n} , \Big | \frac{a_{n, p_n} b_{n, p_n}}{\sigma _1 (w_{n, p_n})} - \kappa \Big | \le 2^{-n}\\&\Big | \frac{b_{n, p_n}}{a^2_{n , p_n}}\! \cdot \! \frac{\sigma _3 (\mathtt {w}_{n, p_n})}{\sigma _1 (\mathtt {w}_{n, p_n})}- \kappa \sigma _3 ({\mathbf {c}}_n)\Big | \le 2^{-n} \quad \text {and} \quad \forall j \in \{ 1, \ldots , n\}, \; \Big | \frac{w^{_{(n, p_n)}}_j}{a_{n, p_n} } - c^{_{(n)}}_j \Big | \le 2^{-n} . \end{aligned}$$

Set \(a_n \! =\! a_{n ,p_n}\), \(b_n = b_{n , p_n}\) and \(\mathtt {w}_n \! =\! \mathtt {w}_{n, p_n}\). Note that \(\kappa \sigma _3 ({\mathbf {c}}_n)\! \rightarrow \beta + \kappa \sigma _3 ({\mathbf {c}})\) as \(n\! \rightarrow \! \infty \). Thus, \(a_n , b_n \) and \(\mathtt {w}_n\) satisfy (21) with \(\beta _0 = 0\), \(\mathbf {(C1)}\), \(\mathbf {(C2)}\), \(\mathbf {(C3)}\) and \(\sqrt{{\mathbf {j}}_{n}}/ b_{n}\! \rightarrow \! 0\). Set \( Z^{_{(n)}}_{k} = Z^{_{(n, p_n)}}_{k}\). By (245), for all \(\delta \! \in \! (0, \infty )\), and all integers \(n \! \ge \! \delta \), we easily get \( {\mathbf {P}}\big (Z^{_{(n)}}_{\lfloor b_{n} \delta /a_{n}\rfloor } \! =\! 0 \big ) \le {\mathbf {P}}\big (Z^{_{(n)}}_{\lfloor b_{n} n/a_{n}\rfloor } \! =\! 0 \big ) \le 2^{-n} \). Consequently, \(\lim _{n\rightarrow \infty } {\mathbf {P}}\big (Z^{_{(n)}}_{\lfloor b_{n} \delta /a_{n}\rfloor } \! =\! 0 \big ) = 0\), for all \(\delta \in (0, \infty )\). Namely, \(\mathbf {(C4)}\) is not satisfied, which completes the proof of Proposition 2.3 (ii). \(\square \)

7.3.3 Proof of Proposition 2.3 (iii)

Fix \(\alpha \in {\mathbb {R}}\), \(\beta \in [0, \infty )\), \(\kappa \in (0, \infty )\) and \({\mathbf {c}}\! =\! (c_j)_{j\ge 1} \! \in \! {\ell }^{_{\, \downarrow }}_3\). For all \(\lambda \! \in \! [0, \infty )\), set \(\psi (\lambda ) = \alpha \lambda + \frac{_1}{^2} \beta \lambda ^2 + \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j} \! -\! 1 + \lambda c_j \big )\); assume that \(\int ^\infty \! d\lambda / \psi (\lambda ) \! < \! \infty \). We consider several cases.
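As a side illustration (not part of the proof), the following Python sketch numerically probes the integral condition \(\int ^\infty \! d\lambda / \psi (\lambda ) \! < \! \infty \) for one hypothetical choice of parameters, namely \(\alpha = 0\), \(\beta = 1\), \(\kappa = 1\) and \(c_j = j^{-0.4}\) truncated at \(10^4\) terms (a sequence in \({\ell }^{_{\, \downarrow }}_3\)); the truncated integrals stabilise as the upper limit grows, consistent with convergence.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameters (illustration only): alpha, beta, kappa and a weight
# sequence c_j = j^{-0.4}, truncated at J terms, which belongs to l^3_downarrow.
alpha, beta, kappa, J = 0.0, 1.0, 1.0, 10_000
c = np.arange(1, J + 1) ** (-0.4)

def psi(lam):
    # psi(lambda) = alpha*lambda + (1/2)*beta*lambda^2
    #             + kappa * sum_j c_j * (exp(-lambda*c_j) - 1 + lambda*c_j)
    return alpha * lam + 0.5 * beta * lam ** 2 + kappa * np.sum(
        c * (np.exp(-lam * c) - 1.0 + lam * c))

# The truncated integrals int_1^Lambda dlambda/psi(lambda) stabilise as Lambda
# grows, consistent with int^infty dlambda/psi(lambda) < infty (here psi grows
# at least quadratically since beta > 0).
for Lam in (1e2, 1e3, 1e4):
    val, _ = quad(lambda l: 1.0 / psi(l), 1.0, Lam, limit=200)
    print(f"integral over [1, {Lam:g}]: {val:.6f}")
```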

\(\bullet \) Case 1: we first assume that \(\beta \! \ge \! \beta _0 \! >\! 0\). By Proposition 2.1 (iv), there exist \(a_n, b_n , \mathtt {w}_n\) satisfying (21) with \(\beta _0 \! >\! 0\), \((\mathbf {C1})\), \((\mathbf {C2})\) and \((\mathbf {C3})\). Moreover, Proposition 2.3 (i) (proved in Sect. 7.3.1) asserts that \(a_n, b_n , \mathtt {w}_n\) necessarily satisfy \((\mathbf {C4})\). This proves Proposition 2.3 (iii) in Case 1.

\(\bullet \) Case 2: we next assume that \(\beta \! >\! 0\) and \(\beta _0 = 0\). Let us first introduce the following weights, defined in (246) below:

$$\begin{aligned} q_{^j}^{_{(n)}}=\left\{ \begin{array}{l@{\quad }l} c_j &{} \text {if} \quad j \! \in \! \big \{ 1, \ldots , n \! \big \} , \\ (\beta /\kappa )^{\frac{1}{3}}n^{-1} &{} \text {if} \quad j \! \in \! \big \{n +1 , \ldots , n + n^3 \big \}, \\ n^{-3} &{} \text {if} \quad j \! \in \! \big \{ n +n^3 +1 , \ldots , n + n^3+n^8 \big \}, \\ \; \; \; 0 &{} \text {if} \quad j > n + n^3 + n^8. \end{array}\right. \end{aligned}$$
(246)

Denote by \(\mathtt {v}_n \! =\! (v^{_{(n)}}_{^j})_{j\ge 1}\) the nonincreasing rearrangement of \(\mathtt {q}_n \! =\! (q^{_{(n)}}_{^j})_{j\ge 1}\). Thus, \(\sigma _p (\mathtt {v}_n) = \sigma _p (\mathtt {q}_n)\) for any \(p\! \in \! (0, \infty )\). Since \(\sum _{1\le j\le n} c_j^p \le c_1^p n \), we easily get \(\kappa \sigma _1 (\mathtt {v}_n)\! \sim \! \kappa n^5\), \(\kappa \sigma _2 (\mathtt {v}_n) \! \sim \! \kappa n^2 \) and \(\kappa \sigma _3 (\mathtt {v}_n) \! \rightarrow \! \beta + \kappa \sigma _3 ({\mathbf {c}} )\). We next set \(b_n \! =\! \kappa \sigma _1 (\mathtt {v}_n)\), \(a_n \! =\! \kappa \sigma _1 (\mathtt {v}_n)/ (\kappa \sigma _2 (\mathtt {v}_n) + \alpha ) \) and for all \(j\! \ge \! 1\), \(w^{_{(n)}}_{j} \! =\! a_n v^{_{(n)}}_{j}\). Note that \(a_n \sim n^3\). Then, it is easy to check that \(a_n , b_n \) and \(\mathtt {w}_n\) satisfy (21) with \(\beta _0 = 0\), \((\mathbf {C1})\), \((\mathbf {C2})\) and \((\mathbf {C3})\). Since \({\mathbf {j}}_n = \max \{j\! \ge \! 1 \! : \! w^{_{(n)}}_{^j} \! >\! 0\} \le n+ n^3 + n^8\), we easily get \(\sqrt{{\mathbf {j}}_n}/b_n \! \rightarrow \! 0\). Here observe that \(\kappa = a_n b_n / \sigma _1 (\mathtt {w}_n)\) and \(b_n \big (1\! -\! (\sigma _2 (\mathtt {w}_n)/ \sigma _1 (\mathtt {w}_n)) \big )/ a_n = \alpha \).
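As a quick sanity check (illustration only), the following Python sketch evaluates the block sums of the weights (246) for moderate \(n\), with the hypothetical choices \(\kappa = 1\), \(\beta = 2\) and \(c_j = j^{-0.4}\); the ratios reproduce the asymptotics \(\sigma _1 (\mathtt {q}_n) \sim n^5\), \(\sigma _2 (\mathtt {q}_n) \sim n^2\) and \(\kappa \sigma _3 (\mathtt {q}_n) \rightarrow \beta + \kappa \sigma _3 ({\mathbf {c}})\).

```python
import numpy as np

# Illustration only: block sums of the weights q^{(n)} from (246), computed
# without instantiating the n^8 smallest entries.  The choices kappa = 1,
# beta = 2 and c_j = j^{-0.4} are hypothetical.
kappa, beta = 1.0, 2.0
c = lambda j: j ** (-0.4)

def sigma(p, n):
    # block 1: c_1, ..., c_n ; block 2: n^3 copies of (beta/kappa)^{1/3}/n ;
    # block 3: n^8 copies of n^{-3}
    j = np.arange(1, n + 1)
    return (np.sum(c(j) ** p)
            + n ** 3 * ((beta / kappa) ** (1 / 3) / n) ** p
            + n ** 8 * (n ** (-3.0)) ** p)

for n in (10, 100, 1000):
    print(n,
          sigma(1, n) / n ** 5,     # tends to 1
          sigma(2, n) / n ** 2,     # tends to 1
          kappa * sigma(3, n))      # tends to beta + kappa*sigma_3(c)
```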

We next prove that \((\mathbf {C4})\) holds true by checking that (38) in Proposition 2.3 (i) is satisfied. To that end, we introduce \(f_\lambda (x)= x\big ( e^{-\lambda x}\! -\! 1 + \lambda x\big )\), for all \(x, \lambda \in [0, \infty )\), and we recall the definition of \(X^{\mathtt {w}_n}\) in (215). We denote by \(\psi _n\) the Laplace exponent of \(\frac{1}{{a_n}} X^{\mathtt {w}_n}_{b_n \cdot }\). We first observe that for all \(\lambda \in [0, \infty )\),

$$\begin{aligned} \psi _n (\lambda )= \alpha \lambda + \kappa \! \sum _{j\ge 1} \! f_\lambda ( q^{_{(n)}}_{^j}) \! =\! \alpha \lambda + \kappa \! \!\! \sum _{1\le j\le n} \!\! f_\lambda ( c_j)+ \kappa n^3 f_\lambda ( (\beta /\kappa )^{\frac{1}{3}}n^{-1} ) + \kappa n^8 f_\lambda ( n^{-3} ) . \end{aligned}$$
(247)

Set \(s_0\! =\! (\beta / \kappa )^{1/3}\). Recall from (235) that \(f_{\lambda } (x)\! \ge \! \frac{1}{2}x^3 \lambda ^2 e^{-\lambda x}\). Thus, if \(\lambda \in [1, 2n/s_0]\), then

$$\begin{aligned} \psi _n (\lambda ) +(\alpha )_- \lambda \ge \kappa n^3 f_\lambda (s_0/n) \ge \frac{_1}{^2}e^{- 2} \beta \lambda ^2 =: s_1 \lambda ^2. \end{aligned}$$

As a result, \(\psi _n (\lambda ) \ge s_1 \lambda ^2 (1 -\frac{(\alpha )_-}{s_1\lambda } )\) for \(\lambda \in [1, 2n/s_0]\). Next, observe that \(f_\lambda (x) \! \ge \! x(\lambda x -1)\), since \(e^{-\lambda x} \ge 0\). It follows that for \(\lambda \in [2n/s_0 , n^3]\),

$$\begin{aligned} \psi _n (\lambda )\ge & {} -(\alpha )_- \lambda + \kappa n^3 f_\lambda ( s_0/n) \ge -(\alpha )_- \lambda + \kappa s_0 n^2 \big (\tfrac{s_0 \lambda }{n} \! -\! 1\big ) =\kappa s_0 n^2 \Big ( \big (1-\tfrac{(\alpha )_-}{\kappa s_0^2n} \big ) \tfrac{s_0\lambda }{n} -1 \Big ) . \end{aligned}$$

Thus, for all \(y \! >\! \tfrac{2(\alpha )_-}{s_1} \vee 1\) and for all \(n\ge \tfrac{y s_0}{2} \vee \tfrac{3(\alpha )_-}{\kappa s_0^{2}}\), we get

$$\begin{aligned} \int _y^{n^3} \frac{d\lambda }{\psi _n (\lambda )} \le 2 \int _y^{\frac{2n}{s_0}} \frac{d \lambda }{s _1\lambda ^2} + \int _{\frac{2n}{s_0}}^{n^3} \frac{d \lambda }{\kappa s_0 n^2 \big ( \tfrac{2s_0}{3n} \lambda \! -\! 1\big ) } \le \frac{2 }{s_1 y}+ \frac{3\log (\tfrac{2}{3}s_0n^2 \! -\! 1)+ 3\log 3}{2\kappa s^2_0n} . \end{aligned}$$

Since \(a_n \sim n^3\), letting first \(n\rightarrow \infty \) and then \(y\rightarrow \infty \) shows that the right-hand side vanishes; hence \(\psi _n\) satisfies (38), and \((\mathbf {C4})\) holds true. This proves Proposition 2.3 (iii) in Case 2.
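For completeness, the two elementary bounds on \(f_\lambda \) used above can be checked numerically on a grid (illustration only); both reduce to inequalities for \(u \mapsto e^{-u} - 1 + u\) with \(u = \lambda x\).

```python
import numpy as np

# Grid check (illustrative only) of the two elementary bounds on
# f_lambda(x) = x*(exp(-lambda*x) - 1 + lambda*x) used in Case 2:
#   (i)  f_lambda(x) >= (1/2) * x^3 * lambda^2 * exp(-lambda*x)   (cf. (235))
#   (ii) f_lambda(x) >= x * (lambda*x - 1)
lam = np.linspace(0.01, 50.0, 400)[:, None]
x = np.linspace(0.01, 10.0, 400)[None, :]
f = x * (np.exp(-lam * x) - 1.0 + lam * x)

print(np.all(f >= 0.5 * x ** 3 * lam ** 2 * np.exp(-lam * x) - 1e-12))  # True
print(np.all(f >= x * (lam * x - 1.0) - 1e-12))                         # True
```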

\(\bullet \) Case 3: we now assume that \(\beta = \beta _0 = 0\). Let \(\beta _n \in (0, \infty )\) be a sequence decreasing to 0. For all \(n \in {\mathbb {N}}^*\), we set \(\Psi _n (\lambda ) = \psi (\lambda ) + \frac{_1}{^2} \beta _n \lambda ^2 = \alpha \lambda + \frac{_1}{^2} \beta _n \lambda ^2 + \sum _{j\ge 1} \kappa c_j \big ( e^{-\lambda c_j} \! -\! 1 + \lambda c_j \big )\). We now fix \(n \in {\mathbb {N}}^*\); by Case 2, there exist \(\mathtt {w}_{n, p} = (w^{_{(n,p)}}_{^j})_{j\ge 1} \! \in \! {\ell }^{_{\, \downarrow }}_f\) and \(a_{n, p} , b_{n, p} \! \in \! (0, \infty )\), \(p \! \in \! {\mathbb {N}}\), that satisfy \(\sqrt{{\mathbf {j}}_{n,p}}/ b_{n,p}\! \rightarrow \! 0\) as \(p\! \rightarrow \! \infty \), where \({\mathbf {j}}_{n,p} = \max \{ j \! \ge \! 1 \! :\! w^{_{(n,p)}}_{^j} \! >\! 0 \}\), and

$$\begin{aligned}&\frac{a_{n, p} b_{n, p}}{\sigma _1 (w_{n, p})} = \kappa , \quad a_{n, p} , \frac{b_{n, p}}{a_{n, p}} \; \text {and} \; \frac{a^2_{n, p}}{b_{n, p}} \underset{^{p\rightarrow \infty }}{-\!\! \! -\!\! \!\longrightarrow } \infty , \quad \frac{b_{n, p}}{a_{n,p}} \Big ( 1-\frac{\sigma _2 (\mathtt {w}_{n , p})}{\sigma _1 (\mathtt {w}_{n, p})} \Big )\; =\alpha \,, \end{aligned}$$
(248)
$$\begin{aligned}&\frac{b_{n, p}}{a^2_{n , p}}\! \cdot \! \frac{\sigma _3 (\mathtt {w}_{n, p})}{\sigma _1 (\mathtt {w}_{n, p})} \; \underset{{p\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; \beta _n + \kappa \sigma _3 ({\mathbf {c}}) \quad \text {and, for each }j \in {\mathbb {N}}^*, \quad \frac{w^{_{(n, p)}}_j}{a_{n, p} } \; \underset{{p\rightarrow \infty }}{-\!\!\! -\!\! \! \longrightarrow } \; c_j .\nonumber \\ \end{aligned}$$
(249)

Furthermore, for \(n \in {\mathbb {N}}^*\) and \(t \in [0, \infty )\),

$$\begin{aligned} \lim _{p\rightarrow \infty } {\mathbf {P}}\big ( Z^{_{(n, p)}}_{\lfloor b_{n, p} t /a_{n , p} \rfloor } = 0 \big ) = e^{-v_n(t)} \quad \text {where} \quad \int _{v_n(t)}^\infty \! \frac{d\lambda }{\Psi _n (\lambda )}= t . \end{aligned}$$
(250)

Here, \((Z^{_{(n, p)}}_k)_{k\in {\mathbb {N}}}\) is a Galton–Watson process with offspring distribution \(\mu _{\mathtt {w}_{n, p}}\) given by (85) and where \(Z^{_{(n, p)}}_0 = \lfloor a_{n, p} \rfloor \). Let \(v\! : \! (0, \infty ) \! \rightarrow \! (0, \infty )\) be such that \(t = \int _{v(t)}^\infty d\lambda / \psi (\lambda ) \) for all \(t \in (0, \infty )\). Since \(\Psi _n (\lambda ) \! \ge \! \psi (\lambda )\), we get \(\int _{v(t)}^\infty d\lambda / \psi (\lambda ) = t = \int _{v_n(t)}^\infty d\lambda / \Psi _n (\lambda ) \le \! \int _{v_n(t)}^\infty d\lambda / \psi (\lambda )\); thus \(v_n (t) \le v(t)\). Thus, there exists \(p_n \! \in \! {\mathbb {N}}\) such that for all \(p\! \ge \! p_n\), \( {\mathbf {P}}\big ( Z^{_{(n, p)}}_{\lfloor b_{n, p} /a_{n , p} \rfloor } = 0 \big ) \! \ge \! \frac{1}{2}\exp (-v_n (1)) \! \ge \! \frac{1}{2}\exp (-v (1))\). Without loss of generality, we can assume that \(\sqrt{{\mathbf {j}}_{n,p_n}}/ b_{n,p_n} \le 2^{-n}\), \(a_{n, p_n} , b_{n ,p_n}/ a_{n, p_n}\) and \(a_{n, p_n}^2/ b_{n, p_n} \ge 2^n\), that for all \(1 \le j \le n\), \(| w^{_{(n,p_n)}}_j / a_{n, p_n} \! -\! c_j | \le 2^{-n}\) and

$$\begin{aligned} \Big | \frac{b_{n, p_n}}{a^2_{n , p_n}}\! \cdot \! \frac{\sigma _3 (\mathtt {w}_{n, p_n})}{\sigma _1 (\mathtt {w}_{n, p_n})} - \kappa \sigma _3 ({\mathbf {c}}) \Big | \le 2\beta _n \longrightarrow 0 . \end{aligned}$$

If one sets \(a_n \! =\! a_{n ,p_n }\), \(b_n \! =\! b_{n, p_n}\) and \(\mathtt {w}_{n } = \mathtt {w}_{n , p_n}\), then we have proved that \(a_n , b_n , \mathtt {w}_n\) satisfy (21) with \(\beta = \beta _0 = 0\), \(\sqrt{{\mathbf {j}}_n}/b_n\! \rightarrow \! 0\) and \((\mathbf {C1})\)–\((\mathbf {C4})\), which proves Proposition 2.3 (iii) in Case 3. This completes the proof of Proposition 2.3 (iii). \(\square \)

8 Proof of Lemma 2.10

In this section, we consider the power-law example in [9, 11]. We check that the weight sequence \((\mathtt {w}_{n}(\alpha ))\) and the renormalising sequences \((a_{n}), (b_{n})\) in Lemma 2.10 satisfy the assumptions of Theorem 2.8. Let us start with the following lemma:

Lemma 8.1

Let \(\ell \! : \! (0, 1] \! \rightarrow \! (0, \infty )\) be a measurable slowly varying function at \(0+\) such that for all \(x_0 \in (0,1)\), \(\sup _{x\in [x_0 , 1]} \ell (x) \! < \! \infty \). Then, for all \(\delta \in (0, \infty )\), there exist \(\eta _\delta \in (0, 1]\) and \(c_\delta \in (1, \infty )\) such that, for \(y \in (0, \eta _\delta )\) and \(z \in (y, 1]\), one has

$$\begin{aligned} \frac{1}{c_\delta } \Big ( \frac{z}{y}\Big )^{\! -\delta }\!\!\! \le \, \frac{\ell (z)}{\ell (y)}\, \le c_\delta \Big ( \frac{z}{y}\Big )^{\delta } \; . \end{aligned}$$
(251)

Proof

The measurable version of the representation theorem for slowly varying functions (see for instance Bingham, Goldie & Teugels [13]) implies that there exist two measurable functions \(c\! : \! (0, 1]\! \rightarrow \! {\mathbb {R}}\) and \(\varepsilon \! : \! (0, 1] \! \rightarrow \! [-1, 1]\) such that \(\lim _{x\rightarrow 0+} c(x) = \gamma \in {\mathbb {R}}\), such that \(\lim _{x\rightarrow 0+} \varepsilon (x) = 0\), and such that \(\ell (x) \! =\! \exp ( c(x) + \int _x^1 ds\, \varepsilon (s)/ s )\), for all \(x \in (0, 1]\). Since \(\sup _{x\in [x_0 , 1]} \ell (x) \! < \! \infty \) for all \(x_0 \in (0, 1)\), we can assume without loss of generality that \(c\) is bounded. Fix \(\delta \in (0, \infty )\) and let \(\eta _\delta \in (0, 1]\) be such that \(\sup _{(0, \eta _\delta ]} |\varepsilon | \le \delta \). Fix \(y \in (0, \eta _\delta )\) and \(z \in (y, 1]\); if \(z \le \eta _\delta \), then note that \(\int _y^z ds \, |\varepsilon (s) | /s \le \delta \log (z/y)\); if \(\eta _\delta \le z\), then observe that \(\int _y^z ds \, |\varepsilon (s) | /s \le \delta \log ( \eta _\delta /y)+ \int _{\eta _\delta }^1 ds \, |\varepsilon (s) | /s \le \delta \log (z/y) + \log (1/ \eta _\delta ) \). Thus

$$\begin{aligned} \eta _{\delta } e^{-2\Vert c\Vert _\infty } \Big ( \frac{z}{y}\Big )^{\! -\delta }\!\!\! \le \frac{\ell (z)}{\ell (y)}= \exp \Big ( c(z) \! -\! c(y) \! - \!\! \int _y^z \! \! \! ds\, \frac{\varepsilon (s)}{s} \Big ) \le \eta _{\delta }^{-1} e^{2\Vert c\Vert _\infty } \Big ( \frac{z}{y}\Big )^{\delta } \; , \end{aligned}$$

which implies the desired result. \(\square \)
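To illustrate Lemma 8.1 (this is not used in the proofs), the following Python sketch checks the bound (251) on a grid for the hypothetical slowly varying function \(\ell (x) = 1 + \log (1/x)\) and \(\delta = 0.1\): both extreme ratios remain bounded, so a single constant \(c_\delta \) works on the whole grid.

```python
import numpy as np

# Illustration of the Potter-type bound (251) for the hypothetical slowly
# varying function ell(x) = 1 + log(1/x) on (0, 1], with delta = 0.1: we check
# on a grid that ell(z)/ell(y) stays between (1/c)*(z/y)^{-delta} and
# c*(z/y)^{delta} for a single constant c.
ell = lambda x: 1.0 + np.log(1.0 / x)
delta = 0.1

worst_upper, worst_lower = 0.0, np.inf
for y in np.logspace(-8, -2, 120):                     # y ranges over small values
    z = np.logspace(np.log10(y) + 1e-9, 0, 120)        # z ranges over (y, 1]
    r = ell(z) / ell(y)
    worst_upper = max(worst_upper, np.max(r * (z / y) ** (-delta)))
    worst_lower = min(worst_lower, np.min(r * (z / y) ** (delta)))

c_delta = max(worst_upper, 1.0 / worst_lower)
print(worst_upper, worst_lower, c_delta)   # both extremes finite and positive
```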

Let us recall that \(W\! :\! \Omega \! \rightarrow [0, \infty )\) is a r.v. satisfying \(r\! : = \! {\mathbf {E}}[W] = {\mathbf {E}}[ W^2] \! <\! \infty \) and that \({\mathbf {P}}(W \ge x) \! =\! x^{-\rho } L(x)\), where L is a slowly varying function at \(\infty \) and \(\rho \in (2, 3)\). Recall also the notation \(G(y) = \sup \{ x \in [0, \infty ) : {\mathbf {P}}(W\! \ge \! x) \! \ge \! 1\! \wedge \! y\}\), \(y \in [0, \infty )\). Note that \(G\) is nonincreasing and vanishes on \([1, \infty )\). Then, \(G(y) \! =\! y^{-1/\rho } \, \ell (y)\), where \(\ell \) is slowly varying at 0. Recall also the parameters \(\kappa , q \in (0, \infty )\) as well as the assumption that \( a_n\! \sim \! q^{-1} G(1/n)\), \(w_{^j}^{_{(n)}}\! \! =\! G(j/n)\), \(j\! \ge \! 1\), and \( b_n \! \sim \! \kappa \sigma _1 (\mathtt {w}_n) /a_n\).

Fix \(a\!\in \! [1, 2]\) and observe that \(\sigma _a (\mathtt {w}_n) = \sum _{1\le j< n}\int _0^{G(1/n)} \! dz \, az^{a-1}\mathbf{1}_{\{z < G(j/n) \}}\), since \(G(j/n) = 0\) for all \(j \ge n\). Observe also that \(z\! < \! G(y)\) implies \(y \le {{\mathbf {P}}}(W\! \ge \! z)\). Thus,

$$\begin{aligned} \sigma _a (\mathtt {w}_n)= & {} \sum _{1\le j< n}\! \int _0^{G(1/n)} \! \!\!\! \!\! dz \, az^{a-1} \mathbf{1}_{\{j \le n {\mathbf {P}}(W\ge z) \}}= \int _0^{G(1/n)} dz \, az^{a-1}\!\!\! \sum _{1\le j< n}\mathbf{1}_{\{j \le n{\mathbf {P}}(W\ge z) \}} \nonumber \\= & {} \int _0^{G(1/n)} dz \, az^{a-1} \lfloor n {\mathbf {P}}(W\! \ge \! z) \rfloor = \int _0^{G(1/n)} dz \, az^{a-1} n{\mathbf {P}}(W\! \ge \! z) \nonumber \\&- \int _0^{G(1/n)} \!\!\! \!\! dz \, az^{a-1} \{ n {\mathbf {P}}(W\! \ge \! z) \} \nonumber \\= & {} n \int _0^\infty \!\! dz \, az^{a-1} {\mathbf {P}}(W\! \ge \! z) \nonumber \\&- \int _{G(1/n)}^\infty dz \, az^{a-1} n {\mathbf {P}}(W\! \ge \! z) - \int _0^{G(1/n)} dz \, az^{a-1} \{ n {\mathbf {P}}(W\! \ge \! z) \} . \end{aligned}$$
(252)

Note that \(\int _0^\infty dz \, az^{a-1} {\mathbf {P}}(W\! \ge \! z) = {\mathbf {E}}[W^a] \! < \! \infty \). Note that \({\mathbf {P}}(W = G(1/n)) = 0\) by assumption (68), which easily implies that \({\mathbf {P}}(W\! \ge \! G(1/n) ) = 1/n\). Thus,

$$\begin{aligned} n{\mathbf {P}}(W\! \ge z)\! =\! {\mathbf {P}}(W\ge z)/ {\mathbf {P}}(W\ge G(1/n))= (z/G(1/n))^{-\rho } L(z)/L(G(1/n)) \end{aligned}$$

and by (252) and the change of variable \(z\! \mapsto \! z/G(1/n)\), we get

$$\begin{aligned} \sigma _a (\mathtt {w}_n)= & {} n {\mathbf {E}}[W^a] -G \big (\frac{_1}{^n}\big )^{\! a} \!\! \int _1^\infty \!\! \!\! dz \, az^{a-1 -\rho } \frac{L(zG(\frac{_1}{^n}))}{L(G(\frac{_1}{^n}))} -G\big (\frac{_1}{^n}\big )^{\! a} \!\! \int _0^{1} \!\! \!\! dz \, az^{a-1} \Big \{ z^{-\rho } \frac{L(zG(\frac{_1}{^n}))}{L(G(\frac{_1}{^n}))} \Big \} . \end{aligned}$$

The measurable version of the representation theorem for slowly varying functions (see for instance [13]) implies that there exist two measurable functions \(c\! : \! (0, \infty )\! \rightarrow \! {\mathbb {R}}\) and \(\varepsilon \! : \! (0, \infty ) \! \rightarrow \! [-1, 1]\) such that \(\lim _{x\rightarrow \infty } c(x) = \gamma \in {\mathbb {R}}\), such that \(\lim _{x\rightarrow \infty } \varepsilon (x) = 0\), and such that \(L (x) \! =\! \exp ( c(x) + \int _1^x ds\, \varepsilon (s)/ s )\), for all \(x \in (0, \infty )\). We then set \(u = (\rho \! -\! a)/2\), which is strictly positive since \(a \le 2 \! < \! \rho \). Let \(n_0\) be such that for all \(n\! \ge \! n_0\), \(\sup _{s \in [1, \infty )} |\varepsilon (s G(1/n))| \le u\). Thus, for all \(z \in [1, \infty )\),

$$\begin{aligned} 0 \le \! z^{a-1 -\rho }\frac{L(zG(\frac{_1}{^n}))}{L(G(\frac{_1}{^n}))}= & {} z^{a-1 -\rho } \exp \Big ( c\big (zG \big (\frac{_1}{^n}\big )\big )\! -\! c\big (G \big (\frac{_1}{^n}\big )\big ) + \int _1^z\!\! ds \frac{\varepsilon \big ( s G \big (\frac{1}{n}\big )\big )}{s} \Big ) \le e^{2 \Vert c \Vert _\infty } z^{-1-u}. \end{aligned}$$

Since for all \(z \in [1, \infty )\), \(L(zG(1/n))/L(G(1/n))\! \rightarrow \! 1\), dominated convergence entails that

$$\begin{aligned} \lim _{n\rightarrow \infty \;\; } \int _1^\infty dz \, az^{a-1 -\rho } \frac{L(zG(\frac{_1}{^n}))}{L(G(\frac{_1}{^n}))}= & {} \frac{a}{\rho \! -\! a} \; \, \text {and} \, \lim _{n\rightarrow \infty \; \; } \int _0^{1} \!\! \!\! dz \, az^{a-1} \Big \{ z^{-\rho } \frac{L(zG(\frac{_1}{^n}))}{L(G(\frac{_1}{^n}))} \Big \}\\= & {} \int _0^{1} \!\! \!\! dz \, az^{a-1} \{ z^{-\rho }\} . \end{aligned}$$

We then set \(Q_a = a/ (\rho \! - \! a)+ \int _0^{1} dz \, az^{a-1} \{ z^{-\rho }\}\) and since \(a_n \sim q^{-1} G(1/n)\), we have proved that

$$\begin{aligned} \sigma _a (\mathtt {w}_n) = n {\mathbf {E}}[W^a] -q^a Q_a (a_n)^a + o((a_n)^a) . \end{aligned}$$
(253)
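As a numerical sanity check of (253) (illustration only), take the toy case where \(W\) is pure Pareto, \({\mathbf {P}}(W \ge x) = x^{-\rho }\) for \(x \ge 1\) with \(\rho = 2.5\), so that \(L \equiv 1\), \(\ell \equiv 1\) and \(G(y) = y^{-1/\rho }\) for \(y \in (0,1)\); the criticality \(r = {\mathbf {E}}[W] = {\mathbf {E}}[W^2]\) of the standing assumptions is not imposed here, and only the expansion of \(\sigma _a (\mathtt {w}_n)\) is tested. The normalised difference printed below stabilises near the constant \(-Q_a\).

```python
import numpy as np

# Illustration only: pure Pareto toy weight, P(W >= x) = x^{-rho} for x >= 1,
# so L = 1, ell = 1 and G(y) = y^{-1/rho}.  We test the expansion (253):
# (sigma_a(w_n) - n*E[W^a]) / G(1/n)^a should stabilise near -Q_a.
rho, a = 2.5, 1.0
EWa = rho / (rho - a)                       # E[W^a] for the pure Pareto

for n in (10 ** 3, 10 ** 5, 10 ** 6):
    j = np.arange(1, n)                     # w_j^{(n)} = G(j/n), zero for j >= n
    sigma_a = np.sum((j / n) ** (-a / rho))
    Gn = n ** (1.0 / rho)                   # G(1/n), i.e. roughly q * a_n
    print(n, (sigma_a - n * EWa) / Gn ** a) # stabilises near the constant -Q_a
```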

Recall that as the graph is critical, we have \(r = {\mathbf {E}}[W] = {\mathbf {E}}[W^2]\). We then take (253) with \(a = 1\) to get \(\sigma _1 (\mathtt {w}_n) \! -\! r n\! \sim \! - Q_1n^{1/\rho } \ell (1/n)\) since \(a_n \sim q^{-1}n^{1/\rho } \ell (1/n)\); thus \(b_n \sim \kappa q r n^{1-1/\rho } / \ell (1/n)\). It implies that \(a_n\) and \(b_n / a_n\) go to \(\infty \) and that \(b_n / a_n^2 \! \rightarrow \! 0\). Moreover for all \(j\! \ge 1\), \(w^{_{(n)}}_{^j} /a_n\! \rightarrow \! q j^{-1/\rho }\). This implies that \(a_n\), \(b_n\) and \(\mathtt {w}_n\) satisfy (21) with \(\beta _0 = 0\) (and (\(\mathbf {C3}\))). Since \(a_n b_n \sim \kappa \sigma _1 (\mathtt {w}_n)\sim \kappa r n \), (253) with \(a = 1\) and 2 implies that

$$\begin{aligned} \frac{\sigma _2 (\mathtt {w}_n)}{\sigma _1 (\mathtt {w}_n) }\! =\! \frac{nr\! -\! q^2Q_2a_n^2 + o(a_n^2)}{nr\! -\! qQ_1a_n + o(a_n)} \! =\! 1\! -\! \kappa q^2 Q_2 \frac{a_n}{b_n} + o\Big (\frac{a_n}{b_n} \Big ) \! =\! 1\! -\! \alpha _0 \frac{a_n}{b_n} + o\Big (\frac{a_n}{b_n} \Big )\; , \end{aligned}$$
(254)

where \(\alpha _0 = \kappa q^2 Q_2 \) as defined in (70).

Next, for all \(\alpha \in {\mathbb {R}}\), set \(w^{_{(n)}}_{^j}(\alpha ) = (1 \! -\! \frac{{a_n}}{{b_n}} (\alpha - \alpha _0))w^{_{(n)}}_{^j}\). By (254), we get \(\sigma _2 (\mathtt {w}_n (\alpha ))/ \sigma _1 (\mathtt {w}_n (\alpha ))\! =\! 1-\alpha a_n / b_n+ o( a_n / b_n)\). Namely, \(\mathtt {w}_n (\alpha )\) satisfies (\(\mathbf {C1}\)). Since \(w^{_{(n)}}_{^j}(\alpha )\sim w^{_{(n)}}_{^j}\) as \(n\rightarrow \infty \), \(\mathtt {w}_n (\alpha )\) also satisfies (\(\mathbf {C3}\)) with \(c_j = qj^{-1/\rho }\), \(j\! \ge \! 1\).

Let us prove that \((\mathtt {w}_n (\alpha ))\) satisfies (\(\mathbf {C2}\)). First observe that \(\sigma _3 (\mathtt {w}_n (\alpha ))\! \sim \! \sigma _3 (\mathtt {w}_n)\). So we only need to prove that the \(\mathtt {w}_n\) satisfy (\(\mathbf {C2}\)). To that end, for all n and \(j\! \ge \! 1\), we set \(f_n (j) = (G(j/n)/ G(1/n))^3\! =\! j^{-3/\rho } \ell ^3 (j/n)/\ell ^3 (1/n)\) and \(\delta \! =\! \frac{_1}{^2} (\frac{_3}{^\rho }\! -\! 1)\), which is strictly positive since \(\rho < 3\). We apply Lemma 8.1 to \(\ell ^3\): let \(c_\delta \in (1, \infty )\) and \(\eta _\delta \in (0, 1]\) be such that (251) holds true; then, for all \(n\! >\! 1/ \eta _\delta \) and all \(1 \le j \le n\), \(0 \le f_n (j) \le c_\delta j^{-1-\delta }\). Since for all \(j\! \ge \! 1\), \(\lim _{n\rightarrow \infty } f_n(j) = j^{-3/\rho }\), by dominated convergence we get

$$\begin{aligned} G(1/n)^{-3} \sigma _3 (\mathtt {w}_n) = \sum _{1\le j\le n} f_n (j) \underset{n\rightarrow \infty }{-\!\!\! -\!\!\! -\!\!\! \longrightarrow } \sum _{j\ge 1} j^{-3/\rho } = q^{-3} \sigma _3 ({\mathbf {c}}), \end{aligned}$$

which easily implies (\(\mathbf {C2}\)).

Let us prove that \(\mathtt {w}_n (\alpha )\) satisfies (\(\mathbf {C4}\)) thanks to (38) in Proposition 2.3. To that end, we fix \(n \in {\mathbb {N}}^*\) and \(\lambda \in [1, a_n]\). For all \(x \in [0, \infty )\), recall that \(f_\lambda (x) = x (e^{-\lambda x} \! - \! 1 + \lambda x)\) and for all \(j\! \ge \! 1\), set

$$\begin{aligned}\phi _n (j) = f_\lambda \Big (\frac{_{w^{_{(n)}}_{^j}(\alpha )}}{^{a_n}} \Big ) = f_\lambda \Big (q_n j^{-1/\rho } \frac{_{\ell (j/n)}}{^{\ell (1/n)}} \Big ) \; \text {where} \quad q_n = \Big (1\! -\! \frac{_{a_n}}{^{b_n}}(\alpha \! -\! \alpha _0) \Big ) \frac{_{G(1/n)}}{^{a_n}} \sim q \; .\end{aligned}$$

To simplify, we also set \(\kappa _n\! =\! a_n b_n / \sigma _1 (\mathtt {w}_n (\alpha ))\); note that \(\kappa _n \! \sim \! \kappa \). Let \(\delta \in (0, \infty )\) be a constant to be specified later; by Lemma 8.1 and the previous arguments, there exist \(c_{\delta } \in (0, \infty )\) and \(n_\delta \in {\mathbb {N}}\) such that for all \(n \! \ge \! n_\delta \) and all \(1 \le j < n\), \(w^{_{(n)}}_{^j} (\alpha ) / a_n\! \ge \! c_\delta j^{ -\delta -1/\rho }\) and \(\kappa _n \! \ge \! \frac{_1}{^2} \kappa \), which entails \(\kappa _n \phi _n (j) \! \ge \! \frac{_1}{^2} \kappa f_\lambda (c_\delta j^{ -\delta -1/\rho })\). We next set

$$\begin{aligned}\alpha _n \! := \! \frac{{b_n}}{{a_n}} \Big (1 \! -\! \frac{\sigma _2 (\mathtt {w}_n (\alpha ))}{\sigma _1 (\mathtt {w}_n (\alpha ))} \Big ) \sim \alpha \,. \end{aligned}$$

Recall that \(\psi _n\), defined in (37), is the Laplace exponent of \((\frac{1}{a_n} X^{\mathtt {w}_n(\alpha )}_{b_n t })_{t\in [0, \infty )}\). The previous inequalities then imply that

$$\begin{aligned} \psi _n (\lambda )\! -\! \alpha _n \lambda = \! \sum _{1\le j<n } \!\!\! \kappa _n \phi _n (j) \ge \frac{_1}{^2} \kappa \!\! \! \sum _{1\le j< n } \! f_\lambda (c_\delta j^{ -\delta -\frac{1}{\rho }})\ge \frac{_1}{^2} \kappa \!\! \int _1^n \!\! \!dx \, f_\lambda (c_\delta x^{ -\delta -\frac{1}{\rho }}) \; ,\end{aligned}$$

where we have used the fact that \(x\mapsto f_{\lambda }(x)\) is increasing. We set \(a\! =\! \rho / (1+ \rho \delta )\), namely \(1/ a = \delta + 1/\rho \) and we use the change of variables \(y = \lambda x^{ -1/a}\) in the last term of the inequality to get

$$\begin{aligned} \forall n \! \ge \! n_\delta , \; \forall \lambda \in [1, a_n], \quad \psi _n (\lambda ) \! -\! \alpha _n \lambda\ge & {} \frac{_1}{^2} \kappa a \lambda ^{a-1} \!\! \int _{\lambda n^{-1/a}}^{\lambda } \!\! \! \!\! \! \!\! \! \!\! dy \, y^{-a-1} f_1 (c_\delta y )\\\ge & {} \frac{_1}{^2} \kappa a \lambda ^{a-1} \!\! \int _{a_n n^{-1/a}}^{1} \!\! \! \!\! \! \!\! \! \!\! \! \!\! dy \, y^{-a-1} f_1 (c_\delta y ) . \end{aligned}$$

Now observe that \(a_n n^{-1/a} \sim q^{-1}n^{-\delta } \ell (1/n)\! \rightarrow \! 0\). Thus, without loss of generality, we can assume that for all \(n\! \ge \! n_\delta \), \(a_n n^{-1/a} \le 1/2\). Then, we set \(K_\delta = \frac{_1}{^2} \kappa a \int _{1/2}^{1} \!dy \, y^{-a-1} f_1 (c_\delta y ) \! >\! 0\) and we have proved that for all \(n\! \ge \! n_\delta \) and all \(\lambda \in [1, a_n]\), \(\psi _n (\lambda ) \! -\! \alpha _n \lambda \! \ge \! K_\delta \lambda ^{a-1}\). Since \(\rho >2\), choosing \(\delta \in \big (0, \tfrac{\rho -2}{2\rho }\big )\) ensures that \(a\! -\! 1 = \rho / (1+ \rho \delta ) \! -\! 1 \! >\! 1\). Then, we get (38) in Proposition 2.3 (i), which implies (\(\mathbf {C4}\)). This completes the proof of Lemma 2.10. \(\square \)
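Finally, as an illustration only, the lower bound \(\psi _n (\lambda ) - \alpha _n \lambda \ge K_\delta \lambda ^{a-1}\) can be observed numerically in the same toy Pareto setting as above (\(\rho = 2.5\), \(\ell \equiv 1\), \(q = \kappa = 1\), and \(\alpha = \alpha _0\) so that the tilt factor equals 1, whence \(\kappa _n = 1\) and \(w^{_{(n)}}_{^j}(\alpha )/a_n = j^{-1/\rho }\)); with the exponent \(a - 1 = 1.4 < \rho - 1\), the ratio printed below stays bounded away from 0 on \([1, a_n]\).

```python
import numpy as np

# Illustration only: toy Pareto setting with rho = 2.5, ell = 1, q = kappa = 1
# and alpha = alpha_0 (no tilt), taking a_n = n^{1/rho} and b_n = sigma_1/a_n
# exactly, so that kappa_n = 1 and w_j^{(n)}(alpha)/a_n = j^{-1/rho}.
rho, n = 2.5, 10 ** 6
j = np.arange(1, n)
x = j ** (-1.0 / rho)                 # w_j^{(n)}(alpha) / a_n
a_n = n ** (1.0 / rho)

def f(lam, x):
    # f_lambda(x) = x*(exp(-lambda*x) - 1 + lambda*x)
    return x * (np.exp(-lam * x) - 1.0 + lam * x)

for lam in (1.0, 10.0, 100.0, a_n):
    lhs = np.sum(f(lam, x))           # psi_n(lambda) - alpha_n*lambda here
    print(lam, lhs / lam ** 1.4)      # stays bounded away from 0 on [1, a_n]
```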