1 Introduction

Random graphs can be used to model many different types of networked structures, such as communication networks, social networks and biological networks. Many of these real-world networks display similar characteristics. A well-known characteristic is that the degree distribution follows a power law. Another such property is that they are highly clustered. Several statistics to measure clustering exist. The global clustering coefficient measures the fraction of connected triples of vertices that close into triangles. A second measure is the local clustering coefficient, which measures, for one specific node, the fraction of pairs of its neighbors that are connected.

In many real-world networks, the local clustering coefficient c(k) of vertices of degree k decays as k becomes large. In particular, the decay was found to behave as an inverse power of k for k large enough, so that \(c(k)\sim k^{-\gamma }\) for some \(\gamma >0\) [5, 15, 19, 20, 22], where most real-world networks were found to have \(\gamma \) close to one. Figure 1 shows the local clustering coefficient for a technological network (the Google web graph [16]), an information network (hyperlinks of the online encyclopedia Baidu [18]) and a social network (friends in the Gowalla social network [16]). For small values of k, c(k) decays slowly. When k becomes larger, the local clustering coefficient indeed seems to decay as an inverse power of k. Similar behavior has been observed in many other real-world networks [21]. The decay of the local clustering coefficient c(k) in k is considered an important empirical observation, because it may signal the presence of hierarchical network structures [19], where high-degree vertices barely participate in triangles, but connect communities consisting of small-degree vertices with high clustering coefficients.

Fig. 1 Local clustering coefficient c(k) for three real-world networks. a Google web graph [16]. b Baidu online encyclopedia [18]. c Gowalla social network [16]

In this paper we analyze c(k) for networks with a power-law degree distribution with degree exponent \(\tau \in (2,3)\), the situation that describes the majority of real-world networks [1, 8, 13, 21]. To analyze c(k), we consider the configuration model in the large-network limit, and count the number of triangles where at least one of the vertices has degree k. When the degree exponent satisfies \(\tau >3\), the total number of triangles in the configuration model converges to a Poisson random variable [9, Chapter 7]. When \(\tau \in (2,3)\), the configuration model contains many self-loops and multiple edges [9]. This creates multiple ways of counting the number of triangles, as we will show below. In this paper, we count the number of triangles from a vertex perspective, which is the same as counting the number of triangles in the erased configuration model, where all self-loops have been removed and multiple edges have been merged.

We show that the local clustering coefficient remains of order \(n^{2-\tau }\log (n)\) as long as \(k=o(\sqrt{n})\). After that, c(k) starts to decay as \(c(k)\sim k^{-\gamma }n^{5-2\tau }\). We show that this exponent \(\gamma \) depends on \(\tau \) and can be larger than one. In particular, when the power-law degree exponent \(\tau \) is close to two, the exponent \(\gamma \) approaches two, a considerable difference from the preferential attachment model with triangles and several fractal-like random graph models that predict \(c(k)\sim k^{-1}\) [7, 14, 19]. Related to this result on the c(k) fall-off, we also show that for every node of fixed degree k, only pairs of nodes with specific degrees contribute to the triangle count and hence to the local clustering.

The paper is structured as follows. Section 2 contains a detailed description of the configuration model and the triangle count. We present our main results in Sect. 3, including Theorem 1 that describes the three ranges of c(k). The remaining sections prove all the main results, and in particular focus on establishing Propositions 1 and 2 that are crucial for the proof of Theorem 1.

2 Basic Notions

Notation We use \(\overset{d}{\longrightarrow }\) for convergence in distribution, and \({\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\) for convergence in probability. We say that a sequence of events \((\mathcal {E}_n)_{n\ge 1}\) happens with high probability (w.h.p.) if \(\lim _{n\rightarrow \infty }\mathbb {P}\left( \mathcal {E}_n\right) =1\). Furthermore, we write \(f(n)=o(g(n))\) if \(\lim _{n\rightarrow \infty }f(n)/g(n)=0\), and \(f(n)=O(g(n))\) if |f(n)| / g(n) is uniformly bounded, where \((g(n))_{n\ge 1}\) is nonnegative. Similarly, if \(\limsup _{n\rightarrow \infty }\left| f(n)\right| /g(n)>0\), we say that \(f(n)=\varOmega (g(n))\) for nonnegative \((g(n))_{n\ge 1}\). We write \(f(n)=\varTheta (g(n))\) if \(f(n)=O(g(n) )\) as well as \(f(n)=\varOmega (g(n))\). We say that \(X_n=O_{\scriptscriptstyle {\mathbb {P}}}(g(n))\) for a sequence of random variables \((X_n)_{n\ge 1}\) if \(|X_n|/g(n)\) is a tight sequence of random variables, and \(X_n=o_{\scriptscriptstyle {\mathbb {P}}}(g(n))\) if \(X_n/g(n){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0\).

The Configuration Model Given a positive integer n and a degree sequence, i.e., a sequence of n positive integers \(\varvec{d}=(d_1,d_2,\ldots , d_n)\), the configuration model is a (multi)graph where vertex i has degree \(d_i\). It is defined as follows, see e.g., [3] or [9, Chapter 7]: Given a degree sequence \(\varvec{d}\) with \(\sum _{i\in [n]} d_i\) even, we start with \(d_j\) free half-edges adjacent to vertex j, for \(j=1, \ldots , n\). The random multigraph \(\mathrm{CM}_n(\varvec{d})\) is constructed by successively pairing, uniformly at random, free half-edges into edges, until no free half-edges remain. (In other words, we create a uniformly random matching of the half-edges.) The wonderful property of the configuration model is that, conditionally on obtaining a simple graph, the resulting graph is a uniform graph with the prescribed degrees. This is why \(\mathrm{CM}_n(\varvec{d})\) is often used as a null model for real-world networks with given degrees.
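The pairing construction can be made concrete in a few lines of code. The following is a minimal sketch (ours, not from the paper; the function name and the example degree sequence are illustrative): it attaches \(d_j\) half-edges to vertex j and pairs them via a uniform random matching, so self-loops and multiple edges can occur.

```python
# Minimal sketch of the pairing construction of CM_n(d); self-loops and
# multiple edges are kept, exactly as in the multigraph described above.
import random

def configuration_model(degrees, seed=0):
    """Return the edge list of CM_n(d); the total degree must be even."""
    assert sum(degrees) % 2 == 0, "the total degree must be even"
    # one entry per half-edge, labeled by the vertex it is attached to
    half_edges = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.Random(seed).shuffle(half_edges)  # a uniform random matching:
    it = iter(half_edges)                    # shuffle, then pair consecutive
    return list(zip(it, it))                 # half-edges into edges

print(configuration_model([3, 2, 2, 1]))     # 4 edges on vertices 0..3
```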

In this paper, we study the setting where the degree distribution has infinite variance. Then the number of self-loops and multiple edges tends to infinity in probability (see e.g., [9, Chapter 7]), so that the configuration model results in a multigraph with high probability. In particular, we take the degrees \(\varvec{d}\) to be an i.i.d. sample of a random variable D such that

$$\begin{aligned} \mathbb {P}(D=k)=C k^{-\tau }(1+o(1)), \end{aligned}$$
(2.1)

as \(k\rightarrow \infty \), where \(\tau \in (2,3)\) so that \(\mathbb {E}[D^2]=\infty \). When the sum of the sampled degrees is odd, we add an extra half-edge to the last vertex to obtain a degree sequence with even total degree. This does not affect our computations. In this setting, \(d_{\max }=O_{\scriptscriptstyle \mathbb {P}}\left( n^{1/(\tau -1)}\right) \), where \(d_{\max }=\max _{v\in [n]} d_v\) denotes the maximal degree of the degree sequence.
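For simulations, degrees satisfying (2.1) can be generated by inverse transform sampling. The sketch below (ours, with arbitrary choices of n and \(\tau \)) uses \(D=\lceil U^{-1/(\tau -1)}\rceil \) for uniform U, which satisfies \(\mathbb {P}(D\ge k)=(k-1)^{-(\tau -1)}\) and hence (2.1), and applies the parity fix described above.

```python
# Sketch: i.i.d. power-law degrees with exponent tau in (2,3), plus the
# extra half-edge on the last vertex when the total degree is odd.
import numpy as np

def sample_degrees(n, tau, rng):
    u = rng.uniform(size=n)
    d = np.ceil(u ** (-1.0 / (tau - 1.0))).astype(np.int64)
    if d.sum() % 2 == 1:  # make the total degree even
        d[-1] += 1
    return d

rng = np.random.default_rng(0)
print(sample_degrees(10, 2.5, rng))
```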

Counting triangles Let \(G=(V,E)\) denote a configuration model with vertex set \(V=[n]:=\{1,\ldots , n\}\) and edge set E. We are interested in the number of triangles in G. There are two ways to count triangles in the configuration model. The first approach is from an edge perspective, as illustrated in Fig. 2. This approach counts the number of triples of edges that together create a triangle, and may therefore count multiple triangles between one fixed triple of vertices. Let \(X_{ij}\) denote the number of edges between vertices i and j. Then, from an edge perspective, the number of triangles in the configuration model is

$$\begin{aligned} \sum _{1\le i<j < k\le n }X_{ij}X_{jk}X_{ik}. \end{aligned}$$
(2.2)

A different approach is to count the number of triangles from a vertex perspective. This approach counts the number of triples of vertices that are connected. Counting the number of triangles in this way results in

$$\begin{aligned} \sum _{1\le i<j < k\le n }\mathbbm {1}_{\left\{ X_{ij}\ge 1\right\} }\mathbbm {1}_{\left\{ X_{jk}\ge 1\right\} }\mathbbm {1}_{\left\{ X_{ik}\ge 1\right\} }. \end{aligned}$$
(2.3)

When the configuration model results in a simple graph, these two approaches give the same result. When the configuration model results in a multigraph, they may give very different numbers of triangles. In particular, when the degree distribution follows a power law with \(\tau \in (2,3)\), the number of triangles is dominated by the number of triangles between the vertices of the highest degrees, even though only a few such vertices are present in the graph [17]. When the exponent \(\tau \) of the degree distribution approaches 2, the number of triangles between the vertices of the highest degrees approaches \(\varTheta (n^3)\), which is much higher than the number of triangles we would expect in any real-world network of that size. When we count triangles from a vertex perspective, we count only one triangle between three such vertices. Thus, the number of triangles from the vertex perspective will be significantly lower. In this paper, we focus on the vertex-based approach for counting triangles. Note that this approach is the same as counting triangles in the erased configuration model, where all multiple edges have been merged, and the self-loops have been removed.

Let \(\triangle _k\) denote the number of triangles attached to vertices of degree k in the erased configuration model. Note that when a triangle contains two vertices of degree k, it is counted twice in \(\triangle _k\). Let \(N_k\) denote the number of vertices of degree k. Then, the clustering coefficient of vertices with degree k equals

$$\begin{aligned} c(k)=\frac{1}{N_k}\frac{2\triangle _k}{k(k-1)}. \end{aligned}$$
(2.4)

When we count \(\triangle _k\) from the vertex perspective, this clustering coefficient can be interpreted as the probability that two random connections of a vertex with degree k are connected. This version of c(k) is the local clustering coefficient of the erased configuration model.

Fig. 2 From the edge perspective in the configuration model, these are two triangles. From the vertex perspective, there is only one triangle. a CM. b ECM
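To make the two counting conventions and definition (2.4) concrete, the following self-contained sketch (ours; the toy multigraph is arbitrary) evaluates the edge-based count (2.2), the vertex-based count (2.3), and c(k) of the erased graph on a small multigraph with one double edge.

```python
# Sketch: edge-based count (2.2) vs vertex-based count (2.3) on a toy
# multigraph, and c(k) of the erased graph via (2.4).
from itertools import combinations
from collections import Counter, defaultdict

edges = [(0, 1), (0, 1), (1, 2), (0, 2), (2, 3)]  # double edge between 0 and 1
X = Counter(tuple(sorted(e)) for e in edges)       # X[i, j] = #edges between i, j
n = 4

edge_count = sum(X[i, j] * X[j, k] * X[i, k]
                 for i, j, k in combinations(range(n), 3))    # (2.2): gives 2
vertex_count = sum((X[i, j] >= 1) * (X[j, k] >= 1) * (X[i, k] >= 1)
                   for i, j, k in combinations(range(n), 3))  # (2.3): gives 1

adj = defaultdict(set)                  # erased graph: merge multiple edges,
for i, j in X:                          # drop self-loops
    if i != j:
        adj[i].add(j); adj[j].add(i)
ck = defaultdict(list)
for v in range(n):
    k = len(adj[v])
    if k >= 2:                          # triangles at v = edges among neighbors
        tri = sum(u in adj[w] for u, w in combinations(adj[v], 2))
        ck[k].append(2 * tri / (k * (k - 1)))
print(edge_count, vertex_count, {k: sum(c) / len(c) for k, c in ck.items()})
```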

3 Main Results

The next theorem presents our main result on the behavior of the local clustering coefficient in the erased configuration model.

Theorem 1

Let G be an erased configuration model, where the degrees are an i.i.d. sample from a power-law distribution with exponent \(\tau \in (2,3)\) as in (2.1). Define \(A=-\Gamma (2-\tau )>0\) for \(\tau \in (2,3)\), let \(\mu =\mathbb {E}\left[ D\right] \) and C be the constant in (2.1). Then, as \(n\rightarrow \infty \),

(Range I):

for \(1<k=o( n^{(\tau -2)/(\tau -1)})\),

$$\begin{aligned} \frac{c(k)}{n^{2-\tau }\log (n)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\frac{3-\tau }{\tau -1} \mu ^{-\tau } C^2A, \end{aligned}$$
(3.1)
(Range II):

for \(k=\varOmega (n^{(\tau -2)/(\tau -1)})\) and \(k=o( \sqrt{n})\),

$$\begin{aligned} \frac{c(k)}{n^{2-\tau }\log (n/k^2)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\mu ^{-\tau } C^2 A , \end{aligned}$$
(3.2)
(Range III):

for \(k=\varOmega (\sqrt{n})\) and \(k\le d_{\max }\),

$$\begin{aligned} \frac{c(k)}{n^{5-2\tau }k^{2\tau -6}}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\mu ^{3-2\tau }C^2 A^2. \end{aligned}$$
(3.3)

Theorem 1 shows three different ranges for k where c(k) behaves differently, and is illustrated in Fig. 3. Let us explain why these three ranges occur. For instance, for \(\tau =2.5\), the boundaries between the ranges lie at \(k=\varTheta (n^{1/3})\) and \(k=\varTheta (\sqrt{n})\). Range I contains the small-degree vertices with \(k=o( n^{(\tau -2)/(\tau -1)})\). In Sect. 4.2 we show that these vertices are hardly involved in self-loops and multiple edges in the configuration model, and hence there is little difference between counting from an edge perspective or from a vertex perspective. It turns out that these vertices barely make triadic closures with hubs, which renders c(k) independent of k in Theorem 1. Range II contains degrees that are neither small nor large, with \(k=\varOmega ( n^{(\tau -2)/(\tau -1)})\) and \( k=o( \sqrt{n})\). We can approximate the connection probability between vertices i and j by \(1-e ^{-D_iD_j/\mu n}\), where \(\mu =\mathbb {E}[D]\). Therefore, a vertex of degree k connects to vertices of degree at least n/k with positive probability. The vertices in Range II quite likely have multiple connections with vertices of degrees at least n/k. Thus, in this degree range, the single-edge constraint of the erased configuration model starts to play a role and causes the slow logarithmic decay of c(k) in Theorem 1. Range III contains the large-degree vertices with \(k=\varOmega (\sqrt{n})\). Again we approximate the probability that vertices i and j are connected by \(1-e ^{-D_iD_j/\mu n}\). This shows that vertices in Range III are likely to be connected to one another, possibly through multiple edges. The single-edge constraint on all connections between these core vertices causes the power-law decay of c(k) in Theorem 1.

Fig. 3 The three ranges of c(k) defined in Theorem 1 on a log-log scale

Theorem 1 shows that the local clustering not only decays in k, but also in the graph size n, for all values of k. This decay in n is caused by the locally tree-like nature of the configuration model. Figure 1 shows that in large real-world networks, c(k) is typically high for small values of k, which is unlike the behavior in the erased configuration model. The behavior of c(k) for more realistic network models is therefore an interesting question for further research. We believe that adding small communities to the configuration model, as in [10], would only change the \(k\mapsto c(k)\) curve for small values of k with respect to the erased configuration model. Low-degree vertices will then typically be in highly clustered communities and therefore have high local clustering coefficients. Most connections from high-degree vertices will be between different communities, which results in a similar \(k\mapsto c(k)\) curve for large values of k as in the erased configuration model.

Observe that in Theorem 1 the behavior of c(k) on the boundary between two different ranges may differ from the behavior inside the ranges. Since \(k\mapsto c(k)\) is a function on a discrete domain, it is always continuous. However, we can extend the scaling limit of \(k\mapsto c(k)\) to a continuous domain. Theorem 1 then shows that the scaling limit of \(k\mapsto c(k)\) is a smooth function inside the different ranges. Furthermore, filling in \(k=an^{(\tau -2)/(\tau -1)}\) in Range II of Theorem 1 suggests that \(k\mapsto c(k)\) is also a smooth function on the boundary between Ranges I and II. However, the behavior of \(k\mapsto c(k)\) on the boundary between Ranges II and III is not clear from Theorem 1. We therefore prove the following result in Sect. 6.1:

Theorem 2

For \(k=B\sqrt{n}\),

$$\begin{aligned} \frac{c(k)}{n^{2-\tau }} {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{2-2\tau }B^{-2}\int _{0}^{\infty }\int _{0}^{\infty }(t_1t_2) ^{-\tau }(1-e ^{-Bt_1})(1-e ^{-Bt_2})(1-e ^{-t_1t_2\mu })\mathrm{d}t_1\mathrm{d}t_2.\nonumber \\ \end{aligned}$$
(3.4)
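The limit in (3.4) can be evaluated numerically. The sketch below (ours, with illustrative parameter choices \(\tau =2.5\), \(\mu =2\), \(C=1\), \(B=1\); these values are not prescribed by the paper) computes the double integral with standard quadrature; the integrand behaves as \((t_1t_2)^{2-\tau }\) near the origin and as \((t_1t_2)^{-\tau }\) at infinity, so the integral converges for \(\tau \in (2,3)\).

```python
# Sketch: numerically evaluate the limit of c(B sqrt(n)) / n^{2-tau} in (3.4)
# and compare with the Range III constant of Theorem 1 evaluated at B = 1.
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

tau, mu, C, B = 2.5, 2.0, 1.0, 1.0    # illustrative choices

def integrand(t2, t1):
    return ((t1 * t2) ** (-tau) * (1 - np.exp(-B * t1))
            * (1 - np.exp(-B * t2)) * (1 - np.exp(-t1 * t2 * mu)))

val, err = dblquad(integrand, 0, np.inf, lambda t1: 0, lambda t1: np.inf)
print("Theorem 2 limit:", C**2 * mu ** (2 - 2 * tau) * B ** (-2) * val)

A = -gamma(2 - tau)                    # the constant A of Theorem 1
print("Range III constant at B=1:", mu ** (3 - 2 * tau) * C**2 * A**2)
```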

Fig. 4 The normalized version of c(k) for \(k=B\sqrt{n}\) obtained from Theorems 1 and 2

Figure 4 compares \(c(k)/n^{2-\tau }\) for \(k=B\sqrt{n}\) using Theorems 1 and 2. The line associated with Theorem 1 uses the result for Range II when \(B<1\), and the result for Range III when \(B>1\). There seems to be a discontinuity between these two ranges. Still, Fig. 4 suggests that the scaling limit of \(k\mapsto c(k)\) is smooth around \(k\approx \sqrt{n}\), because the lines are close for both small and large values of B. Theorem 3 shows that the scaling limit of \(k\mapsto c(k)\) is indeed smooth for k of the order \(\sqrt{n}\):

Theorem 3

The scaling limit of \(k\mapsto c(k)\) is a smooth function.

Most likely configurations The three different ranges in Theorem 1 result from a canonical trade-off caused by the power-law degree distribution. On the one hand, high-degree vertices participate in many triangles. In Sect. 5.1 we show that the probability that a triangle is present between vertices with degrees \(k, D_u\) and \(D_v\) can be approximated by

$$\begin{aligned} \left( 1-e ^{-kD_u/\mu n}\right) \left( 1-e ^{-kD_v/\mu n}\right) \left( 1-e ^{-D_uD_v/\mu n}\right) . \end{aligned}$$
(3.5)

The probability of this triangle thus increases with \(D_u\) and \(D_v\). On the other hand, in a power-law distribution, high degrees are rare. This creates a trade-off between the likelihood of a triangle on a \(\{k, D_u,D_v \}\)-triplet and the number of such triplets. Surely, large degrees \(D_u\) and \(D_v\) make a triangle more likely, but larger degrees are less likely to occur. Since (3.5) increases only slowly in \(D_u\) and \(D_v\) as soon as \(D_u,D_v=\varOmega (\mu n/k)\) or \(D_uD_v=\varOmega ( \mu n)\), triangles with \(D_u,D_v=\varOmega (\mu n/k)\) or with \(D_uD_v=\varOmega ( \mu n)\) intuitively only marginally increase the number of triangles. In fact, we will show that most triangles with a vertex of degree k contain two other vertices of very specific degrees: those degrees that are aligned with this trade-off. The typical degrees of \(D_u\) and \(D_v\) in a triangle with a vertex of degree k are given by \(D_u,D_v\approx \mu n/k\) or by \(D_uD_v\approx \mu n\).

Let us now formalize this reasoning. Introduce

$$\begin{aligned} W_n^k(\varepsilon ) = {\left\{ \begin{array}{ll} (u,v): D_uD_v\in [\varepsilon ,1/\varepsilon ]\mu n &{} \text {for } k=o( n^{(\tau -2)/(\tau -1)}),\\ (u,v): D_uD_v\in [\varepsilon ,1/\varepsilon ]\mu n, D_u,D_v<\mu n/(k\varepsilon )&{}\text {for } k=\varOmega ( n^{(\tau -2)/(\tau -1)}),k =o(\sqrt{n}),\\ (u,v): D_u,D_v\in [\varepsilon ,1/\varepsilon ]\mu n/k &{} \text {for } k=\varOmega (\sqrt{n}). \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.6)

Denote the number of triangles between one vertex of degree k and two other vertices u, v with \((u,v)\in W_n^k(\varepsilon )\) by \(\triangle _k(W_n^k(\varepsilon ))\). The next theorem shows that these types of triangles dominate all other triangles where one vertex has degree k:

Theorem 4

Let G be an erased configuration model where the degrees are an i.i.d. sample from a power-law distribution with exponent \(\tau \in (2,3)\). Then, for \(\varepsilon _n\rightarrow 0\) sufficiently slowly,

$$\begin{aligned} \frac{\triangle _k(W_n^k(\varepsilon _n))}{\triangle _k}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}1. \end{aligned}$$
(3.7)

For example, when \(k=\varOmega ( \sqrt{n})\), \(\triangle _k(W_n^k(\varepsilon _ n))\) denotes all triangles between a vertex of degree k and two other vertices with degrees in \([\varepsilon _n,1/\varepsilon _n]n/k\). Theorem 4 then shows that the number of these triangles dominates the number of all other types of triangles where one vertex has degree k. This holds when \(\varepsilon _n\rightarrow 0\), so that the degrees of the other two vertices cover the entire \(\varTheta (n/k)\) range. The convergence \(\varepsilon _n\rightarrow 0\) should be sufficiently slow, e.g., \(\varepsilon _n=1/\log (n)\), for several combined error terms in \(\varepsilon \) and n to go to zero.

Figure 5 illustrates the typical triangles containing a vertex of degree k as given by Theorem 4. When k is small (k in Range I or II), a typical triangle containing a vertex of degree k is a triangle with vertices u and v such that \(D_uD_v=\varTheta (n)\), as shown in Fig. 5a. Then, the probability that an edge between u and v exists is asymptotically positive and non-trivial. Since k is small, the probability that an edge exists between the vertex of degree k and u or v is small. On the other hand, when k is larger (in Range III), a typical triangle containing a vertex of degree k has vertices u and v such that \(D_u=\varTheta (n/k)\) and \(D_v=\varTheta (n/k)\). Then, the probability that an edge exists between the vertex of degree k and u or v is asymptotically positive, whereas the probability that an edge exists between vertices u and v vanishes. Figure 5b shows this typical triangle.

Fig. 5 The major contributions in the different ranges for k. The highlighted edges are present with asymptotically positive probability. a \(k<\sqrt{n}\). b \(k>\sqrt{n}\)

Figure 6 shows the typical size of the degrees of other vertices in a triangle with a vertex of degree \(k=n^\beta \). We see that when \(\beta <(\tau -2)/(\tau -1)\) (so that k is in Range I), the typical other degrees are independent of the exact value of k. This shows why c(k) is independent of k in Range I in Theorem 1. When \((\tau -2)/(\tau -1)<\beta <\tfrac{1}{2}\), we see that the range of possible degrees for vertices u and v decreases when k gets larger. Still, the range of possible degrees for \(D_u\) and \(D_v\) is quite wide. This explains the mild dependence of c(k) on k in Theorem 1 in Range II. When \(\beta >\tfrac{1}{2}\), k is in Range III. Then the typical values of \(D_u\) and \(D_v\) are considerably different from those in the previous regime. The values that \(D_u\) and \(D_v\) can take depend heavily on the value of k. This explains the dependence of c(k) on k in Range III.

Fig. 6 Visualization of the contributing degrees when \(k=n^{\beta }\) and \(D_u=n^{\alpha }\). The colored area shows the values of \(\alpha \) that contribute to \(c(n^\beta )\) (Color figure online)

Global and local clustering The global clustering coefficient divides the total number of triangles by the total number of pairs of neighbors of all vertices. In [11], we have shown that the total number of triangles in the configuration model from a vertex perspective is determined by vertices of degree proportional to \(\sqrt{n}\). Thus, only triangles between vertices on the border between Ranges II and III contribute to the global clustering coefficient. The local clustering coefficient counts all triangles where one vertex has degree k and provides a more complete picture of clustering from a vertex perspective, since it covers more types of triangles.

Hidden-variable models Our results for clustering in the erased configuration model agree with recent results for the hidden-variable model [21]. In the hidden-variable model, every vertex is equipped with a hidden variable \(w_i\), where the hidden variables are sampled from a power-law distribution. Then, vertices i and j are connected with probability \(\min (w_iw_j/n,1)\) [2, 6]. In the erased configuration model, we will use that the probability that a vertex with degree \(D_i\) is connected to a vertex with degree \(D_j\) can be approximated by

$$\begin{aligned} 1-e ^{-D_iD_j/\mu n}, \end{aligned}$$
(3.8)

which behaves similarly to \(\min (D_iD_j/n,1)\). Thus, the connection probabilities in the erased configuration model can be interpreted as the connection probabilities in the hidden-variable model, where the sampled degrees can be interpreted as the hidden variables. The major difference is that connections in the hidden-variable model are independent once the hidden variables are sampled, whereas connections in the erased configuration model are correlated once the degrees are sampled. Indeed, in the erased configuration model we know that a vertex with degree \(D_i\) has at most \(D_i\) other vertices as neighbors, so that the connections from vertex i to other vertices are correlated. Still, our results show that these correlations are small enough for the results for c(k) to be similar to those for c(k) in the hidden-variable model.

3.1 Overview of the Proof

To prove Theorem 1, we show that there is a major contributing regime for c(k), which characterizes the degrees of the other two vertices in a typical triangle with a vertex of degree k. We write this major contributing regime as \(W_n^k(\varepsilon )\) defined in (3.6). The number of triangles adjacent to a vertex of degree k is dominated by triangles between the vertex of degree k and other vertices with degrees in a specific regime, depending on k. All three ranges of k have a different spectrum of degrees that contribute to the number of triangles. We write

$$\begin{aligned} c(k)= c(k,W_n^k(\varepsilon ))+c(k,\bar{W}_n^k(\varepsilon )), \end{aligned}$$
(3.9)

where \(c(k,W_n^k(\varepsilon ))\) denotes the contribution to c(k) from triangles where the other two vertices \((u,v)\in W_n^k(\varepsilon )\) and \(c(k,\bar{W}_n^k(\varepsilon ))\) denotes the contribution to c(k) from triangles where the other two vertices \((u,v)\notin W_n^k(\varepsilon )\). Furthermore, we write the order of magnitude of c(k) as f(k, n). Theorem 1 states that this order should be

$$\begin{aligned} f(k,n) = {\left\{ \begin{array}{ll} n^{2-\tau }\log (n) &{} \text {for }k=o( n^{(\tau -2)/(\tau -1)}),\\ n^{2-\tau }\log (n/k^2) &{} \text {for } k=\varOmega (n^{(\tau -2)/(\tau -1)}), k=o( \sqrt{n}),\\ n^{5-2\tau }k^{2\tau -6} &{} \text {for } k=\varOmega (\sqrt{n}). \end{array}\right. } \end{aligned}$$
(3.10)

The proof of Theorem 1 is largely built on the following two propositions:

Proposition 1

(Main contribution)

$$\begin{aligned} \frac{c(k,W_n^k(\varepsilon ))}{f(k,n)} {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}{\left\{ \begin{array}{ll} C^2\int _{\varepsilon }^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t &{}\mathrm{for} \,\, k=o(\sqrt{n}),\\ C^2\left( \int _{\varepsilon }^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t \right) ^2&{}\mathrm{for} \,\, k=\varOmega ( \sqrt{n}).\\ \end{array}\right. } \end{aligned}$$
(3.11)

Proposition 2

(Minor contributions) There exists \(\kappa >0\) such that for all ranges

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,\bar{W}_n^k(\varepsilon ))\right] }{f(k,n)} = O_{\scriptscriptstyle \mathbb {P}}\left( \varepsilon ^\kappa \right) . \end{aligned}$$
(3.12)

We now show how these propositions prove Theorem 1. Applying Proposition 2 together with the Markov inequality yields

$$\begin{aligned} \mathbb {P}\left( c(k,\bar{W}_n^k(\varepsilon ))>Kf(k,n)\varepsilon ^\kappa \right) = O\left( K^{-1}\right) . \end{aligned}$$
(3.13)

Therefore,

$$\begin{aligned} c(k)=c(k,W_n^k(\varepsilon ))+O_\mathbb {P}\left( f(k,n)\varepsilon ^\kappa \right) . \end{aligned}$$
(3.14)

Replacing \(\varepsilon \) by \(\varepsilon _n\), and letting \(\varepsilon _n\rightarrow 0\) slowly enough for all combined error terms of \(\varepsilon _n\) and o(f(k, n)) in the expectation in (3.12) to converge to 0, already proves Theorem 4. To prove Theorems 1 and 2 we use Proposition 1, which shows that

$$\begin{aligned} \frac{c(k)}{f(k,n)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}{\left\{ \begin{array}{ll} C^2\int _\varepsilon ^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t+O(\varepsilon ^\kappa ) &{}\mathrm{for}\,\, k =o(\sqrt{n}),\\ C^2\left( \int _\varepsilon ^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t\right) ^2+O(\varepsilon ^\kappa ) &{}\mathrm{for}\,\, k=\varOmega (\sqrt{n}). \end{array}\right. } \end{aligned}$$
(3.15)

We take the limit of \(\varepsilon \rightarrow 0 \) and use that

$$\begin{aligned} \begin{aligned}&\int _{0}^{\infty }x^{1-\tau }(1-e ^{-x})\mathrm{d}x = \int _{0}^{\infty }\int _{0}^{x}x^{1-\tau }e ^{-y}\mathrm{d}y \mathrm{d}x = \int _{0}^{\infty }\int _{y}^{\infty }x^{1-\tau }e ^{-y}\mathrm{d}x \mathrm{d}y\\&\quad = -\frac{1}{2-\tau }\int _{0}^{\infty }y^{2-\tau }e ^{-y}\mathrm{d}y = -\frac{\Gamma (3-\tau )}{2-\tau }=-\Gamma (2-\tau )=:A, \end{aligned} \end{aligned}$$
(3.16)

which proves Theorem 1.\(\square \)
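The identity (3.16) is easy to verify numerically; the following sketch (ours, with the arbitrary choice \(\tau =2.5\)) splits the integral at 1 to help the quadrature with the integrable singularity at the origin.

```python
# Sketch: check that the integral of x^{1-tau}(1 - e^{-x}) over (0, infinity)
# equals A = -Gamma(2 - tau), as derived in (3.16).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

tau = 2.5
f = lambda x: x ** (1 - tau) * (1 - np.exp(-x))
integral = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
print(integral, -gamma(2 - tau))  # both are about 3.5449 for tau = 2.5
```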

The rest of the paper will be devoted to proving Propositions 1 and 2. We prove Proposition 1 using a second moment method. We can compute the expected value of c(k) conditioned on the degrees as

$$\begin{aligned} \mathbb {E}_n\left[ c(k)\right] =\frac{2\mathbb {E}_n\left[ \sum _{w:D^{\scriptscriptstyle \mathrm {(er)}}_w=k} \triangle (w)\right] }{N_kk(k-1)}, \end{aligned}$$
(3.17)

where \(\triangle (w)\) denotes the number of triangles containing vertex w and \(\mathbb {E}_n\) denotes the conditional expectation given the degrees. Let \(X_{ij}\) denote the number of edges between vertices i and j in the configuration model, and \(\hat{X}_{ij}\) the number of edges between i and j in the corresponding erased configuration model, so that \(\hat{X}_{ij}\in \{0,1\}\). Now,

$$\begin{aligned} \mathbb {E}_n\left[ \triangle (w)\mid D^{\scriptscriptstyle \mathrm {(er)}}_w=k\right] =\tfrac{1}{2}\sum _{u,v\ne w} \mathbb {P}_n(\hat{X}_{wu}= \hat{X}_{wv}=\hat{ X}_{uv}= 1\mid D^{\scriptscriptstyle \mathrm {(er)}}_w=k). \end{aligned}$$
(3.18)

Thus, to find the expected number of triangles, we need to compute the probability that a triangle between vertices u, v and w exists, which we will do in Sect. 5.1. After that, we show that this expectation converges to a constant when taking the randomness of the degrees into account, and that the variance conditioned on the degrees is small, in Sect. 5.3. Then, we prove Proposition 2 in Sect. 6 using a first moment method. We start in Sect. 4 with some preliminaries.

4 Preliminaries

We now introduce some lemmas that we will use frequently while proving Propositions 1 and 2. We let \(\mathbb {P}_n\) denote the conditional probability given \(\varvec{d}\), and \(\mathbb {E}_n\) the corresponding expectation. Furthermore, let \(\mathcal {D}_u\) denote the degree of a vertex chosen uniformly at random from [n], i.e., a uniform sample from the degree sequence \(\varvec{d}\), and let \(L_n=\sum _{i\in [n]}D_i\) denote the sum of the degrees.

4.1 Conditioning on the Degrees

In the proof of Proposition 1 we first condition on the degree sequence. We compute the clustering coefficient conditional on the degree sequence, and after that we show that this converges to the correct value when taking the random degrees into account. We will use the following lemma several times:

Lemma 1

Let G be an erased configuration model where the degrees are an i.i.d. sample from a random variable D. Then,

$$\begin{aligned} \mathbb {P}_n\left( \mathcal {D}_u\in [a,b]\right)&= O_{\scriptscriptstyle \mathbb {P}}\left( \mathbb {P}\left( D\in [a,b]\right) \right) , \end{aligned}$$
(4.1)
$$\begin{aligned} \mathbb {E}_n\left[ f(\mathcal {D}_u)\right]&=O_{\scriptscriptstyle \mathbb {P}}\left( \mathbb {E}\left[ f(D)\right] \right) . \end{aligned}$$
(4.2)

Proof

By using the Markov inequality, we obtain for \(M>0\)

$$\begin{aligned} \begin{aligned} \mathbb {P}\left( \mathbb {P}_n\left( \mathcal {D}_u\in [a,b]\right) \ge M \mathbb {P}\left( D\in [a,b]\right) \right) \le \frac{\mathbb {E}\left[ \mathbb {P}_n\left( \mathcal {D}_u\in [a,b]\right) \right] }{M\mathbb {P}\left( D\in [a,b]\right) }=\frac{1}{M}, \end{aligned} \end{aligned}$$
(4.3)

and the second claim can be proven in a very similar way. \(\square \)

In the proof of Theorem 1 we often estimate moments of D, conditional on the degrees. The following lemma shows how to bound these moments, and is a direct consequence of the Stable Law Central Limit Theorem:

Lemma 2

Let \(\mathcal {D}_u\) be the degree of a uniformly chosen vertex, where the degrees are an i.i.d. sample from a power-law distribution with exponent \(\tau \in (2,3)\). Then, for \(\alpha >\tau -1\),

$$\begin{aligned} \mathbb {E}_n\left[ \mathcal {D}_u^\alpha \right] =O_{\scriptscriptstyle \mathbb {P}}\left( n^{\alpha /(\tau -1)-1}\right) . \end{aligned}$$
(4.4)

Proof

We have

$$\begin{aligned} \mathbb {E}_n\left[ \mathcal {D}_u^\alpha \right] =\frac{1}{n}\sum _{i=1}^{n}D_i^\alpha . \end{aligned}$$
(4.5)

Since the \(D_i\) are an i.i.d. sample from a power-law distribution with exponent \(\tau \),

$$\begin{aligned} \mathbb {P}\left( D_i^\alpha>t\right) =\mathbb {P}\left( D_i>t^{1/\alpha }\right) =Ct^{-\frac{\tau -1}{\alpha }}(1+o(1)), \end{aligned}$$
(4.6)

so that the \(D_i^\alpha \) are distributed as i.i.d. samples from a power law with exponent \((\tau -1)/\alpha +1<2\). Then, by the Stable Law Central Limit Theorem (see for example [23, Theorem 4.5.1]),

$$\begin{aligned} \sum _{i=1}^{n}D_i^\alpha = O_{\scriptscriptstyle \mathbb {P}}\left( n^{\frac{\alpha }{\tau -1}}\right) , \end{aligned}$$
(4.7)

which proves the lemma. \(\square \)
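Lemma 2 is straightforward to illustrate by simulation. The sketch below (ours; \(\tau =2.5\), \(\alpha =2\) and the sample sizes are arbitrary choices) samples i.i.d. power-law degrees and shows that the rescaled empirical moment \(n^{1-\alpha /(\tau -1)}\mathbb {E}_n\left[ \mathcal {D}_u^\alpha \right] \) stays bounded as n grows, even though \(\mathbb {E}[D^\alpha ]=\infty \).

```python
# Sketch: the empirical moment (1/n) sum_i D_i^alpha is of order
# n^{alpha/(tau-1) - 1}, as claimed in (4.4).
import numpy as np

rng = np.random.default_rng(1)
tau, alpha = 2.5, 2.0              # alpha > tau - 1, so E[D^alpha] = infinity

for n in [10**4, 10**5, 10**6]:
    degrees = np.ceil(rng.uniform(size=n) ** (-1.0 / (tau - 1.0)))
    moment = np.mean(degrees ** alpha)
    scale = n ** (alpha / (tau - 1.0) - 1.0)
    print(f"n={n:>8}: rescaled moment = {moment / scale:.3f}")  # stays O_P(1)
```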

We also need to relate \(L_n\) and its expected value \(\mu n\). Define the event

$$\begin{aligned} J_n = \left\{ \left| L_n-\mu n\right| \le n^{1/(\tau -1)}\right\} . \end{aligned}$$
(4.8)

By [12], \(\mathbb {P}\left( J_n\right) =1-O(n^{-1/\tau })\) as \(n\rightarrow \infty \). When we condition on the degree sequence, we will assume that the event \(J_n\) takes place.

4.2 Erased and Non-erased Degrees

The degree sequence of the erased configuration model may differ from the degree sequence of the original configuration model. We now show that this difference is small with high probability. By [4, Eq A(9)], the probability that a half-edge incident to a vertex of degree o(n) is removed is o(1). The maximal degree in the configuration model with i.i.d. degrees is \(O_\mathbb {P}(n^{1/(\tau -1)})\), so that \(\max _{i\in [n]}D_i=o_{\scriptscriptstyle \mathbb {P}}(n)\). Therefore,

$$\begin{aligned} D_i(1-o_{\scriptscriptstyle \mathbb {P}}(1))\le D^{\scriptscriptstyle \mathrm {(er)}}_i\le D_i. \end{aligned}$$
(4.9)

Thus, in many proofs, we will exchange \(D_i\) and \(D^{\scriptscriptstyle \mathrm {(er)}}_i\) when needed.

5 Second Moment Method on Main Contribution \(W_n^k(\varepsilon )\)

We now focus on the triangles that give the main contribution. First, we condition on the degree sequence and compute the expected number of triangles in the main contributing regime. Then, we show that this expectation converges to a constant when taking the i.i.d. degrees into account. After that, we show that the variance of the number of triangles in the main contributing regime is small, and we prove Proposition 1.

5.1 Conditional Expectation Inside \(W_n^k(\varepsilon )\)

In this section, we compute the expectation of the number of triangles in the major contributing ranges of (3.6) when we condition on the degree sequence. We define

$$\begin{aligned} g_n(D_u,D_v,D_w):=(1-e ^{-D_uD_v/L_n})(1-e ^{-D_uD_w/L_n}) (1-e ^{-D_vD_w/L_n}). \end{aligned}$$
(5.1)

Then, the following lemma shows that the expectation of c(k) conditioned on the degrees is the sum of \(g_n(D_u,D_v,D_w)\) over all degrees in the major contributing regime:

Lemma 3

On the event \(J_n\) defined in (4.8),

$$\begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] =\frac{ \sum _{(u,v)\in W_n^k(\varepsilon )}g_n(k,D_u,D_v)}{k(k-1)} (1+o_{\scriptscriptstyle \mathbb {P}}(1)). \end{aligned}$$
(5.2)

Proof

By (3.17) and (3.18)

$$\begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] =\frac{\frac{1}{2 N_k}\sum _{w:D^{\scriptscriptstyle \mathrm {(er)}}_w=k}\sum _{(u,v)\in W_n^k(\varepsilon )}\mathbb {P}_n\left( \triangle _{u,v,w}=1\right) }{k(k-1)/2}, \end{aligned}$$
(5.3)

where \(\triangle _{u,v,w}\) denotes the indicator that a triangle is present on vertices u, v and w. We write the probability that a specific triangle on vertices u, v and w exists as

$$\begin{aligned} \begin{aligned} \mathbb {P}_n\left( \triangle _{u,v,w}=1\right)&=1- \mathbb {P}_n\left( {X}_{uw}=0\right) -\mathbb {P}_n\left( {X}_{vw}=0\right) -\mathbb {P}_n\left( {X}_{uv}=0\right) \\&\quad +\mathbb {P}_n\left( {X}_{uw}=X_{vw}=0\right) \\&\quad + \mathbb {P}_n\left( {X}_{uv}=X_{vw}=0\right) + \mathbb {P}_n\left( {X}_{uv}=X_{uw}=0\right) \\&\quad -\mathbb {P}_n\left( X_{uv}={X}_{uw}=X_{vw}=0\right) . \end{aligned} \end{aligned}$$
(5.4)

In the major contributing ranges, \(D_u,D_v,D_w=O_{\scriptscriptstyle \mathbb {P}}(n^{1/(\tau -1)})\), and the pairwise products of the degrees are O(n). By [11, Lemma 3.1],

$$\begin{aligned} \mathbb {P}_n\left( X_{uv}=X_{vw}=0\right) =e ^{-D_uD_v/L_n}e ^{-D_vD_w/L_n} (1+o_{\scriptscriptstyle \mathbb {P}}(n^{-(\tau -2)/(\tau -1)})) \end{aligned}$$
(5.5)

and

$$\begin{aligned} \mathbb {P}_n\left( X_{uv}\!=\! X_{vw}=X_{uw}=0\right) \!=\!e ^{-D_uD_v/L_n} e ^{-D_vD_w/L_n}e ^{-D_uD_w/L_n}(1+o_{\scriptscriptstyle \mathbb {P}}(n^{-(\tau -2)/(\tau -1)})).\nonumber \\ \end{aligned}$$
(5.6)

Therefore,

$$\begin{aligned} \begin{aligned} \mathbb {P}_n\left( \triangle _{u,v,w}=1\right)&=(1+o_{\scriptscriptstyle \mathbb {P}}(1))\left( 1-e ^{-D_uD_v/L_n}\right) \left( 1-e ^{-D_uD_w/L_n}\right) \left( 1-e ^{-D_vD_w/L_n}\right) \\&= (1+o_{\scriptscriptstyle \mathbb {P}}(1))g_n(D_u,D_v,D_w), \end{aligned} \end{aligned}$$
(5.7)

where we have used that for \(D_uD_v=O(n)\)

$$\begin{aligned} 1-e ^{-D_uD_v/L_n}(1+o_{\scriptscriptstyle \mathbb {P}}(n^{-(\tau -2)/(\tau -1)})) =(1-e ^{-D_uD_v/L_n})(1+o_{\scriptscriptstyle \mathbb {P}}(1)). \end{aligned}$$
(5.8)

By (4.9), given \(D^{\scriptscriptstyle \mathrm {(er)}}_w=k\),

$$\begin{aligned} g_n(D_w,D_u,D_v)=g_n(k,D_u,D_v)(1+o_{\scriptscriptstyle \mathbb {P}}(1)). \end{aligned}$$
(5.9)

Thus, we obtain

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right]&=\frac{\sum _{w:D^{\scriptscriptstyle \mathrm {(er)}}_w=k}\sum _{(u,v)\in W_n^k(\varepsilon )}g_n(D_w,D_u,D_v)}{N_kk(k-1)}(1+o_{\scriptscriptstyle \mathbb {P}}(1))\\&=\frac{\sum _{(u,v)\in W_n^k(\varepsilon )}g_n(k,D_u,D_v)}{k(k-1)}(1+o_{\scriptscriptstyle \mathbb {P}}(1)), \end{aligned} \end{aligned}$$
(5.10)

which proves the lemma. \(\square \)

5.2 Analysis of Asymptotic Formula

In the previous section, we have shown that, conditionally on the degrees, the expected value of c(k) in the major contributing regime is the sum of a function \(g_n(k,D_u,D_v)\) over all pairs of vertices u and v with degrees in the major contributing regime, that is,

$$\begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] \!=\!\frac{1\!+\!o_{\scriptscriptstyle \mathbb {P}}(1)}{k(k-1)}\sum _{(u,v)\!\in \! W_n^k(\varepsilon )}(1-e ^{-kD_v/L_n})(1-e ^{-kD_u/L_n})(1-e ^{-D_uD_v/L_n}).\nonumber \\ \end{aligned}$$
(5.11)

This expected value does not yet take into account that the degrees are sampled i.i.d. from a power-law distribution. In this section, we will prove that this expected value converges to a constant when we take the randomness of the degrees into account. We will make use of the following lemmas:

Lemma 4

Let \(A\subset \mathbb {R}^2\) be a bounded set and \(f(t_1,t_2)\) be a bounded, continuous function on A. Let \(M^{\scriptscriptstyle (n)}\) be a random measure such that for all \(S\subseteq A\), \(M^{\scriptscriptstyle (n)}(S){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\lambda (S)=\int _S\mathrm{d}\lambda (t_1,t_2)\) for some deterministic measure \(\lambda \). Then,

$$\begin{aligned} \int _A f(t_1,t_2)\mathrm{d}M^{\scriptscriptstyle (n)}(t_1,t_2){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _A f(t_1,t_2)\mathrm{d}\lambda (t_1,t_2). \end{aligned}$$
(5.12)

Proof

Fix \(\eta >0\). Since f is bounded and continuous on A, for any \(\varepsilon >0\), we can find \(m<\infty \), disjoint sets \((B_i)_{i\in [m]}\) and constants \((b_i)_{i\in [m]}\) such that \(\cup B_i = A\) and

$$\begin{aligned} \left| f(t_1,t_2)-\sum _{i=1}^{m}b_i\mathbbm {1}_{\left\{ (t_1,t_2)\in B_i\right\} }\right| <\varepsilon , \end{aligned}$$
(5.13)

for all \((t_1,t_2)\in A\). Because \(M^{\scriptscriptstyle (n)}(B_i){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\lambda (B_i)\) for all i,

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}\left( \left| M^{\scriptscriptstyle (n)}(B_i)-\lambda (B_i)\right| >\eta /m\right) =0. \end{aligned}$$
(5.14)

Then,

$$\begin{aligned}&\left| \int _A f(t_1,t_2)\mathrm{d}M^{\scriptscriptstyle (n)}(t_1,t_2)-\int _A f(t_1,t_2)\mathrm{d}\lambda (t_1,t_2)\right| \nonumber \\&\quad \le \left| \int _A f(t_1,t_2)-\sum _{i=1}^mb_i \mathbbm {1}_{\left\{ (t_1,t_2)\in B_i\right\} }\mathrm{d}M^{\scriptscriptstyle (n)}(t_1,t_2)\right| \nonumber \\&\qquad + \left| \int _A f(t_1,t_2)-\sum _{i=1}^mb_i \mathbbm {1}_{\left\{ (t_1,t_2)\in B_i\right\} }\mathrm{d}\lambda (t_1,t_2)\right| \nonumber \\&\qquad +\left| \sum _{i=1}^mb_i(M^{\scriptscriptstyle (n)}(B_i)-\lambda (B_i))\right| \nonumber \\&\quad \le \varepsilon M^{\scriptscriptstyle (n)}(A)+\varepsilon \lambda (A)+o_{\scriptscriptstyle \mathbb {P}}(\eta ). \end{aligned}$$
(5.15)

Now choosing \(\varepsilon <\eta /\lambda (A)\) proves the lemma. \(\square \)

The following lemma is a straightforward one-dimensional version of Lemma 4:

Lemma 5

Let \(M^{\scriptscriptstyle (n)}[a,b]\) be a random measure such that for all \(0<a<b\), \(M^{\scriptscriptstyle (n)}[a,b]{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\lambda [a,b]=\int _a^b\mathrm{d}\lambda (t)\) for some deterministic measure \(\lambda \). Let f(t) be a bounded, continuous function on \([\varepsilon ,1/\varepsilon ]\). Then,

$$\begin{aligned} \int _{\varepsilon }^{1/\varepsilon }f(t)\mathrm{d}M^{\scriptscriptstyle (n)}(t){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{\varepsilon }^{1/\varepsilon }f(t)\mathrm{d}\lambda (t). \end{aligned}$$
(5.16)

Proof

This proof follows the same lines as the proof of Lemma 4.\(\square \)

Using these lemmas, we investigate the convergence of the expectation of c(k) conditioned on the degrees. We treat the three ranges separately, but the proofs follow the same structure. First, we define a random measure \(M^{\scriptscriptstyle (n)}\) that counts the normalized number of vertices with degrees in the major contributing regime. We then show that this measure converges to a deterministic measure \(\lambda \), using that the degrees are i.i.d. samples from a power-law distribution. We then write the conditional expectation of the previous section as an integral against the measure \(M^{\scriptscriptstyle (n)}\). Then, we can use Lemma 4 or 5 to show that this converges to a deterministic integral.

First, we consider the case where k is in Range I:

Lemma 6

(Range I) For \(1<k=o( n^{(\tau -2)/(\tau -1)})\),

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{2-\tau }\log (n)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\mu ^{-\tau } C^2\frac{3-\tau }{\tau -1}\int _{\varepsilon }^{1/\varepsilon }t^{1-\tau } (1-e ^{-t})\mathrm{d}t . \end{aligned}$$
(5.17)

Proof

Since the degrees are i.i.d. samples from a power-law distribution, \(D_u=O_\mathbb {P}(n^{1/(\tau -1)})\) uniformly in \(u\in [n]\). Thus, when \(k=o(n^{(\tau -2)/(\tau -1)})\), \(kD_u=o_{\scriptscriptstyle \mathbb {P}}(n)\) uniformly in \(u\in [n]\). Therefore, we can Taylor expand the first two exponentials in (5.11), using that \(1-{\mathrm e}^{-x}=x+O(x^2)\). By Lemma 3, this leads to

$$\begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] =(1+o_{\scriptscriptstyle \mathbb {P}}(1)) \frac{k^2}{k(k-1)} \sum _{(u,v)\in W_n^k(\varepsilon )} \frac{D_uD_v(1-{\mathrm e}^{-D_uD_v/L_n})}{L_n^2}. \end{aligned}$$
(5.18)

Furthermore, since \(D_u=O_\mathbb {P}(n^{1/(\tau -1)})\) while also \(D_uD_v=\varTheta (n)\) in the major contributing regime, we can add the indicator that \(K_1n^{(\tau -2)/(\tau -1)}<D_u<K_2n^{1/(\tau -1)}\) for \(0<K_1,K_2<\infty \). We then define the random measure

$$\begin{aligned} M^{\scriptscriptstyle (n)}[a,b] = \frac{(\mu n)^{\tau -1}}{\log (n)n^2}\sum _{u\ne v\in [n]}\mathbbm {1}_{\left\{ D_uD_v\in n\mu [a,b], K_1n^{(\tau -2)/(\tau -1)}<D_u<K_2n^{1/(\tau -1)}\right\} }. \end{aligned}$$
(5.19)

We write the expected value of this measure as

$$\begin{aligned} \begin{aligned} \mathbb {E}\left[ M^{\scriptscriptstyle (n)}[a,b]\right]&= \frac{(\mu n)^{\tau -1}}{\log (n)n^2}\mathbb {E}\left[ \left| \left\{ u\ne v: D_uD_v\in [a,b]\mu n, D_u\in [K_1n^{(\tau -2)/(\tau -1)},K_2n^{\frac{1}{\tau -1}}]\right\} \right| \right] \\&= \frac{(\mu n)^{\tau -1}(n-1)}{\log (n)n}\mathbb {P}\left( D_1D_2\in [a,b]\mu n, D_1\in [K_1n^{(\tau -2)/(\tau -1)},K_2n^{\frac{1}{\tau -1}}]\right) \\&= \frac{(\mu n)^{\tau -1}}{\log (n)}\int _{K_1n^{(\tau -2)/(\tau -1)}}^{K_2n^{\frac{1}{\tau -1}}}\int _{a\mu n/x}^{b\mu n/x}C^2(xy)^{-\tau }\mathrm{d}y \mathrm{d}x \\&= C^2\frac{(\mu n)^{\tau -1}(n-1)}{\log (n)n}\int _{K_1n^{(\tau -2)/(\tau -1)}}^{K_2n^{\frac{1}{\tau -1}}}\frac{1}{x} \mathrm{d}x \int _{a\mu n}^{b\mu n}u^{-\tau }\mathrm{d}u \\&=C^2\frac{n-1}{n}\int _{a}^{b}t^{-\tau }\mathrm{d}t \left( \frac{3-\tau }{\tau -1} +\frac{\log (K_2/K_1)}{\log (n)}\right) , \end{aligned} \end{aligned}$$
(5.20)

where we have used the change of variables \(u=xy\) and \(t=u/(\mu n)\). Thus,

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}\left[ M^{\scriptscriptstyle (n)}[a,b]\right] = C^2\frac{3-\tau }{\tau -1}\int _{a}^{b}t^{-\tau }\mathrm{d}t =:\lambda [a,b]. \end{aligned}$$
(5.21)

Furthermore, the variance of this measure can be written as

$$\begin{aligned} \begin{aligned} Var \left( M^{\scriptscriptstyle (n)}[a,b]\right)&= \frac{(\mu n)^{2\tau -6}\mu ^2}{\log ^2(n)}\sum _{u,v,w,z}\Big ({\mathbb {P}}\Big (D_uD_v,D_wD_z\in \mu n[a,b],\\&\qquad D_u,D_w\in [K_1n^{(\tau -2)/(\tau -1)},K_2n^{\frac{1}{\tau -1}}]\Big )\\&\quad -\mathbb {P}\left( D_uD_v\in \mu n[a,b], D_u\in [K_1n^{(\tau -2)/(\tau -1)},K_2n^{\frac{1}{\tau -1}}]\right) \\&\quad \times \mathbb {P}\left( D_wD_z\in \mu n[a,b],D_w\in [K_1n^{(\tau -2)/(\tau -1)},K_2n^{\frac{1}{\tau -1}}]\right) \Big ). \end{aligned}\nonumber \\ \end{aligned}$$
(5.22)

Since the degrees are sampled i.i.d. from a power-law distribution, the contribution to the variance for \(|\{u,v,w,z\}|=4\) is zero. The contribution from \(|\{u,v,w,z\}|=3\) can be bounded as

$$\begin{aligned} \begin{aligned}&\frac{(\mu n)^{2\tau -6}\mu ^2}{\log ^2(n)}\sum _{u,v,w}\mathbb {P}\left( D_uD_v,D_uD_w\!\in \!\mu n[a,b]\right) = \frac{\mu ^{2\tau -4}n^{2\tau -3}}{\log ^2(n)}\mathbb {P}\left( {D}_1{D}_2,{D}_1{D}_3\in \mu n[a,b]\right) \\&\qquad = \frac{\mu ^{2\tau -4}n^{2\tau -3}}{\log ^2(n)}\int _{1}^{\infty } Cx^{-\tau }\left( \int _{an/x}^{bn/x}Cy^{-\tau }\mathrm{d}y\right) ^2\mathrm{d}x\\&\qquad \le K\frac{n^{-1}}{\log ^2(n)}, \end{aligned} \end{aligned}$$
(5.23)

for some constant K. Similarly, the contribution for \(u=z\), \(v=w\) can be bounded as

$$\begin{aligned} \begin{aligned} \frac{(\mu n)^{2\tau -6}\mu ^2}{\log ^2(n)}\sum _{u,v}\mathbb {P}\left( D_uD_v\in \mu n[a,b]\right)&= \frac{\mu ^{2\tau -6}n^{2\tau -4}}{\log ^2(n)}\mathbb {P}\left( {D}_1{D}_2\in \mu n[a,b]\right) \\&\le K\frac{n^{2\tau -4}}{\log ^2(n)}n^{1-\tau }\log (n)= K\frac{n^{\tau -3}}{\log (n)}, \end{aligned} \end{aligned}$$
(5.24)

for some constant K. Thus, \(Var \left( M^{\scriptscriptstyle (n)}[a,b]\right) =o(1)\). Therefore, a second moment method yields that for every \(a,b>0\),

$$\begin{aligned} M^{\scriptscriptstyle (n)}[a,b]{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\lambda [a,b]. \end{aligned}$$
(5.25)

Using the definition of \(M^{\scriptscriptstyle (n)}\) in (5.19) and that \(L_n^{-1}=(\mu n)^{-1}(1+o_{\scriptscriptstyle \mathbb {P}}(1))\),

$$\begin{aligned} \begin{aligned}&\sum _{(u,v)\in W_n^k(\varepsilon )} \frac{D_uD_v(1-{\mathrm e}^{-D_uD_v/L_n})}{L_n^2} =\mu ^{1-\tau }n^{3-\tau }\log (n)\int _{\varepsilon }^{1/\varepsilon }\frac{t}{L_n}(1\!-\!e ^{-t})\mathrm{d}M^{\scriptscriptstyle (n)}(t) \\&\quad =\mu ^{-\tau }n^{2-\tau }\log (n)\int _{\varepsilon }^{1/\varepsilon } t(1-e ^{-t})\mathrm{d}M^{\scriptscriptstyle (n)}(t)(1+o_\mathbb {P}(1)). \end{aligned} \end{aligned}$$
(5.26)

By Lemma 5 and (5.25),

$$\begin{aligned} \begin{aligned} \int _{\varepsilon }^{1/\varepsilon }t(1-e ^{-t})\mathrm{d}M^{\scriptscriptstyle (n)}(t)&{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{\varepsilon }^{1/\varepsilon }t(1-e ^{-t})\mathrm{d}\lambda (t)\\&= C^2\frac{3-\tau }{\tau -1} \int _{\varepsilon }^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t. \end{aligned} \end{aligned}$$
(5.27)

If we first let \(n\rightarrow \infty \), and then \(K_1\rightarrow 0\) and \(K_2\rightarrow \infty \), then we obtain from (5.18), (5.26) and (5.27) that

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{2-\tau }\log (n)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{-\tau }\frac{3-\tau }{\tau -1} \int _{\varepsilon }^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t. \end{aligned}$$
(5.28)

\(\square \)

Lemma 7

(Range II) When \(k=\varOmega (n^{(\tau -2)/(\tau -1)})\) and \(k=o(\sqrt{n})\),

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{2-\tau }\log (n/k^2)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{-\tau }\int _{\varepsilon }^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t. \end{aligned}$$
(5.29)

Proof

We split the major contributing regime into three parts, depending on the values of \(D_u\) and \(D_v\), as visualized in Fig. 7. We denote the contribution to the clustering coefficient where \(D_u\in [k/\varepsilon ^2,\varepsilon n/k]\) (area A of Fig. 7) by \(c_1(k,W_n^k(\varepsilon ))\), the contribution from \(D_u\) or \(D_v\in [\varepsilon n/k,n/(\varepsilon k)]\) (area B of Fig. 7) by \(c_2(k,W_n^k(\varepsilon ))\) and the contribution from \(D_u\in [k,k/\varepsilon ^2]\) and \(D_v\in [\varepsilon ^3n/k,\varepsilon n/k]\) (area C of Fig. 7) by \(c_3(k,W_n^k(\varepsilon ))\). We first study the contribution of area A. In this situation, \(D_u,D_v<\varepsilon n/k\), so that we can Taylor expand the exponentials \(e ^{-kD_u/L_n}\) and \(e ^{-kD_v/L_n}\) in (5.11). This results in

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c_1(k,W_n^k(\varepsilon ))\right]&=\frac{1}{k^2}\sum _{\begin{array}{c} (u,v)\in W_n^k(\varepsilon ),\\ D_u\in [k/\varepsilon ^2,\varepsilon n/k] \end{array}} \left( 1-{\mathrm e}^{-kD_u/L_n}\right) \left( 1-{\mathrm e}^{-kD_v/L_n}\right) \left( 1-{\mathrm e}^{-D_uD_v/L_n}\right) \\&= (1+o_\mathbb {P}(1))\sum _{\begin{array}{c} (u,v)\in W_n^k(\varepsilon ),\\ D_u\in [k/\varepsilon ^2,\varepsilon n/k] \end{array}} \frac{D_uD_v}{L_n^2}(1-{\mathrm e}^{-D_uD_v/L_n}). \end{aligned} \end{aligned}$$
(5.30)

Fig. 7 Contributing regime for \(n^{(\tau -2)/(\tau -1)}<k<\sqrt{n}\)

Now we define the random measure

$$\begin{aligned} M^{\scriptscriptstyle (n)}_1[a,b] = \frac{(\mu n)^{\tau -1}}{\log (\varepsilon ^3n/k^2)n^2}\sum _{u,v\in [n]}\mathbbm {1}_{\left\{ D_uD_v\in \mu n[a,b], D_u\in [k/\varepsilon ^2,\varepsilon n/k]\right\} }. \end{aligned}$$
(5.31)

A similar reasoning as in (5.25) shows that

$$\begin{aligned} M^{\scriptscriptstyle (n)}_1[a,b]{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\int _{a}^{b}t^{-\tau }\mathrm{d}t := \lambda _2[a,b]. \end{aligned}$$
(5.32)

By (5.30), we can write the contribution to the expected value of c(k) in this regime as

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c_1(k,W_n^k(\varepsilon ))\right]&=(1+o_{\scriptscriptstyle \mathbb {P}}(1)) \sum _{\begin{array}{c} (u,v)\in W_n^k(\varepsilon ),\\ D_u\in [k/\varepsilon ^2,\varepsilon n/k] \end{array}} \frac{D_uD_v}{L_n^2}(1-{\mathrm e}^{-D_uD_v/L_n}) \\&=(1+o_{\scriptscriptstyle \mathbb {P}}(1)) \mu ^{-\tau }n^{2-\tau }\log (\varepsilon ^3n/k^2) \int _{\varepsilon }^{1/\varepsilon }t(1-e ^{-t})\mathrm{d}M^{\scriptscriptstyle (n)}_1 (t) . \end{aligned}\nonumber \\ \end{aligned}$$
(5.33)

Thus, by Lemma 5,

$$\begin{aligned} \mathbb {E}_n\left[ c_1(k,W_n^k(\varepsilon ))\right] =(1+o_{\scriptscriptstyle \mathbb {P}}(1)) 2 \mu ^{-\tau }n^{2-\tau }\log (\varepsilon ^3n/k^2) \int _{\varepsilon }^{1/\varepsilon }t(1-e ^{-t})\mathrm{d}\lambda _2(t).\nonumber \\ \end{aligned}$$
(5.34)

Then we study the contribution of area B in Fig. 7. This area consists of two parts, the part where \(D_u\in [\varepsilon n/k, n/(k\varepsilon )]\), and the part where \(D_v\in [\varepsilon n/k, n/(k\varepsilon )]\). By symmetry, these two contributions are the same and therefore we only consider the case where \(D_u\in [\varepsilon n/k, n/(k\varepsilon )]\). Then, we can Taylor expand \(e ^{-D_vk/L_n}\) in (5.11), which yields

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c_2(k,W_n^k(\varepsilon ))\right]&=\frac{2}{k^2}\sum _{\begin{array}{c} (u,v)\in W_n^k(\varepsilon ),\\ D_u>\varepsilon n/k \end{array}} \left( 1-{\mathrm e}^{-kD_u/L_n}\right) \frac{D_v k}{L_n}\left( 1-{\mathrm e}^{-D_uD_v/L_n}\right) . \end{aligned} \end{aligned}$$
(5.35)

Define the random measure

$$\begin{aligned} M^{\scriptscriptstyle (n)}_2([a,b],[c,d]):=\frac{(\mu n)^{\tau -1}}{n^2} \sum _{u,v\in [n]}\mathbbm {1}_{\left\{ D_uD_v\in \mu n[a,b], D_u\in (\mu n/k)[c,d]\right\} }. \end{aligned}$$
(5.36)

Then we obtain

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c_2(k,W_n^k(\varepsilon ))\right]&=\frac{2}{k L_n}\sum _{\begin{array}{c} (u,v)\in W_n^k(\varepsilon ),\\ D_u>\varepsilon n/k \end{array}}\frac{L_n}{D_u k}\left( 1\!-\!{\mathrm e}^{-kD_u/L_n}\right) \frac{D_uD_v k}{L_n}\left( 1\!-\!{\mathrm e}^{-D_uD_v/L_n}\right) \\&=2\mu ^{-\tau }n^{2-\tau }\int _{\varepsilon }^{1/\varepsilon } \int _{\varepsilon }^{1/\varepsilon }\frac{t_1}{t_2}(1-e ^{-t_1}) (1-e ^{-t_2})\mathrm{d}M^{\scriptscriptstyle (n)}_2(t_1,t_2) (1+o_\mathbb {P}(1)). \end{aligned}\nonumber \\ \end{aligned}$$
(5.37)

Again, using a first moment method and a second moment method, we can show that

$$\begin{aligned} M^{\scriptscriptstyle (n)}_2([a,b],[c,d]){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\int _{a}^{b}t^{-\tau }\mathrm{d}t \int _{c}^{d}\frac{1}{v} \mathrm{d}v=: \lambda [a,b]\nu [c,d]. \end{aligned}$$
(5.38)

Very similarly to the proof of Lemma 4 we can show that

$$\begin{aligned} \begin{aligned}&\int _{\varepsilon }^{1/\varepsilon }\int _{\varepsilon } ^{1/\varepsilon }\frac{t_1}{t_2}(1-e ^{-t_1})(1-e ^{-t_2})\mathrm{d}M^{\scriptscriptstyle (n)}_2(t_1,t_2){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\\&\quad \int _{\varepsilon }^{1/\varepsilon }\int _{\varepsilon } ^{1/\varepsilon }\frac{t_1}{t_2}(1-e ^{-t_1})(1-e ^{-t_2})\mathrm{d}\lambda (t_1)\mathrm{d}\nu (t_2) . \end{aligned} \end{aligned}$$
(5.39)

The latter integral can be written as

$$\begin{aligned} \begin{aligned}&\int _{\varepsilon }^{1/\varepsilon } \int _{\varepsilon }^{1/\varepsilon }\frac{t_1}{t_2}(1-e ^{-t_1}) (1-e ^{-t_2})\mathrm{d}\lambda (t_1)\mathrm{d}\nu (t_2)\\&\qquad = C^2\int _{\varepsilon }^{1/\varepsilon }\int _{\varepsilon } ^{1/\varepsilon }t_2^{-2}t_1^{1-\tau }(1-e ^{-t_2})(1-e ^{-t_1})\mathrm{d}t_1\mathrm{d}t_2\\&\qquad = C^2\int _{\varepsilon }^{1/\varepsilon }\frac{1}{t_2^2} (1-e ^{-t_2})\mathrm{d}t_2\int _{\varepsilon }^{1/\varepsilon }t_1^{1-\tau } (1-e ^{-t_1})\mathrm{d}t_1. \end{aligned} \end{aligned}$$
(5.40)

The first of these integrals evaluates to

$$\begin{aligned} \begin{aligned} \int _{\varepsilon }^{1/\varepsilon }\frac{1}{t_2^2}(1-e ^{-t_2})\mathrm{d}t_2&= \left[ \frac{e ^{-t_2}-1}{t_2}-E_1(t_2)\right] _{t_2 =\varepsilon }^{t_2=1/\varepsilon }\\&=\varepsilon (e ^{-1/\varepsilon }-1)-\frac{e ^{-\varepsilon }-1}{\varepsilon }-\int _{1/\varepsilon }^{\infty }\frac{1}{u}e ^{-u}\mathrm{d}u-\gamma -\log (\varepsilon )+\sum _{j=1}^{\infty }\frac{(-1)^{j+1}\varepsilon ^j}{j!\, j}\\&=\log \left( \frac{1}{\varepsilon }\right) +f(\varepsilon ), \end{aligned} \end{aligned}$$
(5.41)

where \(E_1(t)=\int _{t}^{\infty }u^{-1}e ^{-u}\mathrm{d}u\) denotes the exponential integral, \(\gamma \) is the Euler–Mascheroni constant, and we have used the series expansion \(E_1(\varepsilon )=-\gamma -\log (\varepsilon )+\sum _{j=1}^{\infty }(-1)^{j+1}\varepsilon ^j/(j!\, j)\). The remainder satisfies \(f(\varepsilon )<\infty \) for fixed \(\varepsilon \in (0,\infty )\); in fact, \(f(\varepsilon )\rightarrow 1-\gamma \) as \(\varepsilon \rightarrow 0\).
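This limit is easily checked numerically; the sketch below (ours) evaluates \(\int _\varepsilon ^{1/\varepsilon }t^{-2}(1-e ^{-t})\mathrm{d}t-\log (1/\varepsilon )\) for decreasing \(\varepsilon \) and compares it with \(1-\gamma \approx 0.4228\).

```python
# Sketch: f(eps) from (5.41) converges to 1 - gamma as eps -> 0.
import numpy as np
from scipy.integrate import quad

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    integral = quad(lambda t: (1 - np.exp(-t)) / t**2, eps, 1 / eps, limit=200)[0]
    print(f"eps={eps:.0e}: f(eps) = {integral - np.log(1 / eps):.4f}")
print(f"1 - Euler gamma = {1 - np.euler_gamma:.4f}")
```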

Finally, we study the contribution of area C in Fig. 7, where \(D_u\in [k,k/\varepsilon ^2]\) and \(D_v\in [\varepsilon ^3 n/k,\varepsilon n/k]\). In this regime, \(D_uk=o( n)\) and \(D_vk=o(n)\) so that we can Taylor expand the first two exponentials in (5.11). This results in

$$\begin{aligned} \mathbb {E}_n\left[ c_3(k,W_n^k(\varepsilon ))\right] =(1+o(1))\sum _{\begin{array}{c} u,v: D_v\in [\varepsilon ^3 n/k,\varepsilon n/k], D_uD_v>\varepsilon n,\\ D_u\in [k,k/\varepsilon ^2] \end{array}}(1-{\mathrm e}^{-D_uD_v/L_n})\frac{D_uD_v}{L_n^2} .\nonumber \\ \end{aligned}$$
(5.42)

We define the random measure

$$\begin{aligned} M^{\scriptscriptstyle (n)}_3([a,b],[c,d]):=\frac{(\mu n)^{\tau -1}}{n^2}\sum _{u,v}\mathbbm {1}_{\left\{ D_u\in \sqrt{\mu }k[a,b],D_v\in (\sqrt{\mu } n/k)[c,d]\right\} }. \end{aligned}$$
(5.43)

Then,

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c_3(k,W_n^k(\varepsilon ))\right]&=(1+o_\mathbb {P}(1)) n^{2-\tau }\mu ^{-\tau }\int _{1}^{1/\varepsilon ^2}\int _{\varepsilon /t_1} ^{\varepsilon }(t_1t_2)(1-e ^{-t_1t_2})\mathrm{d}M^{\scriptscriptstyle (n)}_3(t_1,t_2) . \end{aligned}\nonumber \\ \end{aligned}$$
(5.44)

Again using a first moment method and a second moment method we can show that

$$\begin{aligned} M^{\scriptscriptstyle (n)}_3([a,b],[c,d]){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\int _{a}^{b}u^{-\tau }\mathrm{d}u\int _{c}^{d}v^{-\tau }\mathrm{d}v. \end{aligned}$$
(5.45)

In a similar way, we can show that for \(B\subseteq [1,1/\varepsilon ^2]\times [\varepsilon ^3,\varepsilon ]\), \(M^{\scriptscriptstyle (n)}_3(B){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\int \int _B(uv)^{-\tau }\mathrm{d}u\mathrm{d}v\). Thus, by Lemma 4,

$$\begin{aligned} \int _{1}^{1/\varepsilon ^2}\int _{\varepsilon /t_1}^{\varepsilon } (t_1t_2)(1-e ^{-t_1t_2})\mathrm{d}M^{\scriptscriptstyle (n)}_3(t_1,t_2){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\int _{1}^{1/\varepsilon ^2}\int _{\varepsilon /x}^{\varepsilon } (xy)^{1-\tau }(1-e ^{-xy})\mathrm{d}y \mathrm{d}x.\nonumber \\ \end{aligned}$$
(5.46)

We evaluate the latter integral as

$$\begin{aligned} \begin{aligned} \int _{1}^{1/\varepsilon ^2}\int _{\varepsilon /x}^{\varepsilon }(xy) ^{1-\tau }(1-e ^{-xy}) \mathrm{d}y \mathrm{d}x&= \int _{1}^{1/\varepsilon ^2}\int _{\varepsilon }^{\varepsilon v}\frac{1}{v}u^{1-\tau }(1-e ^{-u})\mathrm{d}u\mathrm{d}v \\&=\int _{\varepsilon }^{1/\varepsilon }\int _{u/\varepsilon }^{1/\varepsilon ^2} \frac{1}{v}u^{1-\tau }(1-e ^{-u})\mathrm{d}v\mathrm{d}u\\&= \log \left( \frac{1}{\varepsilon }\right) \int _{\varepsilon } ^{1/\varepsilon }u^{1-\tau }(1-e ^{-u})\mathrm{d}u\\&\quad + \int _{\varepsilon }^{1/\varepsilon }\log \left( \frac{1}{u}\right) u^{1-\tau }(1-e ^{-u})\mathrm{d}u, \end{aligned} \end{aligned}$$
(5.47)

where we have used the change of variables \(u=xy\) and \(v=x\). Summing the three contributions to the conditional expectation \(\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] \) yields

$$\begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right]= & {} \mathbb {E}_n\left[ c_1(k,W_n^k(\varepsilon ))\right] +\mathbb {E}_n\left[ c_2(k,W_n^k(\varepsilon ))\right] +\mathbb {E}_n\left[ c_3(k,W_n^k(\varepsilon ))\right] \nonumber \\= & {} C^2\mu ^{-\tau } n^{2-\tau }(1+o_{\scriptscriptstyle \mathbb {P}}(1))\Bigg [\int _{\varepsilon } ^{1/\varepsilon }t_1^{1-\tau }(1-e ^{-t_1})\mathrm{d}t_1\nonumber \\&\times \left( \log \left( \frac{n\varepsilon ^2}{k^2}\right) +3\log \left( \frac{1}{\varepsilon }\right) +2 f(\varepsilon )\right) +\int _{\varepsilon }^{1/\varepsilon }\log \left( \frac{1}{u}\right) u^{1-\tau }(1-e ^{-u})\mathrm{d}u\Bigg ]\nonumber \\= & {} C^2(1+o_{\scriptscriptstyle \mathbb {P}}(1))\mu ^{-\tau } n^{2-\tau }\Bigg [\int _{\varepsilon }^{1/\varepsilon }t_1^{1-\tau }(1-e ^{-t_1}) \mathrm{d}t_1\left( \log \left( \frac{n}{k^2}\right) +\log \left( \frac{1}{\varepsilon }\right) +2 f(\varepsilon )\right) \nonumber \\&+\int _{\varepsilon }^{1/\varepsilon }\log \left( \frac{1}{u}\right) u^{1-\tau }(1-e ^{-u})\mathrm{d}u\Bigg ]. \end{aligned}$$
(5.48)

Dividing by \(n^{2-\tau }\log (n/k^2)\) and letting \(n\rightarrow \infty \), so that the \(\varepsilon \)-dependent terms vanish because \(\log (n/k^2)\rightarrow \infty \) for \(k=o(\sqrt{n})\), then shows that

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{2-\tau }\log (n/k^2)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{-\tau }\int _{\varepsilon }^{1/\varepsilon }x^{1-\tau }(1-e ^{-x})\mathrm{d}x. \end{aligned}$$
(5.49)

\(\square \)
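
As a sanity check on the change of variables in (5.47), both sides can be compared numerically. A minimal sketch (not part of the proof), assuming numpy and scipy are available and using illustrative values for \(\tau \) and \(\varepsilon \):

```python
# Numerically verify the change-of-variables identity (5.47).
import numpy as np
from scipy.integrate import dblquad, quad

tau, eps = 2.5, 0.1
f = lambda y, x: (x * y) ** (1 - tau) * (1 - np.exp(-x * y))
g = lambda u: u ** (1 - tau) * (1 - np.exp(-u))

lhs = dblquad(f, 1, 1 / eps**2, lambda x: eps / x, lambda x: eps)[0]
rhs = (np.log(1 / eps) * quad(g, eps, 1 / eps)[0]
       + quad(lambda u: np.log(1 / u) * g(u), eps, 1 / eps)[0])
print(lhs, rhs)  # the two values should agree up to quadrature error
```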

Lemma 8

(Range III) For \(k=\varOmega ( \sqrt{n})\),

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{5-2\tau }k^{2\tau -6}}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{3-2\tau }\left( \int _{\varepsilon }^{1/\varepsilon }t^ {1-\tau }(1-e ^{-t})\mathrm{d}t\right) ^2. \end{aligned}$$
(5.50)

Proof

When \(k=\varOmega (\sqrt{n})\), the major contribution is from u, v with \(D_u,D_v=\varTheta (n/k)\), so that \(D_uD_v=o(n)\). Therefore, we can Taylor expand the exponential \(e ^{-D_uD_v/L_n}\) in (5.11). Thus, we write the expected value of c(k) as

$$\begin{aligned} \begin{aligned} \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right]&=\frac{1}{k^2}\sum _{{(u,v)\in W_n^k(\varepsilon )}} \left( 1-{\mathrm e}^{-kD_u/L_n}\right) \left( 1-{\mathrm e}^{-kD_v/L_n}\right) \left( 1-{\mathrm e}^{-D_uD_v/L_n}\right) (1+o_{\mathbb {P}}(1)) \\&= \frac{1}{k^2}\sum _{(u,v)\in W_n^k(\varepsilon )} \left( 1-{\mathrm e}^{-kD_u/L_n}\right) \left( 1-{\mathrm e}^{-kD_v/L_n}\right) \frac{D_uD_v}{L_n}(1+o_{\scriptscriptstyle \mathbb {P}}(1)). \end{aligned}\nonumber \\ \end{aligned}$$
(5.51)

Define the random measure

$$\begin{aligned} N^{\scriptscriptstyle (n)}_1[a,b]=\frac{(\mu n)^{\tau -1}}{n}k^{1-\tau } \sum _{u\in [n]} \mathbbm {1}_{\{D_u\in (\mu n/k)[a,b]\}}, \end{aligned}$$
(5.52)

and let \(N^{\scriptscriptstyle (n)}\) be the product measure \(N^{\scriptscriptstyle (n)}_1\times N^{\scriptscriptstyle (n)}_1\). Since all degrees are i.i.d. samples from a power-law distribution, the number of vertices with degrees in interval \([q_1,q_2]\) is distributed as a \(\text {Bin}(n,C(q_1^{1-\tau }-q_2^{1-\tau }))\) random variable. Therefore,

$$\begin{aligned} \begin{aligned} N^{\scriptscriptstyle (n)}_1\left( [a,b]\right)&=\frac{(\mu n)^{\tau -1}k^{1-\tau }}{n}\left| \{i: D_i\in (\mu n/k)[a,b]\}\right| \\&{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\lim _{n\rightarrow \infty }(\mu n)^{\tau -1}k^{1-\tau }\mathbb {P}\left( D_i\in (\mu n/k)[a,b]\right) \\&= (\mu n)^{\tau -1}k^{1-\tau }\int _{a\mu n/k}^{b\mu n/k} Cx^{-\tau }\mathrm{d}x=C\int _{a}^{b}t^{-\tau }\mathrm{d}t:=\lambda ([a,b]), \end{aligned} \end{aligned}$$
(5.53)

where we have used the substitution \(t=xk/(\mu n)\). Then,

$$\begin{aligned}&\sum _{(u,v)\in W_n^k(\varepsilon )} \left( 1-{\mathrm e}^{-kD_u/L_n}\right) \left( 1-{\mathrm e}^{-kD_v/L_n}\right) \frac{D_uD_v}{L_n}\nonumber \\&\quad =\frac{L_n}{k^2} \sum _{(u,v)\in W_n^k(\varepsilon )} \left( 1-{\mathrm e}^{-kD_u/L_n}\right) \left( 1-{\mathrm e}^{-kD_v/L_n}\right) \frac{D_u k}{L_n}\frac{D_v k}{L_n}\nonumber \\&\quad =(1+o_{\scriptscriptstyle \mathbb {P}}(1))\mu ^{3-2\tau } n^{5-2\tau }k^{2\tau -4}\int _{\varepsilon }^{1/\varepsilon } \int _{\varepsilon }^{1/\varepsilon } t_1t_2(1\!-\!e ^{-t_1})(1\!-\!e ^{-t_2})\mathrm{d}N^{\scriptscriptstyle (n)}(t_1,t_2). \end{aligned}$$
(5.54)

Combining this with (5.51) yields

$$\begin{aligned} \begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{5-2\tau }k^{2\tau -6}}&=(1+o_{\scriptscriptstyle \mathbb {P}}(1))\mu ^{3-2\tau }\int _{\varepsilon }^{1/\varepsilon }\int _{\varepsilon }^{1/\varepsilon } t_1t_2(1-e ^{-t_1})(1-e ^{-t_2})\mathrm{d}N^{\scriptscriptstyle (n)}(t_1,t_2) \\&=(1+o_{\scriptscriptstyle \mathbb {P}}(1))\mu ^{3-2\tau }\left( \int _{\varepsilon }^{1/\varepsilon } t_1(1-e ^{-t_1})\mathrm{d}N^{\scriptscriptstyle (n)}_1(t_1)\right) ^2. \\ \end{aligned} \end{aligned}$$
(5.55)

We then use Lemma 5, which shows that

$$\begin{aligned}&\int _{\varepsilon }^{1/\varepsilon } t_1(1-e ^{-t_1})\mathrm{d}N^{\scriptscriptstyle (n)}_1(t_1){\mathop {\longrightarrow }\limits ^{\mathbb {P}}}\int _{\varepsilon }^{1/\varepsilon } t_1(1-e ^{-t_1})\mathrm{d}\lambda (t_1) \nonumber \\&\quad = C \int _{\varepsilon }^{1/\varepsilon } t_1^{1-\tau }(1-e ^{-t_1})\mathrm{d}t_1. \end{aligned}$$
(5.56)

Then, we can conclude from (5.55) and (5.56) that

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{n^{5-2\tau }k^{2\tau -6}}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{3-2\tau }\left( \int _{\varepsilon }^{1/\varepsilon } t_1^{1-\tau }(1-e ^{-t_1})\mathrm{d}t_1\right) ^2. \end{aligned}$$
(5.57)

\(\square \)
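
Lemma 8 can be illustrated by simulating the conditional expectation in (5.51) directly. The following minimal sketch (not part of the proof) samples i.i.d. degrees from a pure Pareto law \(\mathbb {P}(D\ge x)=x^{1-\tau }\), \(x\ge 1\), so that \(C=\tau -1\) in the density convention \(Cx^{-\tau }\) used above; the values of \(n\), \(k\), \(\tau \) and \(\varepsilon \) are illustrative, and the agreement is only approximate at finite n:

```python
# Monte Carlo illustration of Lemma 8 (Range III). By (5.51),
# E_n[c(k, W)] is approximately (1/(k^2 L_n)) * S^2, where
# S = sum over the degree window of (1 - e^{-k D_u / L_n}) D_u.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
tau, eps, n = 2.5, 0.05, 10**6
k = 5 * int(np.sqrt(n))                      # k = Omega(sqrt(n))

D = (1 - rng.random(n)) ** (1 / (1 - tau))   # i.i.d. Pareto(tau) degrees, D >= 1
Ln, mu = D.sum(), D.mean()

# degrees in the dominating window D_u in (mu * n / k) * [eps, 1/eps]
w = D[(D >= eps * mu * n / k) & (D <= mu * n / (eps * k))]
S = np.sum((1 - np.exp(-k * w / Ln)) * w)
lhs = (S**2 / (k**2 * Ln)) / (n ** (5 - 2 * tau) * k ** (2 * tau - 6))

C = tau - 1
A_eps = quad(lambda t: t ** (1 - tau) * (1 - np.exp(-t)), eps, 1 / eps)[0]
rhs = C**2 * mu ** (3 - 2 * tau) * A_eps**2
print(lhs, rhs)  # should be of comparable size for large n
```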

5.3 Variance of the Local Clustering Coefficient

In the following lemma, we give a bound on the variance of \(c(k,W_n^k(\varepsilon ))\):

Lemma 9

For all ranges, under \(J_n\),

$$\begin{aligned} \frac{Var _n\left( c(k,W_n^k(\varepsilon ))\right) }{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon )) \right] ^2}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0. \end{aligned}$$
(5.58)

Proof

We analyze the variance in much the same way as we analyzed the expected value of c(k) conditioned on the degrees in Sect. 5.1. We can write the variance of \(c(k,W_n^k(\varepsilon ))\) as

$$\begin{aligned} \begin{aligned}&Var _n\left( c(k,W_n^k(\varepsilon ))\right) = \frac{1}{k^2(k-1)^2N_k^2}\sum _{i,j:D^{\scriptscriptstyle \mathrm {(er)}}_i,D^{\scriptscriptstyle \mathrm {(er)}}_j=k}\\ \quad&\sum _{(u,v),(w,z)\in W_n^k(\varepsilon )}\mathbb {P}_n\left( \triangle _{i,u,v} \triangle _{j,w,z}\right) -\mathbb {P}_n\left( \triangle _{i,u,v}\right) \mathbb {P}_n\left( \triangle _{j,w,z}\right) , \end{aligned} \end{aligned}$$
(5.59)

where \(\triangle _{i,u,v}\) again denotes the event that vertices i, u and v form a triangle. Equation (5.59) splits into various cases, depending on the size of \(\{i,j,u,v,w,z\}\). We denote the contribution of \(\left| \{i,j,u,v,w,z\}\right| =r\) to the variance by \(V^{\scriptscriptstyle {(r)}}(k)\). We first consider \(V^{\scriptscriptstyle {(6)}}(k)\). By reasoning similar to (5.7),

$$\begin{aligned} \begin{aligned}&V^{\scriptscriptstyle {(6)}}(k) = \frac{1}{N_k^2k^2(k-1)^2} \sum _{i,j:D^{\scriptscriptstyle \mathrm {(er)}}_i,D^{\scriptscriptstyle \mathrm {(er)}}_j=k}\sum _{(u,v),(w,z)\in W_n^k(\varepsilon )} \Big (g_n(k,D_u,D_v)\\&\qquad \times g_n(k,D_w,D_z)(1+o_{\scriptscriptstyle \mathbb {P}}(1)) - g_n(k,D_u,D_v)g_n(k,D_w,D_z) (1+o_{\scriptscriptstyle \mathbb {P}}(1)) \Big )\\&\quad = \sum _{(u,v),(w,z)\in W_n^k(\varepsilon )}o_{\scriptscriptstyle \mathbb {P}}\left( \frac{g_n(k,D_u,D_v) g_n(k,D_w,D_z)}{k^2(k-1)^2}\right) \\&\quad = o_{\scriptscriptstyle \mathbb {P}}\left( \mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] ^2\right) , \end{aligned} \end{aligned}$$
(5.60)

where we have again replaced \(g_n(D_i,D_u,D_v)\) by \(g_n(k,D_u,D_v)\) because of (5.9). Since there are no overlapping edges when \(|\{i,j,u,v,w,z\}|=5\), \(V^{\scriptscriptstyle {(5)}}(k)\) can be bounded similarly. This already shows that the contribution to the variance from terms involving five or six distinct vertices is small in all three ranges of k.

We then consider the contribution \(V^{\scriptscriptstyle {(4)}}(k)\), which comes from pairs of triangles that share an edge. We show that such overlapping triangles are rare, so that their contribution to the variance is small. If for example \(i=j\) and \(u=z\), then the two triangles share an edge incident to the vertex of degree k. To bound this contribution, we use that \(\mathbb {P}_n\left( \hat{X}_{ij}=1\right) \le \min \left( 1,\frac{D_iD_j}{L_n}\right) \). Then we can bound the summand in (5.59) as

$$\begin{aligned} \begin{aligned} \mathbb {P}_n\left( \triangle _{i,u,v}\triangle _{i,w,u}\right)&-\mathbb {P}_n\left( \triangle _{i,u,v}\right) \mathbb {P}_n\left( \triangle _{i,w,u}\right) \\&\le \mathbb {P}_n\left( \triangle _{i,u,v}\triangle _{i,w,u}\right) \\&\le \min \left( 1,\frac{kD_u}{L_n}\right) \min \left( 1, \frac{kD_v}{L_n-2}\right) \min \left( 1,\frac{D_uD_v}{L_n-4}\right) \\&\quad \times \min \left( 1,\frac{kD_w}{L_n-6}\right) \min \left( 1,\frac{D_wD_u}{L_n-8}\right) \\&= (1+O\left( n^{-1}\right) ) \min \left( 1,\frac{kD_u}{L_n}\right) \min \left( 1,\frac{kD_v}{L_n}\right) \min \left( 1,\frac{D_uD_v}{L_n}\right) \\&\quad \times \min \left( 1,\frac{kD_w}{L_n}\right) \min \left( 1,\frac{D_wD_u}{L_n}\right) . \end{aligned} \end{aligned}$$
(5.61)

We first consider k in Ranges I or II, where \(k=o(\sqrt{n})\). For the minima involving k we take the second term of the minimum, while we bound the minima involving \(D_uD_v\) and \(D_wD_u\) by one. Combining this with (5.61) results in the bound

$$\begin{aligned}&\mathbb {P}_n\left( \triangle _{i,u,v}\triangle _{i,w,u}\right) -\mathbb {P}_n\left( \triangle _{i,u,v}\right) \mathbb {P}_n\left( \triangle _{i,w,u}\right) \nonumber \\&\quad \le (1+O\left( n^{-1}\right) ) \frac{k^3D_uD_vD_w}{L_n^3}\le O(1) \varepsilon ^{-1}\frac{k^3D_w}{L_n^2}, \end{aligned}$$
(5.62)

where we have used that \(D_uD_v<n/\varepsilon \) when \((u,v)\in W_n^k(\varepsilon )\). Therefore, the contribution to the variance in this situation can be bounded by

$$\begin{aligned} \begin{aligned} \frac{k^3}{k^4 N_k^2}\sum _{i:D^{\scriptscriptstyle \mathrm {(er)}}_i=k}\sum _{(u,v),(w,u) \in W_n^k(\varepsilon )}\frac{\varepsilon ^{-1}D_w}{L_n^2}&= \frac{1}{k N_k}\sum _{(u,v),(w,u)\in W_n^k(\varepsilon )} \frac{\varepsilon ^{-1}D_w}{L_n^2} \\&\le \frac{\varepsilon ^{-1}}{k N_k}O\left( n^{-1}\right) \sum _{u\in [n]} \frac{1}{\varepsilon D_u}\left( \sum _{w\in [n]}\mathbbm {1}_{\left\{ D_w>\varepsilon n/ D_u\right\} } \right) ^2, \end{aligned}\nonumber \\ \end{aligned}$$
(5.63)

where we have used that \(D_w=O(n/D_u)\) in \(W_n^k(\varepsilon )\). We then use Lemma 1 to further bound this as

$$\begin{aligned} \begin{aligned} \frac{k^3}{k^4 N_k^2}\sum _{i:D^{\scriptscriptstyle \mathrm {(er)}}_i=k}\sum _{(u,v),(w,u)\in W_n^k(\varepsilon )}\frac{\varepsilon ^{-1}D_w}{L_n^2}&\le K(\varepsilon ) O_{\scriptscriptstyle \mathbb {P}}\left( \frac{1}{nk^{1-\tau }}\sum _u \left( \frac{n}{D_u}\right) ^{3-2\tau }\right) \\&\le K(\varepsilon )O_{\scriptscriptstyle \mathbb {P}}\left( n^{3-2\tau }k^{\tau -1}n^{(2-\tau )/(\tau -1)}\right) . \end{aligned} \end{aligned}$$
(5.64)

Here \(K(\varepsilon )\) is a constant depending only on \(\varepsilon \). Since \(n^{(2-\tau )/(\tau -1)}k^{\tau -1}=o(n)\) when \(k=o(\sqrt{n})\) and \(\tau \in (2,3)\), we have proven that this contribution is smaller than \(n^{4-2\tau }\log ^2(n)\) and smaller than \(n^{4-2\tau }\log ^2(n/k^2)\), as required by Lemmas 6 and 7 respectively. Now we consider the contribution from triangles that share the edge between vertices u and v. Using reasoning similar to (5.61), the contribution from the case \(i\ne j\), \(u=z\) and \(v=w\) can be bounded as

$$\begin{aligned} \begin{aligned} \frac{1}{k^4N_k^2}\sum _{i,j:D^{\scriptscriptstyle \mathrm {(er)}}_i,D^{\scriptscriptstyle \mathrm {(er)}}_j=k}&\sum _{(u,v)\in W_n^k(\varepsilon )}\mathbb {P}_n\left( \triangle _{i,u,v} \triangle _{j,u,v}\right) -\mathbb {P}_n\left( \triangle _{i,u,v}\right) \mathbb {P}_n\left( \triangle _{j,u,v}\right) \\&\le \sum _{(u,v)\in W_n^k(\varepsilon )} \frac{k^4D_u^2D_v^2}{k^4L_n^4}\le \varepsilon ^{-2} \mathbb {P}_n\left( (\mathcal {D}_u,\mathcal {D}_v)\in W_n^k(\varepsilon )\right) \\&= \varepsilon ^{-2} O_{\scriptscriptstyle \mathbb {P}}\left( n^{1-\tau }\log (n)\right) , \end{aligned} \end{aligned}$$
(5.65)

where we have used Lemma 1 and that \(D_uD_v=O(n)\) when \((u,v)\in W_n^k(\varepsilon )\). Since \(n^{1-\tau }\log (n)=o(n^{4-2\tau }\log ^2(n))\) for \(\tau \in (2,3)\), this shows that this contribution is small enough.

When k is in Range III, we use similar bounds for \(V^{\scriptscriptstyle (4)}\), now using that \(D_u,D_v,D_w< \varepsilon ^{-1}n/k\). If \(N_k=0\), then by definition \(Var _n\left( c(k)\right) =0\). Therefore, we only consider the case \(N_k\ge 1\). Again, we start by considering the case \(i=j\) and \(u=z\). We can use (5.61), where we use that \(D_uD_v<n^2/(k\varepsilon )^2\) and \(D_uD_w<n^2/(k\varepsilon )^2\), and we take 1 for the other minima. This yields

$$\begin{aligned} \mathbb {P}_n\left( \triangle _{i,u,v}\triangle _{i,w,u}\right) -\mathbb {P}_n\left( \triangle _{i,u,v}\right) \mathbb {P}_n\left( \triangle _{i,w,u}\right) \le O(n^2)k^{-4}\varepsilon ^{-4}. \end{aligned}$$
(5.66)

Thus, the contribution to the variance from this case can be bounded as

$$\begin{aligned} \begin{aligned} \frac{1}{k^4N_k}\sum _{(u,v),(u,w)\in W_n^k(\varepsilon )} O(n^2)k^{-4}\varepsilon ^{-4}&\le O_{\scriptscriptstyle \mathbb {P}}\left( n^{5}k^{-8}\varepsilon ^{-4} \mathbb {P}\left( D>\varepsilon n/ k\right) ^3\right) \\&\le O_{\scriptscriptstyle \mathbb {P}}\left( n^{5}k^{-8}\varepsilon ^{-4}\left( \frac{\varepsilon n}{k}\right) ^{3-3\tau }\right) \\&=O_{\scriptscriptstyle \mathbb {P}}\left( k^{3\tau -11}n^{8-3\tau }\right) \varepsilon ^{-1-3\tau }, \end{aligned} \end{aligned}$$
(5.67)

where we used Lemma 1. When \(k=\varOmega ( \sqrt{n})\) and \(\tau \in (2,3)\), this contribution is smaller than \(n^{10-4\tau }k^{4\tau -12}\), as required by Lemma 8. In the case where \(i\ne j\), \(u=z\) and \(v=w\), we use reasoning similar to (5.61) to show that

$$\begin{aligned} \mathbb {P}_n\left( \triangle _{i,u,v}\triangle _{j,u,v}\right) -\mathbb {P}_n\left( \triangle _{i,u,v}\right) \mathbb {P}_n\left( \triangle _{j,u,v}\right) \le O(n)k^{-2}\varepsilon ^{-2}. \end{aligned}$$
(5.68)

Then the contribution of this situation to the variance can be bounded as

$$\begin{aligned} \frac{1}{k^4}\sum _{(u,v)\in W_n^k(\varepsilon )}O(n)k^{-2}\varepsilon ^{-2}\le O\left( \varepsilon ^{-2}n^3k^{-6}\left( \frac{\varepsilon n}{ k}\right) ^{2-2\tau }\right) = O\left( \varepsilon ^{-2\tau }n^{5-2\tau }k^{2\tau -8}\right) . \end{aligned}$$
(5.69)

Again, this is smaller than \(n^{10-4\tau }k^{4\tau -12}\), as required. Thus, the contribution of \(V^{\scriptscriptstyle {(4)}}\) is small enough in all three ranges.

Finally, \(V^{\scriptscriptstyle {(3)}}\) can be bounded as

$$\begin{aligned} \frac{1}{k^4N_k^2}\sum _{i:D_i=k}\sum _{(u,v)\in W_n^k(\varepsilon )}\mathbb {P}_n\left( \triangle _{i,u,v}\right) = \frac{1}{k^4N_k}\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] = \frac{1}{k^4N_k}O_{\scriptscriptstyle \mathbb {P}}\left( f(k,n)\right) .\nonumber \\ \end{aligned}$$
(5.70)

In Ranges I and II, we use that \(N_k=O_{\scriptscriptstyle \mathbb {P}}\left( nk^{-\tau }\right) \). Thus, this gives a contribution of

$$\begin{aligned} V^{\scriptscriptstyle {(3)}}(k)=O_{\scriptscriptstyle \mathbb {P}}\left( \frac{n^{2-\tau }\log (n)}{k^{4-\tau }n}\right) =O_{\scriptscriptstyle \mathbb {P}}\left( n^{1-\tau }\log (n)k^{\tau -4}\right) , \end{aligned}$$
(5.71)

which is small enough since \(n^{1-\tau }k^{\tau -4}<n^{4-2\tau }\) for \(\tau \in (2,3)\) and \(k=o(\sqrt{n})\). In Range III, again we assume that \(N_k\ge 1\), since otherwise the variance of c(k) would be zero, and therefore small enough. Then (5.70) gives the bound

$$\begin{aligned} V^{\scriptscriptstyle {(3)}}(k)=O_{\scriptscriptstyle \mathbb {P}}\left( n^{5-2\tau }k^{2\tau -10}\right) , \end{aligned}$$
(5.72)

which is again smaller than \(n^{10-4\tau }k^{4\tau -12}\) for \(\tau \in (2,3)\) and \(k=\varOmega ( \sqrt{n})\). Thus, all contributions to the variance are small enough, which proves the claim. \(\square \)

Proof of Proposition 1

Combining Lemma 9, Chebyshev's inequality applied conditionally on the degrees, and the fact that \(\mathbb {P}\left( J_n\right) =1-O(n^{-1/\tau })\) shows that

$$\begin{aligned} \frac{c(k,W_n^k(\varepsilon ))}{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}1. \end{aligned}$$
(5.73)
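
Indeed, conditionally on the degrees, Chebyshev's inequality gives, for every \(\delta >0\),

$$\begin{aligned} \mathbb {P}_n\left( \left| \frac{c(k,W_n^k(\varepsilon ))}{\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] }-1\right| >\delta \right) \le \frac{Var _n\left( c(k,W_n^k(\varepsilon ))\right) }{\delta ^2\mathbb {E}_n\left[ c(k,W_n^k(\varepsilon ))\right] ^2}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}0. \end{aligned}$$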

Then, Lemmas 7 and 8 show that

$$\begin{aligned} \frac{c(k,W_n^k(\varepsilon ))}{f(k,n)}{\mathop {\longrightarrow }\limits ^{\mathbb {P}}}{\left\{ \begin{array}{ll} C^2\int _\varepsilon ^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t+O(\varepsilon ^\kappa ) &{} \mathrm{for}\,\, k=o(\sqrt{n})\\ C^2\left( \int _\varepsilon ^{1/\varepsilon }t^{1-\tau }(1-e ^{-t})\mathrm{d}t\right) ^2+O(\varepsilon ^\kappa ) &{} \mathrm{for}\,\, k=\varOmega (\sqrt{n}), \end{array}\right. } \end{aligned}$$
(5.74)

which proves the proposition.\(\square \)

6 Contributions Outside \(W_n^k(\varepsilon )\)

In this section, we show that the contribution of triangles with degrees outside of the major contributing ranges described in (3.6) is negligible. The following lemma bounds the contribution from triangles for which the degrees \((D_u,D_v)\) fall outside of \(W_n^k(\varepsilon )\):

Lemma 10

There exists \(\kappa >0\) such that

$$\begin{aligned} \frac{\mathbb {E}_n\left[ c(k,\bar{W}_n^k(\varepsilon ))\right] }{f(k,n)}= O_{\scriptscriptstyle \mathbb {P}}\left( \varepsilon ^{\kappa }\right) . \end{aligned}$$
(6.1)

Proof

To compute the expected value of c(k), we use that \(\mathbb {P}_n\left( \hat{X}_{ij}=1\right) \le \min (1,\frac{D_iD_j}{L_n})\). This yields

$$\begin{aligned} \mathbb {E}_n\left[ c(k)\right] \le \frac{n^2\mathbb {E}_n\left[ \min (1,\frac{k\mathcal {D}_u}{L_n})\min (1,\frac{k\mathcal {D}_v}{L_n})\min (1,\frac{\mathcal {D}_u\mathcal {D}_v}{L_n})\right] }{k(k-1)}. \end{aligned}$$
(6.2)

Using Lemma 1, we obtain

$$\begin{aligned} \mathbb {E}_n\left[ c(k)\right] =n^2k^{-2}O_{\scriptscriptstyle \mathbb {P}}\left( \mathbb {E}\left[ \min \left( 1,\frac{k D_u}{\mu n}\right) \min \left( 1,\frac{k D_v}{ \mu n}\right) \min \left( 1,\frac{D_uD_v}{\mu n}\right) \right] \right) , \end{aligned}$$
(6.3)

where \(D_u\) and \(D_v\) are two independent copies of D. Similarly,

$$\begin{aligned}&\mathbb {E}_n\left[ c(k,\bar{W}_n^k(\varepsilon ))\right] \nonumber \\&\quad =n^2k^{-2}O_{\scriptscriptstyle \mathbb {P}}\left( \mathbb {E}\left[ \min \left( 1,\frac{k D_u}{\mu n}\right) \min \left( 1,\frac{k D_v}{ \mu n}\right) \min \left( 1,\frac{D_uD_v}{\mu n}\right) \mathbbm {1}_{\left\{ (D_u,D_v)\in \bar{W}_n^k(\varepsilon )\right\} }\right] \right) ,\nonumber \\ \end{aligned}$$
(6.4)

where

$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \min \left( 1,\frac{k D_u}{\mu n}\right) \min \left( 1,\frac{k D_v}{ \mu n}\right) \min \left( 1,\frac{D_uD_v}{\mu n}\right) \mathbbm {1}_{\left\{ (D_u,D_v)\in \bar{W}_n^k(\varepsilon )\right\} }\right] \\&\quad = \int \int _{(x,y)\in \bar{W}_n^k(\varepsilon )}(xy)^{-\tau }\min \left( 1,\frac{k x}{\mu n}\right) \min \left( 1,\frac{k y}{ \mu n}\right) \min \left( 1,\frac{xy}{\mu n}\right) \mathrm{d}y \mathrm{d}x. \end{aligned} \end{aligned}$$
(6.5)

We analyze this expression separately for all three ranges of k. For ease of notation, we assume that \(\mu =1\) in the rest of this section.

We first consider Range I, where \(k=o( n^{(\tau -2)/(\tau -1)})\). Then we have to show that the contribution from vertices u and v such that \(D_uD_v<\varepsilon n\) or \(D_uD_v>n/\varepsilon \) is small. First, we study the contribution to (6.5) for \(D_uD_v<\varepsilon n\). We bound this contribution by taking the second term of the minimum in all three cases, which gives

$$\begin{aligned} \frac{k^2}{n^3}\int _{1}^{n}\int _{1}^{\varepsilon n/x}(xy)^{2-\tau }\mathrm{d}y \mathrm{d}x = \frac{k^2}{n^3}\int _{1}^{n}\frac{1}{x} \int _{x}^{\varepsilon n}u^{2-\tau }\mathrm{d}u \mathrm{d}x = \frac{k^2\varepsilon ^{3-\tau }}{3-\tau }O\left( n^{-\tau }\log (n)\right) .\nonumber \\ \end{aligned}$$
(6.6)

Then, we study the contribution for \(D_uD_v>n/\varepsilon \). This contribution can be bounded very similarly, by taking \(\frac{kD_u}{L_n}\), \(\frac{kD_v}{L_n}\) and 1 for the three minima in (6.5), as

$$\begin{aligned} \frac{k^2}{n^2}\int _{1}^{n}\int _{n/(\varepsilon x)}^n(xy)^{1-\tau }\mathrm{d}y \mathrm{d}x = \frac{k^2}{n^2}\int _{1}^{n}\frac{1}{x} \int _{n/\varepsilon }^{nx}u^{1-\tau }\mathrm{d}u \mathrm{d}x = \frac{k^2\varepsilon ^{\tau -2}}{\tau -2} O\left( n^{-\tau }\log (n)\right) .\nonumber \\ \end{aligned}$$
(6.7)

Thus, by (6.4),

$$\begin{aligned} \mathbb {E}_n\left[ c(k,\bar{W}_n^k(\varepsilon ))\right] =O_{\scriptscriptstyle \mathbb {P}}\left( n^{2-\tau } \log (n)\varepsilon ^{\kappa }\right) . \end{aligned}$$
(6.8)

Dividing (6.8) by \(n^{2-\tau }\log (n)\) and taking the limit for \(n\rightarrow \infty \) then proves the lemma in Range I.

Now we consider Range II, where \(k=\varOmega (n^{(\tau -2)/(\tau -1)})\) and \( k=o( \sqrt{n})\). We show that the contribution from vertices u and v such that \(D_uD_v<\varepsilon n\), \(D_uD_v>n/\varepsilon \) or \(D_u,D_v>n/(k\varepsilon )\) is small. We first show that the contribution to (6.5) for \(D_u>n/(k\varepsilon )\) is small. In this setting, \(D_uk>n\), so that the first minimum in (6.5) equals 1. The contribution can be computed as

$$\begin{aligned} \begin{aligned}&\int _{n/(k\varepsilon )}^\infty \int _1^\infty (xy) ^{-\tau }\min \left( 1,\frac{k y}{ n}\right) \min \left( 1,\frac{xy}{n}\right) \mathrm{d}y \mathrm{d}x\\&=\frac{k}{n^2}\int _{n/(\varepsilon k)}^{\infty } \int _{1}^{n/x}x^{1-\tau }y^{2-\tau }\mathrm{d}y\mathrm{d}x + \frac{k}{n} \int _{n/(k\varepsilon )}^{\infty }\int _{n/x}^{n/k}x^{-\tau }y^{1-\tau } \mathrm{d}y\mathrm{d}x\\&\quad +\int _{n/(k\varepsilon )}^{\infty }\int _{n/k}^{\infty } x^{-\tau }y^{-\tau }\mathrm{d}y\mathrm{d}x\\&= k^2O\left( n^{-\tau }\right) + k^2O\left( n^{-\tau }\right) + \varepsilon ^{\tau -1} O\left( n^{2-2\tau }k^{2\tau -2}\right) . \end{aligned} \end{aligned}$$
(6.9)

By (6.4), multiplying by \(n^{2}k^{-2}\), dividing by \(n^{2-\tau }\log (n/k^2)\) and letting n go to infinity shows that this contribution is small. Thus, we may assume that \(D_u,D_v<n/(k\varepsilon )\). Now we show that the contribution from \(D_uD_v<\varepsilon n\) is negligible. Then \(D_uD_v<n\), so that the third minimum in (6.5) equals \(D_uD_v/n\). The contribution splits into various cases, depending on the size of \(D_u\):

$$\begin{aligned} \begin{aligned}&\frac{1}{n} \int \int _{xy<\varepsilon n}(xy)^{1-\tau }\min \left( 1,\frac{k x}{ n}\right) \min \left( 1,\frac{k y}{n}\right) \mathrm{d}y \mathrm{d}x \\&\quad = \int _{1}^{k}\int _{1}^{\varepsilon n/x}(xy)^{-\tau }\frac{kx^2y}{L_n^2}\mathrm{d}y \mathrm{d}x\\&\quad \quad +\int _{k}^{n/k}\int _{1}^{\varepsilon n/x}(xy)^{-\tau }\frac{k^2x^2y^2}{L_n^3}\mathrm{d}y \mathrm{d}x + \int _{n/k}^{\infty }\int _{1}^{\varepsilon n/x}(xy)^{-\tau }\frac{kxy^2}{L_n^2}\mathrm{d}y \mathrm{d}x\\&\quad = k^2O\left( n^{-\tau }\right) \varepsilon ^{2-\tau }+ k^2 \varepsilon n^{-\tau }O\left( \log (n/k^2)\right) + k^2O\left( n^{-\tau }\right) \varepsilon ^{3-\tau }. \end{aligned} \end{aligned}$$
(6.10)

The contribution of \(D_uD_v>n/\varepsilon \) can be bounded similarly as

$$\begin{aligned} \begin{aligned}&\int \int _{xy>n/\varepsilon }(xy)^{-\tau }\min \left( 1,\frac{k x}{ n}\right) \min \left( 1,\frac{k y}{n}\right) \mathrm{d}y \mathrm{d}x \\&\quad = \int _1^{k}\int _{n/(\varepsilon x)}^\infty (xy)^{-\tau }\frac{kx}{L_n}\mathrm{d}y \mathrm{d}x + \int _{k}^{n/k}\int _{n/(\varepsilon x)}^\infty (xy)^{-\tau }\frac{k^2xy}{L_n^2}\mathrm{d}y \mathrm{d}x\\&\quad \quad +\int _{n/k}^\infty \int _{n/(\varepsilon x)}^\infty (xy)^{-\tau }\frac{ky}{L_n}\mathrm{d}y \mathrm{d}x\\&\quad = k^2\varepsilon ^{\tau -1}O\left( n^{-\tau }\right) + k^2\varepsilon ^{\tau -2}O\left( n^{-\tau }\log (n/k^2)\right) + k^2O\left( n^{-\tau }\right) \varepsilon ^{\tau -2}. \end{aligned} \end{aligned}$$
(6.11)

By (6.4), multiplying by \(n^2k^{-2}\) and then dividing by \(n^{2-\tau }\log (n/k^2)\) proves the lemma in Range II.

Finally, we prove the lemma in Range III, where \(k=\varOmega (\sqrt{n})\). Here we have to show that the contribution from \(D_u, D_v<\varepsilon n/k\) or \(D_u,D_v>n/(\varepsilon k)\) is small. We again bound this using (6.5). The contribution to (6.5) for \(D_u>n/(k\varepsilon )\) can be computed as

$$\begin{aligned} \begin{aligned}&\int _{n/(k\varepsilon )}^\infty \int _1^\infty (xy)^{-\tau }\min \left( 1,\frac{k y}{ n}\right) \min \left( 1,\frac{xy}{n}\right) \mathrm{d}y \mathrm{d}x\\&= \int _{\frac{n}{k\varepsilon }}^{k}\int _{n/x}^{\infty }x^{-\tau } y^{-\tau }\mathrm{d}y\mathrm{d}x + \int _{\frac{n}{k\varepsilon }}^{k}\int _{n/k} ^{n/x}\frac{1}{n} x^{-\tau +1}y^{-\tau +1}\mathrm{d}y\mathrm{d}x + \int _{\frac{n}{k\varepsilon }}^{k}\int _{0}^{n/k}\frac{k}{n^2} x^{-\tau +1}y^{-\tau +2}\mathrm{d}y\mathrm{d}x\\&\quad + \int _{k}^\infty \int _{n/k}^{\infty }x^{-\tau }y^{-\tau }\mathrm{d}y\mathrm{d}x + \int _{k}^\infty \int _{n/x}^{n/k}\frac{k}{n} x^{-\tau }y^{-\tau +1}\mathrm{d}y\mathrm{d}x + \int _{k}^{\infty }\int _{0}^{n/x}\frac{k}{n^2} x^{-\tau +1}y^{-\tau +2}\mathrm{d}y\mathrm{d}x\\&=O\left( \log \left( \frac{k^2\varepsilon }{n}\right) n^{1-\tau }\right) + O\left( \varepsilon ^{\tau -2}k^{2\tau -4}n^{3-2\tau }\right) + O\left( n^{1-\tau }\right) +O\left( \varepsilon ^{\tau -2}n^{3-2\tau }k^{2\tau -4}\right) \\&\quad + O\left( n^{1-\tau }\right) + O\left( n^{1-\tau } \right) + O\left( n^{1-\tau }\right) = O\left( \varepsilon ^{\tau -2}k^{2\tau -4}n^{3-2\tau }\right) . \end{aligned} \end{aligned}$$
(6.12)

Multiplying this by \(n^2k^{-2}\) and then dividing by \(n^{5-2\tau }k^{2\tau -6}\) shows that this contribution is small.

Then we study the contribution to (6.5) for \(D_u<\varepsilon n/k\). This can be computed as

$$\begin{aligned} \begin{aligned}&\frac{k}{n}\int _1^{\varepsilon n/k} \int _1^\infty x ^{1-\tau }y^{-\tau }\min \left( 1,\frac{k y}{ n}\right) \min \left( 1,\frac{xy}{n}\right) \mathrm{d}y \mathrm{d}x\\&\quad =\int _0^{\frac{n\varepsilon }{k}}\int _{0}^{n/k}\frac{k^2}{n^3}x^{-\tau +2}y^{-\tau +2}\mathrm{d}y\mathrm{d}x +\int _0^{\frac{n\varepsilon }{k}}\int _{n/k}^{n/x}\frac{k}{n^2}x^{-\tau +2}y^{-\tau +1}\mathrm{d}y\mathrm{d}x\\&\quad \quad +\int _0^{\frac{n\varepsilon }{k}}\int _{n/x}^{\infty } \frac{k}{n}x^{-\tau +1}y^{-\tau }\mathrm{d}y\mathrm{d}x \\&\quad = O\left( \varepsilon ^{3-\tau } k^{2\tau -4}n^{3-2\tau } \right) + O\left( \varepsilon ^{3-\tau } k^{2\tau -4}n^{3-2\tau }\right) + O\left( \varepsilon n^{1-\tau }\right) = O\left( \varepsilon ^{3-\tau }k^{2\tau -4}n^{3-2\tau }\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(6.13)

Thus, dividing these estimates by \(n^{3-2\tau }k^{2\tau -4}\) (which by (6.4) corresponds to dividing \(\mathbb {E}_n[ c(k,\bar{W}_n^k(\varepsilon ))] \) by \(n^{5-2\tau }k^{2\tau -6}\)) and noting that \(n^{1-\tau }<n^{3-2\tau }k^{2\tau -4}\) for \(k=\varOmega (\sqrt{n})\) and \(k=o( n)\) completes the proof in Range III. \(\square \)

6.1 Proof of Theorem 2

We now show how we adjust the proof of Theorem 1 to prove Theorem 2. We use the same major contributing triangles as the ones in Range III in (3.6). Then, in fact Lemmas 3–9 and Proposition 2 still hold. It is straightforward to derive a lemma similar to Lemma 8 for the situation \(k=\varTheta (\sqrt{n})\). The only difference with the proof of Lemma 8 is that we do not Taylor expand the exponentials in (5.51). This then proves Theorem 2. \(\square \)

6.2 Proof of Theorem 3

We now prove that the scaling limit of \(k\mapsto c(k)\) is continuous around \(k=\sqrt{n}\). When B is large, we rewrite (3.4) as

$$\begin{aligned} \frac{c(k)}{n^{2-\tau }} {\mathop {\longrightarrow }\limits ^{\mathbb {P}}}C^2\mu ^{2-2\tau }B^{2\tau -4}\int _{0}^{\infty }\int _{0}^{\infty }(xy) ^{-\tau }(1-e ^{-x})(1-e ^{-y})(1-e ^{-xy\mu /B^2})\mathrm{d}x\mathrm{d}y.\nonumber \\ \end{aligned}$$
(6.14)

Taylor expanding the last exponential then yields

$$\begin{aligned} \begin{aligned} \frac{c(k)}{n^{2-\tau }}&= (1+o_{\mathbb {P}}(1))C^2\mu ^{3-2\tau }B^{2\tau -6}\int _{0}^{\infty }\int _{0} ^{\infty }(xy)^{1-\tau }(1-e ^{-x})(1-e ^{-y})\mathrm{d}x\mathrm{d}y \\&= (1+o_{\mathbb {P}}(1))C^2\mu ^{3-2\tau }B^{2\tau -6}A^2. \end{aligned} \end{aligned}$$
(6.15)

Substituting \(k=B\sqrt{n}\) in Range III of Theorem 1 gives

$$\begin{aligned} \frac{c(k)}{n^{2-\tau }} =(1+o_{\mathbb {P}}(1)) C^2\mu ^{3-2\tau }B^{2\tau -6}A^2, \end{aligned}$$
(6.16)

which is the same as the result obtained from Theorem 2. Therefore, the scaling limit of \(k\mapsto c(k)\) is smooth for \(k>\sqrt{n}\).

For B small, we can Taylor expand the first two exponentials in (3.4) as long as x and y are much smaller than 1 / B. The contribution where \(x,y<1/B\) and \(B<\mu xy<1/B\) can be written as

$$\begin{aligned} \begin{aligned}&C^2\mu ^{2-2\tau }\left( \int _{B^2}^{1}\int _{B/(\mu x)} ^{1/B}(xy)^{1-\tau }(1-e ^{-\mu xy})\mathrm{d}y\mathrm{d}x+\int _{1}^{1/B} \int _{B/(\mu x)}^{1/(Bx)}(xy)^{1-\tau }(1-e ^{-\mu xy})\mathrm{d}y\mathrm{d}x\right) \\&= C^2\mu ^{-\tau }\left( \int _{B^2}^{1}\int _{B}^{v/B} \frac{1}{v}u^{1-\tau }(1-e ^{-u})\mathrm{d}u\mathrm{d}v+\int _{1}^{1/B} \int _{B}^{1/B}\frac{1}{v}u^{1-\tau }(1-e ^{-u})\mathrm{d}u\mathrm{d}v\right) \\&= C^2\mu ^{-\tau }\left( \log (1/B^2)\int _{B}^{1/B}u^{1-\tau } (1-e ^{-u})\mathrm{d}u+\int _{B}^{1/B}\log (1/u)u^{1-\tau }(1-e ^{-u})\mathrm{d}u\right) , \end{aligned} \end{aligned}$$
(6.17)

where we have used the change of variables \(u=\mu xy\) and \(v=x\). The second integral becomes small compared to the first as B gets small, since it remains bounded as \(B\downarrow 0\), whereas the first grows as \(\log (1/B^2)\). We can show that the contributions from \(x,y>1/B\), or from \(xy>1/B\), can also be neglected by using that \(1-e ^{-x}\le \min (1,x)\). Thus, as B becomes small, Theorem 2 shows that c(k) for \(k=B\sqrt{n}\) can be approximated by

$$\begin{aligned} \frac{c(k)}{n^{2-\tau }}\approx C^2\mu ^{-\tau }\log (B^{-2})\int _{0}^{\infty }u^{1-\tau }(1-e ^{-u})\mathrm{d}u, \end{aligned}$$
(6.18)

which agrees with the value for \(k=B\sqrt{n}\) in Range II of Theorem 1.
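
The change of variables in (6.17) can also be verified numerically; a minimal sketch (with \(\mu =1\), an illustrative value of B, and assuming numpy and scipy):

```python
# Numerically verify the change-of-variables identity (6.17) with mu = 1.
import numpy as np
from scipy.integrate import dblquad, quad

tau, B = 2.5, 0.1
f = lambda y, x: (x * y) ** (1 - tau) * (1 - np.exp(-x * y))
g = lambda u: u ** (1 - tau) * (1 - np.exp(-u))

lhs = (dblquad(f, B**2, 1, lambda x: B / x, lambda x: 1 / B)[0]
       + dblquad(f, 1, 1 / B, lambda x: B / x, lambda x: 1 / (B * x))[0])
main = np.log(1 / B**2) * quad(g, B, 1 / B)[0]
rest = quad(lambda u: np.log(1 / u) * g(u), B, 1 / B)[0]
print(lhs, main + rest)  # the values should agree; `main` dominates for small B
```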

To prove the continuity around \(k=n^{(\tau -2)/(\tau -1)}\), we note that the proofs of Lemmas 7, 9 and 10 for Range II still hold if we assume that \(k\ge an^{(\tau -2)/(\tau -1)}\) for some \(a>0\) instead of \(k=\varOmega (n^{(\tau -2)/(\tau -1)})\). Thus, we can also apply the result of Range II in Theorem 1 to \(k=an^{(\tau -2)/(\tau -1)}\), which yields

$$\begin{aligned} c(k)=n^{2-\tau }\mu ^{-\tau }C^2A\left( \frac{3-\tau }{\tau -1} \log (n)+\log (a^{-2})\right) (1+o_{\scriptscriptstyle \mathbb {P}}(1)). \end{aligned}$$
(6.19)
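
Here we used that, for \(k=an^{(\tau -2)/(\tau -1)}\),

$$\begin{aligned} \log \left( \frac{n}{k^2}\right) =\log (n)-2\log (a)-\frac{2(\tau -2)}{\tau -1}\log (n)=\frac{3-\tau }{\tau -1}\log (n)+\log (a^{-2}). \end{aligned}$$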

This agrees with the \(k\mapsto c(k)\) curve in Range I when n grows large. \(\square \)