
Decreased resilience in power grids under dynamically induced vulnerabilities


Published 15 October 2020 © 2020 The Author(s). Published by IOP Publishing Ltd on behalf of the Institute of Physics and Deutsche Physikalische Gesellschaft
Citation: C C Galindo-González et al 2020 New J. Phys. 22 103033. DOI: 10.1088/1367-2630/abb962


Abstract

In this paper, a methodology inspired by bond and site percolation methods is applied to estimate the resilience of power grids against failures. Our approach includes vulnerability measures with both dynamical and structural foundations, in an attempt to gain further insight into the relationship between topology and dynamics in the second-order Kuramoto model on complex networks. As test cases for numerical simulations, we use the real-world topology of the Colombian power transmission system, as well as randomly generated networks with spatial embedding. It is observed that, by focusing attacks on these dynamical vulnerabilities, the power grid generally becomes more prone to reach a state of total blackout, which, in the case of node removal procedures, is conditioned by the homogeneity of the power distribution in the network.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Emergence of synchronization in systems of agents coupled through local interactions is a widely studied phenomenon with multiple applications in physics, biology and social sciences [1–3]. In particular, the optimal functioning of a power grid, which represents one of the most complex interacting systems in engineering, depends highly on its ability to maintain synchronous operation over time despite external disturbances. Losing that synchrony, even locally, may lead to cascading failures and complete blackout of the network [4–6]. Two main concerns have motivated the discussion around the dynamical analysis and design of power grids in recent years: the strong economic and social impact that a power outage can cause in a highly electricity-dependent modern society [7], and the slow transition to renewable energy sources that is being promoted all around the world [8, 9], since renewable and small power producers have negligible inertia, which imposes instability risks on power grids [10]. Multiple models have been proposed for the analysis of synchronization in power grids, and they differ mainly in the way loads, power generators and transmission lines are modeled [11]. Probably the most common is the synchronous motor model, where both generators and consumers are represented by second-order synchronous machines [12]. This model is essentially equivalent to the celebrated Kuramoto model when inertia terms are considered [13–15]. Self-synchronization in this model occurs when the interactions between connected oscillators are sufficiently strong to overcome the dissimilarity in the ensemble [4, 16–18].

Recently, the scientific community has devoted considerable effort to determining the topological or structural features of those interactions that can enhance or undermine the non-linear stability of a power grid [19], that is, its capability to reject finite-size disturbances, a matter of paramount importance in the design of future smart grids. This has been approached, for instance, by means of energy barrier functions [20] and the basin stability concept [21–26]. Some interesting findings that deserve to be mentioned, as they provide great insight into the relationship between topology and dynamical stability, include: the poor basin stability usually detected in dead-tree arrangements [21], the strong stability found in triangle-shaped motifs [27], the enhanced non-linear stability achieved by increasing global redundancy in the connections [28] or by adding small cyclic motifs [20], the lower basin stability exhibited in general by high-power generator nodes [29], the variation in transient excursions for finite-size disturbances depending on resistance centrality metrics [30, 31], the diminished size of the basin of attraction as well as the appearance of solitary synchronous states when losses are considered in the network [32], and the interplay between inertia and the correlation time of stochastic noise that yields an escape from the basin of attraction [33]. Other studies have focused on estimating the resilience of the complex network against cascading failures [6, 34, 35], the robustness of the transient behavior after localized disturbances [36] and their diffusion through the interconnected system [10], the optimal design of weights and frequencies [37], as well as the identification of key elements in the network that require a control action [38].

In the present work, a percolation-inspired method is presented to evaluate the resilience of a power grid against random and focal disturbances. By defining a vulnerability measure in terms of the dynamical properties of both nodes and edges, it is found that attacks focused on the weakest components of the network can provoke cascading failures that lead to total blackout faster than random attacks in some specific cases. Note that, in the following, the term total blackout refers to the situation in which no operational combination of at least one generator and one consumer exists in the network. The test cases chosen for this study include randomly generated networks with spatial embedding that emulate properties of real-world power grids, as well as the particular case of the Colombian power transmission network. It is also found that, by attacking the most dynamically loaded transmission lines in the graph, the power system reaches blackout faster than with any other bond-percolation criterion, whereas for the site-percolation methods the fastest blackout is produced by attacking the most dynamically vulnerable nodes, but only if the distribution of power generation and consumption is not homogeneous, which is often the case in real-world architectures.

A similar work can be found in [39], where the authors use centrality measures, namely the node degree, the clustering coefficient and the betweenness centrality, to estimate the vulnerability of a node, and then proceed to remove the weakest nodes to observe the evolution of the giant component as a function of the number of remaining nodes in the whole graph. Moreover, in [40], an elimination process of transmission lines based on betweenness centrality is compared with a random selection. This structural approach to quantifying node vulnerability will also be used in this work to compare the results of removal methods based on dynamical measurements. Other relevant works should also be mentioned; for instance, based on the statistical physics description of networks, some basic percolation properties of different networks have been analysed using generating functions [41], the lifetime and reliability of networks have been estimated by assuming certain probability distributions for node failures [42], analytical performance measures have been developed and tested over structural and dynamical perturbations of the complex system [43], and the propagation of cascading failures through the power transmission or communication architecture of smart grids has also been modeled as a random percolation process [44] or as a continuous phase transition [45].

The rest of this paper is organized as follows: section 2 introduces the dynamical model of a power grid and the structure of the specific test cases, as well as some theoretical tools that will be applied in further analysis of synchronization and vulnerability measurements. Section 3 presents an algorithm for resilience assessment of power grids and results regarding line and node removal procedures over synthetic power grids and the Colombian transmission network. Finally, some concluding remarks are discussed in section 4.

2. Model and methods

In this paper, the framework of the synchronous motor representation is used to model the general structure and dynamics of power grids [11, 12], since it has been employed consistently and successfully in recent years, as mentioned in the previous section. This approach is described in the following. For the reader's convenience, a list of the main mathematical symbols used in the text, along with their corresponding definitions, is presented in appendix A.

2.1. Power grid model

As a simplified model of a real-world power grid, consider a system of N interacting synchronous machines arranged in a connected graph G(ϑ, ɛ), with a set of nodes ϑ = {1, 2, ..., N} and a set of edges ɛ ⊆ ϑ × ϑ, such that the number of edges is |ɛ| = M. Nodes can be labeled as generator machines (which supply energy to the grid) or consumer machines (which demand energy from the grid); thus ϑ = ϑg ∪ ϑc, where ϑg is the set of generators and ϑc the set of consumers.

Let the dynamical state of each machine be represented by its phase angle ϕi and its phase velocity ${\dot {\phi }}_{i}{:=}\frac{\mathrm{d}{\phi }_{i}}{\mathrm{d}t}$, with i ∈ ϑ. Under appropriate operation of the power grid, it is expected that every machine will be rotating at some reference frequency Ω = 2πf (where, by convention, f is either 50 Hz or 60 Hz), so let θi(t) = ϕi(t) − Ωt be the phase deviation of machine i with respect to the reference angle Ωt. Then the dynamical behavior of θi and its velocity ${\dot {\theta }}_{i}$ can be described by the well-known second-order Kuramoto model for coupled oscillators given by [12, 16]:

Equation (1)

$$\ddot{\theta}_{i}=P_{i}-\alpha_{i}\,\dot{\theta}_{i}+K\sum_{j=1}^{N}a_{ij}\,\sin\left(\theta_{j}-\theta_{i}\right),\qquad i\in\vartheta$$

where Pi is equal to the injected (drained) power at the ith node up to a scaling factor, and it is positive (negative) for generators (consumers); αi is related to the damping coefficient of the ith machine; aij are the elements of the adjacency matrix A, so that aij = 1 if nodes i and j are connected (that is, (i, j) ∈ ɛ) and aij = 0 otherwise. The constant K is the maximum power transfer capacity between any pair of connected nodes, which depends on the impedance of the transmission lines connecting them. This equation is derived when the ohmic losses in transmission lines are neglected, an equal impedance level is assumed for every transmission line in the network, and the deviations of the phase velocity at any node from the reference Ω are small, that is, $\vert {\dot {\theta }}_{i}\vert \ll {\Omega}$, ∀ i ∈ ϑ [12, 16].
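To make the dynamics of equation (1) concrete, the following Python sketch integrates the second-order Kuramoto model for a small toy network with SciPy. It is only an illustration of the model, not the code used for the results of this paper; the four-node ring, its powers and the value of K are arbitrary choices, while α = 0.1 matches the damping used later in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy four-node ring: two generators (P = +1) and two consumers (P = -1).
# The topology and K are illustrative; only alpha = 0.1 follows the text.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = np.array([1.0, -1.0, 1.0, -1.0])        # balanced: sum(P) = 0
alpha, K = 0.1, 5.0
N = len(P)

def kuramoto2(t, x):
    """Right-hand side of equation (1); the state is x = [theta, dtheta]."""
    theta, dtheta = x[:N], x[N:]
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return np.concatenate([dtheta, P - alpha * dtheta + K * coupling])

x0 = np.concatenate([np.random.uniform(-np.pi, np.pi, N), np.zeros(N)])
sol = solve_ivp(kuramoto2, (0.0, 100.0), x0, max_step=0.05)
print("final phase velocities:", sol.y[N:, -1])   # close to 0 if synchronized
```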

The dynamical system described by equation (1) is said to be in a phase synchronized state if θi(t) = θj(t), ∀ i, j ∈ ϑ, and in a frequency synchronized state if ${\dot{\theta}}_{i}(t)={\dot{\theta}}_{j}(t)={\omega }_{\mathrm{s}}$, ∀ i, j ∈ ϑ. Since we are interested in the dynamics around the synchronization frequency Ω, it can be assumed without loss of generality that ωs = 0, which amounts to considering the dynamics in the co-rotating reference frame Ωt (see [4]). In the following, the topology of G(ϑ, ɛ) is taken from the Colombian National Transmission System [46], a power grid composed of 158 edges and 102 nodes (28 of them generators and 74 consumers). The power supplied by generators is set to Pk = 1.0, ∀ k ∈ ϑg, and the power drained by consumers is uniformly distributed such that the power balance condition in the network is fulfilled, that is:

Equation (2)

$$\sum_{i\in\vartheta}P_{i}=\sum_{k\in\vartheta_{\mathrm{g}}}P_{k}+\sum_{j\in\vartheta_{\mathrm{c}}}P_{j}=0$$

This condition is a requirement for the existence of a locally stable equilibrium point in the system [16]. Finally, the damping and the maximum power transfer are fixed to αi = 0.1 and K = 12.0, respectively. Figure 1(a) shows the topology of the network with the tree-like node classification introduced in [23]. Panels (b)–(e) present the distributions of some classic centrality measures of the graph: node degree, clustering coefficient [47], node betweenness and edge betweenness centrality [48]. Panels (f) and (g) display the behavior of θ(t) and $\dot{\theta}(t)$ for every oscillator when the system converges from an unsynchronized state to a frequency synchronized regime given the parameters above. Aiming for a generalization of the results, random test cases were also used and compared with the Colombian test case just described; to do so, synthetic power grids were generated with the algorithm proposed in [49], which is especially useful as it creates spatially embedded networks that emulate the connectivity distributions of real-world power grids. The synthetic networks were constructed by initializing a tree-like base architecture and then following a growth procedure conditioned by the optimization of a redundancy-cost function that depends on both the geographic distance and the number of paths separating each pair of nodes in the graph. For a detailed description of the algorithm, refer to appendix B. The parameters used for the growth model were chosen to match those used in [23], as presented in table 1. Note that the final size of the network is fixed to N = 102 so that it matches the size of the Colombian power grid; nevertheless, the geographic positions of the nodes are chosen uniformly at random and an equal number of generators and consumers is placed to reduce heterogeneity in the samples (thus, Pk = 1.0, ∀ k ∈ ϑg and Pj = −1.0, ∀ j ∈ ϑc).
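As a minimal illustration of the balance condition of equation (2), the snippet below builds a nodal power vector in the way described above: every generator supplies Pk = 1.0 and the total demand is spread uniformly over the consumers. The helper function is hypothetical, not part of the original study.

```python
import numpy as np

def balanced_powers(n_gen, n_con, p_gen=1.0):
    """Powers with P = +p_gen for generators and a uniform negative value for
    consumers, so that equation (2), sum(P) = 0, holds."""
    p_con = -p_gen * n_gen / n_con
    return np.concatenate([np.full(n_gen, p_gen), np.full(n_con, p_con)])

P = balanced_powers(28, 74)     # Colombian case: 28 generators, 74 consumers
assert abs(P.sum()) < 1e-9      # power balance of equation (2)
```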


Figure 1. Description of the test case based on the Colombian power grid: (a) topology of the network with N = 102 and M = 158, with Ng = 28 generator nodes (cross symbols) and Nc = 74 consumer nodes (circle symbols). (b)–(e) Distributions of the node degree dk, clustering coefficient ck, node betweenness centrality bk and edge betweenness centrality ek. (f)–(g) Time traces of the phases and phase velocities of each oscillator when the grid converges to a frequency synchronized state (red lines: generators, blue lines: consumers).


Table 1. Parameters for the construction of synthetic power grids.

Parameter    Value
N0: initial number of nodes, N0 ⩾ 1.    1
N: final number of nodes in the network, N > N0.    102
y = {y1, y2, ..., yN}: geographical locations of the nodes.    yk ∼ U[0, 1]
p: probability of constructing additional redundancy links attached to new nodes (0 ⩽ p ⩽ 1).    $\frac{1}{5}$
q: probability of constructing additional redundancy links between existing nodes (0 ⩽ q ⩽ 1).    $\frac{3}{10}$
s: probability of splitting an existing line (0 ⩽ s ⩽ 1).    $\frac{1}{10}$
u: cost-vs-redundancy trade-off parameter for equation (B.1).    $\frac{1}{3}$
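For readers who want to experiment with spatially embedded test grids, the sketch below builds a deliberately simplified stand-in for the growth model of [49]: a Euclidean minimum spanning tree over uniformly random positions plus a few redundancy links added with probability p. It does not implement the full growth procedure or the redundancy-cost optimization of equation (B.1), so it should be taken only as a rough illustration of the kind of topology involved.

```python
import numpy as np
import networkx as nx

def simple_spatial_grid(n=102, p=0.2, seed=None):
    """Simplified spatially embedded test grid: a minimum spanning tree over
    random positions (y_k ~ U[0, 1]) plus sparse redundancy links. This is NOT
    the algorithm of appendix B, only a rough stand-in."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, size=(n, 2))
    full = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            full.add_edge(i, j, weight=float(np.linalg.norm(pos[i] - pos[j])))
    G = nx.minimum_spanning_tree(full)            # tree-like backbone
    for i in range(n):                            # occasional redundancy links
        if rng.random() < p:
            order = np.argsort(np.linalg.norm(pos - pos[i], axis=1))
            for j in order[1:4]:                  # try the three nearest nodes
                if not G.has_edge(i, int(j)):
                    G.add_edge(i, int(j))
                    break
    return G, pos
```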

2.2. Synchronization measures

A helpful indicator that quantifies the degree of synchronization is the complex-valued order parameter r defined as [50]:

Equation (3)

$$r(t)=\frac{1}{N}\sum_{j=1}^{N}\mathrm{e}^{\mathrm{i}\theta_{j}(t)}=\vert r(t)\vert\,\mathrm{e}^{\mathrm{i}\Psi(t)}$$

Here, Ψ(t) is the average phase of the oscillators at time t. If all phases θj are identical, the magnitude of the order parameter is |r(t)| = 1. Conversely, if they are evenly distributed around the unit circle, |r(t)| = 0 and the system is said to be desynchronized. Intermediate values of |r| correspond to partially synchronized states. Additionally, in the case of power grids in which equation (2) holds, when the system is synchronized the average phase is Ψ(t) = 0 and the real part of the order parameter Re[r] = 1, while it oscillates around zero otherwise. While the dependence of the order parameter on time is useful in transient analysis, we will focus on the steady-state behavior, so we can define the average order parameter in steady state as [16]:

Equation (4)

Another measure of frequency synchronization in power grids is the squared average rotational speed $v(t)^{2}$, defined as [16]:

Equation (5)

$$v(t)^{2}=\frac{1}{N}\sum_{i=1}^{N}\dot{\theta}_{i}(t)^{2}$$

and the associated steady state value of the speed v:

Equation (6)
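The synchronization measures above can be evaluated directly from a simulated trajectory. The helper functions below are a minimal sketch: the order parameter follows the standard Kuramoto definition, the squared speed is taken as the mean of the squared phase velocities, and the steady-state quantities are approximated by time averages over the final part of the run; these are reasonable readings of equations (3)–(6) rather than verified reproductions of them.

```python
import numpy as np

def order_parameter(theta):
    """Complex order parameter r(t) (equation (3)); theta has shape (N, T)."""
    return np.exp(1j * theta).mean(axis=0)

def squared_speed(dtheta):
    """Average squared rotational speed v(t)^2 (equation (5)), taken here as
    the mean of the squared phase velocities over all machines."""
    return (dtheta ** 2).mean(axis=0)

def steady_state_average(x, discard=0.5):
    """Crude stand-in for the long-time averages of equations (4) and (6):
    average over the last part of the trajectory only."""
    return x[int(len(x) * discard):].mean()

# Usage with the `sol` object of the earlier integration sketch:
#   theta, dtheta = sol.y[:N], sol.y[N:]
#   r_ss = steady_state_average(np.real(order_parameter(theta)))
#   v_ss = np.sqrt(steady_state_average(squared_speed(dtheta)))
```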

2.3. Edge vulnerability

Following the results in [4], it can be proven that, by applying a small-angle approximation, the phases at the frequency synchronized equilibrium point, denoted by ${\theta }^{*}\in {\mathbb{R}}^{N}$, can be estimated as:

Equation (7)

$$\theta^{*}\approx\frac{1}{K}\,L^{\dagger}P$$

where $P=\left({P}_{1},\dots ,{P}_{N}\right)^{\mathrm{T}}$ is the vector of nodal powers and $L^{\dagger}$ is the Moore–Penrose pseudo-inverse of the Laplacian matrix, which is computed through the adjacency matrix as $L=\mathrm{diag}\left({\sum }_{j=1}^{N}{a}_{ij}\right)-A$. Let us now define the oriented incidence matrix $B=\left\{{b}_{ij}\right\}\in {\mathbb{R}}^{N{\times}M}$, with:

Equation (8)

$$b_{ij}=\begin{cases}+1 & \text{if node } i \text{ is the source of edge } j,\\ -1 & \text{if node } i \text{ is the target of edge } j,\\ 0 & \text{otherwise}\end{cases}$$

Note that although G is an undirected graph, a direction for each edge can be assumed without loss of generality for the following analysis, allowing for this definition of B. Under these circumstances, the steady state phase difference between any pair of connected nodes (i, j) ∈ ɛ can be approximated by:

Equation (9)

$$\Delta_{\theta}=B^{\mathrm{T}}\theta^{*}\approx\frac{1}{K}\,B^{\mathrm{T}}L^{\dagger}P$$

with ${{\Delta}}_{\theta }\in {\mathbb{R}}^{M}$. Then, the system described by equation (1) is guaranteed to have a unique and stable solution with synchronized frequencies and cohesive phases if the following sufficient condition on Δθ holds:

Equation (10)

$$\Vert\Delta_{\theta}\Vert_{\infty}\leqslant\sin\left(\gamma\right)$$

with 0 ⩽ γ < π/2. In the limit γ → π/2, equation (10) simply reduces to $\Vert\Delta_{\theta}\Vert_{\infty}{< }1$. In other words, equation (10) states that, in order to achieve synchronization, the largest difference between the phases of connected pairs in the network has to be smaller than $\frac{\pi }{2}$ [4]. This rather simple condition readily connects topological and dynamical features of the network. Although condition (10) is only sufficient, it offers a rapid way to assess synchronization without relying on the computationally expensive time evolution of a large number of phase oscillators. Moreover, equation (10) provides an approximation to the critical value Kc of the coupling strength when substituting from (9):

Equation (11)

$$K_{\mathrm{c}}\approx\Vert B^{\mathrm{T}}L^{\dagger}P\Vert_{\infty}$$

Figure 2(a) shows the behavior of the steady state order parameter Re[r] as a function of the coupling strength K. As seen in the figure and its insets, the real part of the order parameter oscillates around zero when the coupling strength is small, and therefore its average value is zero. At a critical coupling strength Kc ≈ 1.51, the system achieves a state of phase cohesiveness characterized by Re[r] ≈ 1, indicating a synchronized regime. The same transition can be observed by looking at the average velocity in figure 2(b). In particular, it can be seen that, before the synchronization transition, a finite, non-zero steady-state value of the angular velocity is obtained, indicating that the phases of the oscillators increase constantly and no equilibrium is reached. After the transition, v(t) abruptly decreases to zero, meaning that the phases of the oscillators remain in a steady state.


Figure 2. Synchronization for the Colombian power grid: (a) average steady state value of the real part of the order parameter as a function of the coupling strength K. Left (right) inset shows the time trace of Re[r(t)] before (after) the critical coupling. (b) Average rotational speed as a function of K. As in (a), insets show time traces before and after the critical coupling. In both cases, the vertical dashed line depicts the value of Kc calculated with equation (11). It is observed that, for this particular test case, a minimum power transfer capacity of roughly 1.51 is required for the system to operate in a synchronized regime.


Both panels also show the accuracy of the sufficient condition (11), where the critical value is indicated by the vertical dashed line, in good agreement with the numerical value of the transition. This evidence supports the use of condition (10) as a synchronization requisite in the following analysis. Note that the value K = 12.0 chosen before is justified here, since K > Kc and therefore a synchronized state can exist.

As a measure of the vulnerability of each transmission line, we take advantage of equation (9): the edge (i, j) is considered weak when |Δθ(i, j)| is close to 1, and more stable when it is close to 0, with respect to the synchronization limit imposed by (10). This approach is equivalent to that presented in [51, 52], where the weakness of an edge is measured as the power transferred between the linked nodes (the line load), except that here the steady state of the system is approximated directly from equation (9).
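The line loads and the critical coupling estimate can be computed with basic linear algebra. The sketch below assumes the reading Δθ = BᵀL†P/K and Kc = ‖BᵀL†P‖∞ described above; the function name and the NetworkX-based implementation are illustrative, not taken from the paper.

```python
import numpy as np
import networkx as nx

def edge_loads_and_kc(G, P, K):
    """Approximate line loads (equation (9)) and critical coupling
    (equation (11)) from the Laplacian pseudo-inverse and the oriented
    incidence matrix, under the reading Delta_theta = B^T L^+ P / K."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    B = nx.incidence_matrix(G, oriented=True).toarray()       # shape (N, M)
    flows = B.T @ (np.linalg.pinv(L) @ np.asarray(P, dtype=float))
    delta_theta = flows / K                                    # one entry per edge
    K_c = float(np.max(np.abs(flows)))                         # gamma -> pi/2 limit
    return dict(zip(G.edges(), delta_theta)), K_c

# The weakest line is the edge whose |Delta_theta| is closest to 1:
#   loads, K_c = edge_loads_and_kc(G, P, K=12.0)
#   weakest_edge = max(loads, key=lambda e: abs(loads[e]))
```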

2.4. Nodal vulnerability

Consider a multistable dynamical system that evolves in the state space X and let X* ⊂ X be the set of desirable attracting states. The basin of attraction of X*, denoted by β, is defined as the set of all initial conditions x(0) that asymptotically converge to X* [21]. For the purpose of this work, X* will be the frequency synchronization manifold of the system (1). Similarly, the likelihood of a randomly perturbed trajectory to return to β, known as the basin stability SB, can be defined as [21, 22]:

Equation (12)

$$S_{\mathrm{B}}=\int_{X}\Gamma_{\beta}(x)\,\rho(x)\,\mathrm{d}x$$

where Γβ(x) is a function that indicates whether a state x belongs to the basin of attraction of X* or not, that is

Equation (13)

$$\Gamma_{\beta}(x)=\begin{cases}1 & \text{if } x\in\beta,\\ 0 & \text{otherwise}\end{cases}$$

The function ρ(x) is the density of states to which the system can be pushed by some non-local perturbation, such that ∫X ρ(x)dx = 1. Note that, under this definition, the basin stability is a number between 0 and 1, with SB = 0 when the synchronous state is unstable and SB = 1 when it is globally stable.

Monte Carlo simulations can be performed in order to estimate SB by randomly sampling a sufficiently large number IC of disturbed states (following a certain distribution ρ(x)) from a representative subspace Π ⊂ X, and using them as the initial conditions for time-evolution simulations. For this work, ρ(x) is chosen as a uniform distribution on the restricted subspace Π, that is:

Equation (14)

$$\rho(x)=\begin{cases}\dfrac{1}{\mathrm{Vol}(\Pi)} & \text{if } x\in\Pi,\\ 0 & \text{otherwise}\end{cases}$$

Finally, the number FC of simulated trajectories found to approach the attractor asymptotically is assumed to be proportional to the volume of the basin of attraction (restricted to the subspace Π), so SB is approximated by ${S}_{\mathrm{B}}\approx \frac{{F}_{C}}{{I}_{C}}$. Now, making use of the concept of basin stability, we can define an indicator of the robustness of a node against large perturbations applied to it by means of the single-node basin stability (SNBS) [22, 24], given by:

Equation (15)

$$S_{\mathrm{B}}^{(i)}\approx\frac{F_{C}^{(i)}}{I_{C}}$$

where the IC initial conditions are drawn from perturbations applied to node i. As an illustration, figure 3 shows the phase plane of some node i subject to large disturbances; its SNBS is, loosely speaking, the number of crosses divided by the total number of data points. Throughout this work, the disturbances for each node are drawn from the subspace Π = [−π, π] × [−100, 100].
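A direct Monte Carlo estimate of the SNBS can be organized as in the sketch below, which perturbs only the phase and phase velocity of the chosen node inside Π = [−π, π] × [−100, 100] and counts the trajectories that relax back to the synchronous state. The right-hand side `rhs`, the synchronized fixed point `x_sync` and the convergence tolerance are assumptions of this illustration; in particular, deciding convergence from the final phase velocities is a simplification.

```python
import numpy as np
from scipy.integrate import solve_ivp

def single_node_basin_stability(rhs, x_sync, node, n_nodes, ic=500,
                                box=(np.pi, 100.0), t_end=100.0, tol=0.1):
    """Monte Carlo estimate of equation (15): fraction of perturbed initial
    conditions at `node` that return to the frequency synchronized state."""
    rng = np.random.default_rng()
    hits = 0
    for _ in range(ic):
        x0 = x_sync.copy()
        x0[node] = rng.uniform(-box[0], box[0])              # phase disturbance
        x0[n_nodes + node] = rng.uniform(-box[1], box[1])    # frequency disturbance
        sol = solve_ivp(rhs, (0.0, t_end), x0, max_step=0.05)
        if np.all(np.abs(sol.y[n_nodes:, -1]) < tol):        # back to sync?
            hits += 1
    return hits / ic                                         # F_C / I_C

# Example with the `kuramoto2` right-hand side of the earlier sketch and a
# synchronized state x_sync obtained from a long relaxation run:
#   s_b_0 = single_node_basin_stability(kuramoto2, x_sync, node=0, n_nodes=N, ic=50)
```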


Figure 3. Phase space of a node i of the Colombian grid when random disturbances are applied to θi or ${\dot {\theta }}_{i}$. Green crosses are initial conditions that return to the frequency synchronized state, while the blue circles are out of the basin of attraction. The red dashed line denotes the subspace Π from which disturbances are chosen. This diagram was created by sampling IC = 500 points.


2.5. Structural vulnerability

Equations (9) and (15) already define a dynamical vulnerability for edges and nodes, respectively. Furthermore, a vulnerability based merely on the connectivity features of the complex network can also be considered, since it is intuitive that removing a highly connected node or a critically located edge could rapidly diminish power grid stability and performance. In that regard, let us define structural vulnerability in terms of the following centrality measures:

  • Degree centrality ${d}_{k}^{\left(i\right)}$: number of nodes to which node i is connected, which in an undirected graph is simply given by the sum of the elements in the ith row of the adjacency matrix A [53]. Here, a normalized version of this metric is used, so it is divided by the maximum possible degree that a node can have in the network, which is evidently N − 1 provided that self-loops are not allowed. It is thus computed as:
    Equation (16): ${d}_{k}^{(i)}=\frac{1}{N-1}{\sum }_{j=1}^{N}{a}_{ij}$
  • Clustering coefficient ${c}_{k}^{\left(i\right)}$: Number of triangles in which node i is involved, normalized by the maximum possible amount of such triangles [47]. A triangle involves node i when two neighbors of such node, namely j and k, are also directly connected, so more formally, if (i, j), (j, k), (i, k) ∈ ɛ, then i, j and k form a triangle. Then the clustering coefficient is simply computed as:
    Equation (17): ${c}_{k}^{(i)}=\frac{2{W}_{i}}{{\hat{d}}_{k}^{(i)}\left({\hat{d}}_{k}^{(i)}-1\right)}$
    where Wi is the total number of triangles involving node i and ${\hat{d}}_{k}^{\left(i\right)}={\sum }_{j}^{N}{a}_{ij}$ is the degree centrality without the normalization factor.
  • Node betweenness centrality ${b}_{k}^{\left(i\right)}$: Sum of the shortest paths between every pair of nodes (s, t) in the network that pass through node i. For clarity, let us define a path that joins s and t as a sequence of edges Hm = {(s, v1), (v1, v2), (v2, v3)..., (vj , t)}, (vi , vk ) ∈ ɛ where s and t are the beginning and end nodes, respectively. Then, by considering the length of the path as the number of edges contained in Hm , the shortest path is the sequence Hm that yields the minimum possible length. Node betweenness centrality is then given by [48, 54]:
    Equation (18): ${b}_{k}^{(i)}={\sum }_{s\ne i\ne t}\frac{{\lambda }_{s,t}(i)}{{\lambda }_{s,t}}$
    where λs,t is the number of shortest paths between nodes s and t and λs,t (i) is the number of those paths that pass through node i.
  • Edge betweenness centrality ${e}_{k}^{\left(l\right)}$: In the same fashion as bk measures centrality for a node, ek does it for an edge; it is simply defined as the number of shortest paths in the network that include edge l:
    Equation (19): ${e}_{k}^{(l)}={\sum }_{s\ne t}\frac{{\lambda }_{s,t}(l)}{{\lambda }_{s,t}}$

So in this work, a node will be considered structurally weak if dk , ck or bk is high. Similarly, an edge will be considered structurally weak if its ek is high.
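All four structural indicators are available in standard graph libraries; the short sketch below collects them with NetworkX, whose built-in normalizations are close to, but not necessarily identical with, the definitions of equations (16)–(19).

```python
import networkx as nx

def structural_vulnerabilities(G):
    """Centrality-based vulnerability indicators in the spirit of
    equations (16)-(19), using NetworkX's default normalizations."""
    d_k = nx.degree_centrality(G)               # degree / (N - 1)
    c_k = nx.clustering(G)                      # local clustering coefficient
    b_k = nx.betweenness_centrality(G)          # node betweenness
    e_k = nx.edge_betweenness_centrality(G)     # edge betweenness
    return d_k, c_k, b_k, e_k

# A node is structurally weak if d_k, c_k or b_k is high; an edge if e_k is high:
#   d_k, c_k, b_k, e_k = structural_vulnerabilities(G)
#   weakest_node = max(b_k, key=b_k.get)
```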

3. Resilience measures

In order to assess the resilience of a power grid, we propose a percolation-based algorithm. To do so, we follow the capability of the network to maintain its functioning under different node removal (site percolation) and edge removal (bond percolation) rules. In particular, the algorithm described below removes a node or an edge from the graph at each iteration by applying either a random attack, where the element to be removed is chosen uniformly at random, or a focal attack, where the most vulnerable node or edge in the graph is removed first. Here, the most vulnerable node is the one with the lowest basin stability (${S}_{\mathrm{B}}^{\left(i\right)}$) or the highest degree centrality (dk ), clustering coefficient (ck ) or node betweenness centrality (bk ), while the most vulnerable edge is the one for which the phase difference (Δθ (i, j)) or the edge betweenness centrality (ek ) is the highest.

The percolation-based algorithm then proceeds as follows:

  • (1)  
    Remove a node (edge) by applying a random or a focal attack. Note that, in a focal attack, multiple nodes (edges) could share the same vulnerability level; in that case, the attacked element is chosen uniformly at random among them.
  • (2)  
    Compute the disconnected clusters existing in the new graph.
  • (3)  
    For each cluster, check whether it contains at least one generator node and one consumer node; if it does not, remove the whole cluster from the graph.
  • (4)  
    For each remaining cluster the power has to be compensated, so, as a way to simulate real-world power control (where the supplied power level is increased or reduced according to the demand of the consumers), the power of all generators in the cluster is modified uniformly such that equation (2) remains true.
  • (5)  
    For each remaining cluster, check whether it is synchronizable, that is, whether its topology satisfies the synchronization condition K > Kc, where Kc is computed with the approximation (11). If it does not, the whole cluster is removed from the graph.
  • (6)  
    Measure the needed observables and return to step (1) until the whole power grid has gone to blackout, that is, until no operational cluster exists in the network.

An important remark must be made about the algorithm, regarding node removal under the focal attack methodology based on SNBS. Computing this vulnerability index is computationally expensive, since it requires a large number of time-domain simulations in order to estimate equation (15) reasonably well. Thus, in the following, only 10 realizations are considered for averaging purposes in the SB-focused percolation, while 500 realizations are performed for each of the other cases. It is also worth noting that the fine structure of the basin of attraction is not captured by this approach, which could pose numerical problems in dynamical systems that possess either fractal basin boundaries or riddled or intermingled basins [55].

For this elimination procedure, two main phase transitions will be analysed:

  • Transition T1: This transition occurs at the iteration when the graph is divided into multiple functional subgraphs for the first time (turning from a connected graph into an unconnected graph).
  • Transition T2: The final iteration of the algorithm, when the power grid goes to complete blackout.
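A compact Python sketch of the node-removal variant of this procedure is given below. It follows steps (1)–(6) and records the giant-component fraction together with the iterations of the two transitions; the attack policy `pick_target`, the power-rebalancing rule and the use of the Kc estimate of equation (11) as the synchronizability test are illustrative choices, not the exact implementation behind the results reported here.

```python
import numpy as np
import networkx as nx

def percolation_run(G, P, K, pick_target):
    """Node-removal resilience test following steps (1)-(6). `P` maps each node
    to its power (positive for generators) and `pick_target(G, P)` returns the
    node to attack (random or focal). All details are illustrative."""
    G, P = G.copy(), dict(P)
    n0 = G.number_of_nodes()
    giant, t1, iteration = [], None, 0
    while G.number_of_nodes() > 0:
        iteration += 1
        G.remove_node(pick_target(G, P))                           # steps (1)-(2)
        for comp in [set(c) for c in nx.connected_components(G)]:
            gens = [i for i in comp if P[i] > 0]
            cons = [i for i in comp if P[i] < 0]
            if not gens or not cons:                               # step (3)
                G.remove_nodes_from(comp)
                continue
            share = -sum(P[j] for j in cons) / len(gens)           # step (4)
            for i in gens:
                P[i] = share
            sub = G.subgraph(comp)
            L_pinv = np.linalg.pinv(nx.laplacian_matrix(sub).toarray().astype(float))
            B = nx.incidence_matrix(sub, oriented=True).toarray()
            p_vec = np.array([P[i] for i in sub.nodes()])
            if np.max(np.abs(B.T @ (L_pinv @ p_vec))) > K:         # step (5): K < Kc
                G.remove_nodes_from(comp)
        sizes = [len(c) for c in nx.connected_components(G)]       # step (6)
        giant.append(max(sizes, default=0) / n0)
        if t1 is None and len(sizes) > 1:
            t1 = iteration                                         # transition T1
    return giant, t1, iteration                                    # iteration = T2

# Random attack example:
#   import random
#   giant, t1, t2 = percolation_run(G, powers, K=12.0,
#                                   pick_target=lambda G, P: random.choice(list(G)))
```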

Figure 4 illustrates the algorithm for nodal resilience measurement under focal attacks. The transition from frame (a) to frame (b) shows the formation of two independent clusters after removing one node. Note that in frame (c) there is only one generator in the northern cluster and, interestingly, it has a very poor basin stability level, so when that node is removed from (c) to (d), the whole northern cluster goes to blackout. A different realization is presented in panels (e)–(h); note that from (g) to (h) it suffices to remove one node to bring the bigger cluster to blackout, even though connected subgraphs with both generators and consumers would still remain. This occurs because the remaining cluster would have a topology that does not satisfy the synchronization condition (10), and thus it vanishes. Similarly, figure 5 shows the algorithm in action under Δθ-focused attacks. In the transition from panel (b) to panel (c), it can be appreciated that, after suppressing a very vulnerable transmission line, a significant cluster of isolated consumers on the east side of the network vanishes.


Figure 4. Node removal algorithm under the SB-focused attacking scheme. Circle nodes indicate consumers while crosses represent generators. Node color is mapped with the ${S}_{\mathrm{B}}^{\left(i\right)}$ (computed with IC = 50) and the size of the node is proportional to the power Pi . The red arrow indicates the node that is going to be suppressed. Also, the inset shows the histogram of the basin stability for the whole network. One realization is presented in panels (a)–(d) and a different one in panels (e)–(h) in order to visualize two different mechanisms of cluster vanishing.


Figure 5. Edge removal algorithm under the focal attacking scheme. Edge color is mapped with the Δθ of the nodes it connects. The red arrow indicates the edge that is going to be suppressed. Also, the inset shows the histogram of the phase differences for the whole network.


3.1. Node removal resilience for the Colombian power grid

Figure 6(a) shows the fraction of nodes in the largest cluster of the network (also called the giant component of the graph and denoted by C) subject to node removal with focal and random attacking policies. Similarly, figure 6(b) presents the fraction of removed nodes after each iteration with respect to the original size (named the removed component and denoted by $\frac{{N}_{\left(0\right)}-{N}_{\left(I\right)}}{{N}_{\left(0\right)}}$, where N(0) and N(I) are the original size of G and its size after I removal iterations, respectively), which makes it possible to detect the phase transition T2, corresponding to the iteration at which $\frac{{N}_{\left(0\right)}-{N}_{\left(I\right)}}{{N}_{\left(0\right)}}=1$. It is thus observed that the grid goes to complete blackout around the 30th removal iteration for the SB-focused attack and around the 62nd iteration for random attacks. For dk and bk the transition occurs around the 40th iteration, while the random and ck cases are the least effective removal techniques, since this transition occurs around the 60th attack.


Figure 6. Evolution in the structure of the Colombian power grid subject to the nodal elimination algorithm. The shading accounts for the standard deviation calculated over 10 realizations with IC = 50 for the SB-focused methodology and 500 realizations for the other attacking schemes. Although an abrupt reduction of the giant component is observed for the bk and dk methods on the first stages of the algorithm, the fastest transition to blackout is obtained for the SB method.


The fact that attacking the most vulnerable nodes based on SB produces a faster transition to the total blackout state of the network than just removing nodes at random implies that there is indeed a relationship between the stability of a node against dynamical disturbances (related to the SNBS) and the structural connectivity and resilience to cascading failures of the graph.

Furthermore, by focusing the attacks on nodes with the highest bk or dk, the size of the giant component drops drastically during the first elimination iterations and then decreases at a slower rate, reaching total blackout at a number of attacks intermediate between the SB and random methodologies. ck, on the other hand, produces a slower reduction of the graph size and a total blackout requiring the most iterations on average. This is related to the tree-like structure of the power grid that can be seen in figure 1. In this kind of topology, the nodes with the highest degree and betweenness are located in the bulk regions of the graph; therefore, their removal likely divides the giant component instantly into multiple smaller clusters, as also confirmed by figure 6(d), which shows that these methodologies produce by far the highest number of operational clusters (a quantity denoted by |SG |) during the whole procedure. Correspondingly, a high clustering coefficient of a node implies, by definition, that the neighbors it is connected to are also connected among themselves; hence, removing it will not divide the graph and generate new clusters. Eventually, after multiple node eliminations, the redundant links between said neighbors will be suppressed and ck will become uniform over the whole system. That is why this attacking strategy behaves roughly in the same way as the random one after a few iterations.

Figure 6(c) plots the size of the second largest component of the graph, C2, which reveals the location of the phase transition T1 as the first iteration at which C2 jumps from 0 to a non-zero value. This figure shows, as expected, that the dk and bk attacking policies generate within a couple of iterations a new cluster composed of a significant number of nodes (peaking at roughly 40% of the nodes), while the random and ck focal methods generate smaller subgraphs and the transition requires, on average, a higher attacking effort. Interestingly, for the SB case the transition to an unconnected graph composed of multiple functioning clusters takes several iterations (at least 15) and occurs closer to the total-blackout transition (about 30 iterations) than for any other methodology.

This behavior is further complemented by figure 6(d): the SB focal strategy generates very few clusters compared with the others. This is likely related, again, to the tree-like topology of the Colombian power grid and to the phenomenon observed in [22], where nodes located on dead-tree arrangements (more specifically, inner tree nodes [23]) were found to exhibit, in general, a lower SNBS than the rest of the nodes. As these are among the nodes most likely to be removed first under SB focal attacks, and as by definition they do not have a significant number of connections, the graph does not segregate into multiple clusters that easily.

Table 2 summarizes the average value of the transitions for the node removal algorithm applied to the Colombian power grid.

Table 2. Average value of T1 and T2 for the nodal resilience testing algorithm in the Colombian grid.

      Random node    bk       ck       dk      ${S}_{\mathrm{B}}^{k}$
T1    14.41          3.00     28.77    3.18    20.60
T2    61.92          40.85    65.03    40.54   29.80

3.2. Node removal resilience for synthetic power grids

To test the generality of the results shown for the Colombian power grid, the process is repeated for a set of synthetic power grids, generated as described in section 2.1. Results on the nodal resilience for these synthetic power grids are presented in figure 7. It is surprising that, besides the wider standard deviation in every trace (which was to be expected considering the randomness in the generation of the complex network), every other behavior related to the elimination based on bk, ck, dk and even the random attacking method is identical to that observed for the specific case of the Colombian power grid in figure 6. It is also worth mentioning that the same experiment was repeated for synthetic power grids with the same power distribution as the Colombian power grid (that is, 28 generators and 74 consumers), and the results were virtually identical to those presented in figure 6; they are therefore not shown here, yet this allows us to safely conclude that the only difference between the specific Colombian case and the random networks is the power distribution. This confirms that the algorithm proposed in [49] is indeed capable of reproducing real-world examples of networked power systems, but it also reveals that a homogeneous power distribution did not have, on average, a significant impact on the transition to blackout for these specific attacking schemes. This is not surprising, as bk, ck and dk are structure-based vulnerabilities, completely independent of the dynamical ones. The SB-focused attacking, however, shows a very different evolution, similar to that of the random attacks, except in figure 7(d), where SB attacks keep producing the lowest number of clusters, which is related to the same explanation given before: the nodes located at dead-tree arrangements usually possess low SNBS levels, and their removal hardly generates new functional clusters. For reference, the values of the transitions T1 and T2 for the synthetic power grids are presented in table 3.


Figure 7. Evolution in the structure of the randomly grown synthetic power grids subject to the nodal elimination algorithm. The shading accounts for the standard deviation calculated over 10 realizations with IC = 50 for the SB-focused methodology and 500 realizations for the other attacking schemes. The resilience against SB-based attacks shows an enhancement as compared with the Colombian grid case in figure 6, while the behavior for other attacking schemes is conserved.


Table 3. Average value of T1 and T2 for the resilience testing algorithm in the synthetic power grids.

      Random node    bk       ck       dk      ${S}_{\mathrm{B}}^{k}$
T1    10.20          2.35     19.58    2.50    27.00
T2    61.62          39.91    63.22    39.02   54.30

It seems quite natural to think that, as the random networks have more generator nodes, randomly distributed over the graph, the iteration at which the transition to blackout occurs would, for any of the attacking methods, be higher than in the Colombian case, since after each removal step the probability of obtaining non-operational clusters would be reduced; yet this only happens for the SB-focused method, as can also be confirmed by comparing tables 2 and 3. The next section focuses the discussion on the effects of this power heterogeneity on the observed resilience profiles.

3.3. Influence of power heterogeneity

Figure 8 shows the number of generators Ng and consumers Nc after each iteration for both the Colombian power grid and the synthetic power grids, and it allows one to see how the node elimination is working. For the random synthetic power grids, on average, consumers and generators are eliminated at the same rate; this suggests that every new cluster created in the network preserves a comparable number of both kinds of nodes, and that this homogeneity makes the total destruction of such a cluster less likely. This also explains why, on average, T2 for the SB method is close to that of the random case (table 3), and it hints that the main reason why clusters are removed from the graph is that the synchronization condition is not satisfied (step (5) of the resilience testing algorithm), rather than the absence of generators or consumers (step (3)). Nevertheless, for the Colombian power grid the decreasing slope of Ng is steeper than that of Nc during the first iterations. Since the initial number of generators is lower than the initial number of consumers in this test case, after some of the nodes in the former group have been removed from the graph, the resulting clusters are more likely to consist of consumers isolated without a connected generator; subsequently, these clusters go to blackout. This implies that the main reason why clusters are suppressed from the Colombian grid is the absence of generators in them (step (3) of the algorithm), and it explains why T2 for the SB-focused method is the lowest value in table 2.


Figure 8. Number of consumers Nc and generators Ng existing in the graph at each iteration of the SB-focused removal algorithm. The shading accounts for the standard deviation over the 10 random realizations with IC = 50. The slope of the plot shows that the initial iterations in the Colombian grid case remove mainly generators, while for the synthetic grids the elimination is uniform during the whole procedure, hinting at an influence of the power distribution on the resilience of the network.


In addition, as mentioned in the algorithm, the power supplied by each generator is modified after each percolation step in order to maintain the power balance in the grid. Figure 9 shows the SNBS of each generator node in the complex network as a function of P. For both study cases, the Colombian power grid and the randomly grown synthetic power grids, it can be observed that, as the power of the node increases, its stability diminishes; this comes from the fact that P acts as a measure of the stress on the node, that is, the energy demand that it needs to satisfy: having to provide energy for a larger number of consumer nodes, a generator becomes more susceptible to random perturbations, as indicated by the reduction in ${S}_{\mathrm{B}}^{\left(i\right)}$. A Pearson correlation test between ${S}_{\mathrm{B}}^{\left(i\right)}$ and P(i) yields a correlation coefficient σ and a p-value pval of (σ, pval) = (−0.605, 0) for the Colombian grid and (σ, pval) = (−0.221, 0) for the synthetic grids, confirming the existence of a negative relationship between both variables, in accordance with the general behavior observed in the recent results of [29]. This result, as well as the previous observation on the resilience to blackout for SB, can be contrasted with the results presented in [16], where it was found that, by distributing the demand among multiple small power generators instead of a few big power plants, the value of the critical coupling Kc usually diminishes, thus favoring synchronization; however, for disturbances applied to power consumption, the most robust performance was found when there is a mixture of both small and big power generators.
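The reported correlation test can be reproduced with SciPy once the SNBS and power of each generator have been collected; the arrays in the snippet below are purely hypothetical placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: SNBS of five generators and the power each one supplies.
s_b   = np.array([0.90, 0.70, 0.55, 0.40, 0.30])
power = np.array([1.00, 1.50, 2.00, 2.50, 3.00])

sigma, p_val = pearsonr(s_b, power)        # correlation coefficient and p-value
print(f"sigma = {sigma:.3f}, p-value = {p_val:.3g}")   # sigma < 0 expected
```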


Figure 9. SNBS of each generator node as a function of the power it supplies to the network. Red dashed line marks the initial set power P = 1.0. It is shown that nodes that provide more power to the grid have a lower basin stability in general.


3.4. Distribution of vulnerable nodes

Let us now consider weak nodes as those that exhibit an SNBS lower than 0.4; the fraction of weak nodes can then be expressed as:

Equation (20)

$$\eta_{\mathrm{w}}=\frac{N_{\mathrm{w}}(I)}{N(I)}$$

where Nw(I) and N(I) are, respectively, the number of weak nodes and the total number of nodes in the graph at iteration I of the resilience testing algorithm. Figure 10 shows the behavior of ηw after each iteration of the SB-focused attacking algorithm. From this figure, a rather counterintuitive observation emerges: in general, for both test cases, removing the most vulnerable nodes in the network causes an overall reduction of the stability of the whole power grid. This implies that, although attacking nodes with a poor SNBS does not ensure a faster transition to blackout than other strategies, it does make the whole grid more prone to dynamical disturbances in oscillation frequency and phase angle. It is worth noting that the results in figure 10 are statistically less reliable in the last iterations, because each power grid generally reaches total blackout at a different iteration; thus, the last iterations do not necessarily average over 10 realizations as the first ones do.


Figure 10. Percentage of weak nodes in the network as a function of the percolation iteration. A general behavior is observed in both subplots: the non-linear stability of the whole system decreases after each iteration, and this is not affected by the original distribution of power generation.


Another important analysis that can be performed is to observe how the critical iterations at which the phase transitions occur are affected by the parameters of the system. In that regard, for the node resilience procedure over synthetic power grids, T1 and T2 were calculated for variations of the coupling strength K, as shown in figure 11, where each data point is the average of 10 simulations for the SNBS method and 500 simulations for the other methods. It should be noted that, by decreasing K below 10, T1 for the SNBS case becomes lower than for the random case on average, hinting that the effect whereby weaker nodes are located mainly at inner-tree nodes is more noticeable when K is somewhat large, while for lower K the weak nodes are spread all around the graph. This is understandable since, in general, the basin stability of a node can be expected to increase when the coupling of the network is enhanced (this behavior, however, is not monotonic for all nodes, as pointed out in [56, 57]). For the random and the topology-dependent elimination methods, no tendency is observed for T1 when varying K.


Figure 11. Phase transitions in the node percolation process for synthetic power grids as a function of K. Insets present the standard deviation sx for each computed data point. Higher values of K reduce the influence of dynamics on the elimination process, thus the transition to blackout reaches a plateau for the topology-based attacks.


T2, on the other hand, shows an evident increase for every removal method when the coupling strength is increased. As shown in figure 11(b), the random and topology-based elimination methods seem to reach a plateau when K is sufficiently high. This reduction in the variability of T2 arises because, for a higher K, the network becomes more robust against losing clusters due to lack of synchronization (step (5) of the algorithm), and thus clusters are removed merely by the lack of generators or consumers (step (3)) or by single elimination (steps (1) and (2)). The number of attacks required to reach blackout in either of those two cases depends mostly on the initial construction of the network, and therefore it should not vary significantly among multiple experiments. The SNBS removal depends on the system dynamics, so its curve does not flatten like the others over the observed range; however, it is expected to behave in the same way for a significantly higher K, when the set point of any node becomes globally stable in Π.

3.5. Edge removal resilience

Continuing now with the analysis of the edge removal procedure, figure 12 presents the evolution of the Colombian graph under this methodology. The difference between the random and focal schemes is remarkable in these simulations: by focusing attacks on transmission lines with higher loads Δθ (i, j), the size of the power grid goes down rapidly and reaches total blackout faster than with the other alternatives.


Figure 12. Evolution in the structure of the Colombian power grid subject to the edge elimination algorithm. The shading accounts for the standard deviation calculated over 500 realizations. Elimination based on the dynamical behavior of the network presents a faster transition to total blackout.


By focusing on lines with a higher ek, the sparsity of the graph grows rapidly and many small, isolated but functional clusters form, as can be appreciated in figure 12(d), where the average number of clusters for ek can even be twice that of the random case and almost three times that of the Δθ (i, j) case. Qualitatively, exactly the same behavior is observed for the synthetic power grids in figure 13, and in fact the Colombian grid results lie inside the standard deviation of the experiments for synthetic grids. The giant-component reduction profile observed for the random and ek methods is in agreement with the results presented in [40]. The values found for T1 and T2 in these simulations are summarized in table 4.


Figure 13. Evolution in the structure of the randomly grown synthetic power grids subject to the edge elimination algorithm. The shading accounts for the standard deviation calculated over 500 realizations. Remarkably, the same line resilience profiles observed on the Colombian network on figure 12 are present for these randomly grown graphs.


Table 4. Average value of T1 and T2 for the edge resilience testing algorithm.

      Colombian grid                      Synthetic grids
      Random     ek         Δθ           Random     ek         Δθ
T1    29.12      3.00       7.00         17.48      4.61       4.14
T2    131.14     117.91     77.00        125.71     116.83     87.33

Finally, for the line resilience procedure over synthetic power grids, the behavior of the transitions is shown in figure 14. For T1, in panel (a), no clear tendency is observed, besides the expected fact that this transition is, in general, higher for the random method than for the other elimination strategies. Panel (b), however, shows something interesting for T2: for any value of K, attacking the most vulnerable edges based on the dynamics (Δθ) produces the fastest transition to blackout in the synthetic power grids, while attacking randomly produces, in general, the slowest transition. Furthermore, this transition also reaches a plateau when K is sufficiently large, for any elimination method, as observed in the node attacking results. Evidently, the dynamics-based elimination with Δθ flattens so quickly (K = 6) because increasing K strengthens the transmission lines and thus directly affects the line loads Δθ (i, j), as opposed to the node elimination based on SNBS, where the basin stability is known to be affected by K, but not in such a trivial and immediate way [56, 57].


Figure 14. Phase transitions in the edge percolation process for synthetic power grids as a function of K. Insets present the standard deviation sx for each computed data point. The same behavior observed in the node removal process is present here; a sufficiently high coupling strength reduces the dynamical effects and thus the transition to blackout flattens around a certain value.


4. Concluding remarks

In this paper, an algorithm was proposed to test the resilience of power grids against failures. Our approach differs from others found in the literature in that it takes into consideration dynamical stability against random finite-size disturbances to describe nodal vulnerability, as well as the linear stability of phase differences to compute edge vulnerability. Sequential elimination of elements in the network based on these dynamical vulnerabilities was also compared with the same procedure applied to structural vulnerabilities, which were defined in terms of connectivity patterns in the topology of the graph. This allowed us to extract some interesting findings, which are discussed in the following.

Table A1. List of the main symbols used in the text.

Symbol Meaning
G(ϑ, ɛ) Graph composed of a set ϑ of nodes and a set ɛ of edges
N Number of nodes in a graph
M Number of edges in a graph
ϑ A set of N nodes
ɛ A set of M edges
ϕ Phase angle of a machine
Ω Global reference frequency of the power grid
θ Phase deviation of a machine with respect to the reference frame
P Power supplied or drained by a node
K Maximum power transfer capacity of each transmission line
Kc Critical coupling strength of the graph
α Damping coefficient of a node
A Adjacency matrix of a graph
aij Coefficients of the matrix A
ωs Synchronization frequency of the system
r Order parameter for synchronization
r Steady state order parameter
v Steady state average speed of the system
L Laplacian matrix of the graph
B Incidence matrix of the graph
bij Coefficients of the matrix B
Δθ (i, j) Approximation of the steady state phase difference on the edge (i, j)
X State space of the dynamical system
β Basin of attraction of the system
SB Basin stability
ρ(x) Density of states of disturbances
Π Representative subspace of the dynamical system
FC Number of initial conditions that belong to β
IC Number of initial conditions tested for the computation of SB
dk Degree centrality
ck Clustering coefficient
bk Node betweenness centrality
ek Edge betweenness centrality
T1 Transition from a connected graph to an unconnected graph
T2 Transition to total blackout
C Giant component of the network
C2 Second largest component of the network
$\frac{\left({N}_{\left(0\right)}-{N}_{\left(I\right)}\right)}{{N}_{\left(0\right)}}$ Removed component of the network
|SG | Number of operational clusters in the system
σ Pearson correlation coefficient
pval P-value associated with the Pearson correlation
Nw Number of weak nodes in the graph
ηw Fraction of weak nodes in the graph

First of all, focusing nodal attacks on redundantly connected nodes, such as those with the highest clustering coefficient, is clearly the least effective strategy, being comparable with random removal; the blackout transition T2 for both is always the highest, meaning that they require more attacks in order to destroy the whole network. Similarly, the giant component of the graph shrinks at the slowest rate, since dividing the graph into multiple operational clusters is unlikely until the redundant links are removed from the network. Eventually, both methods become essentially the same, once the clustering coefficient is uniform over the whole grid.

The fastest way to reduce the size of the giant component of the network is, as expected, to attack the most central nodes, either those with the highest degree or betweenness; when these bulk nodes are removed, the graph rapidly segregates into multiple perfectly operational clusters. In other words, the transition to an unconnected graph T1 is always the lowest for these methods.

For the case of node elimination based on basin stability, it was found that the transition to blackout T2 in the randomly grown synthetic power grids did not differ significantly from that found for the random removal approach. However, in the specific case of the Colombian power grid, this method proved to be the most effective at bringing the whole network to total blackout. Essentially, the only key difference between the two test cases is the power distribution: for the synthetic grids the power was initially distributed uniformly, with an equal number of generators and consumers, while in the Colombian case only 27% of the initial nodes are generators and the rest are consumers. The power homogeneity of the synthetic grids thus enhances the resilience of the graph, and this was confirmed by the observation that, for the Colombian grid, the first nodes to be removed are generators, leading to higher stress in the remaining power plants: a few generators have to increase drastically the power they produce in order to balance the demand. In addition, this power increase reduces their basin stability, that is, it makes them more prone to random dynamical disturbances in oscillation frequency and phase angle. This negative correlation between the power parameter of a node and its non-linear stability is supported by the findings in [29]. Interestingly, the number of weak nodes in the power grid (nodes with a low basin stability level) was found to increase during the node removal procedure focused on basin stability, for both test cases. This implies that this attacking strategy is useful for undermining the general dynamical stability of the power grid and making it more prone to random disturbances, even though it does not necessarily propagate structural failures to larger portions of the network (that depends on the number of generators, as already explained).

Regarding the line resilience testing, it was found that for both the Colombian power grid and the synthetic power grids, attacking highly central edges, such as those with a high edge betweenness centrality, causes the graph to segregate rapidly into multiple clusters, with an abrupt reduction of the giant component. The transition to blackout, however, is not faster than for random elimination; this behaviour is, as expected, very similar to that found for node elimination based on node betweenness. Elimination based on the dynamical measure of phase difference (Δθ) was found to be the most efficient at bringing the whole network to blackout, and remarkably, this behaviour is observed for every tested value of the maximum transfer capacity of the transmission lines.
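
A minimal sketch of how transmission lines could be ranked by the phase-difference measure is given below. This is our reading of a Δθ-based ordering, not the authors' code: the synchronous phases theta_sync are assumed to come from a previous simulation, integer node labels are assumed, and the capacity threshold is an illustrative parameter rather than a value from the paper.

```python
import numpy as np

def rank_edges_by_phase_difference(G, theta_sync):
    """Rank lines by |sin(theta_i - theta_j)| at the synchronous state,
    i.e. by how close each line operates to its maximum transfer capacity."""
    load = {(i, j): abs(np.sin(theta_sync[i] - theta_sync[j]))
            for i, j in G.edges()}
    return sorted(load, key=load.get, reverse=True)

def overloaded_edges(G, theta_sync, capacity=0.9):
    """Lines whose relative load exceeds an (illustrative) capacity threshold."""
    return [e for e in G.edges()
            if abs(np.sin(theta_sync[e[0]] - theta_sync[e[1]])) > capacity]
```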

Concretely, the main contribution of this paper is the finding of a reduced resilience to successive elimination of components in the second-order Kuramoto model when the removal steps are focused on the most dynamically loaded elements of the ensemble. For the case of node elimination, it was also found that a homogeneous power distribution can enhance this resilience to the point that the network becomes as robust to focused attacks as it is to random attacks. This observation could, in principle, guide the design of future smart grids by recommending the deployment of many small renewable power generators instead of a few concentrated high-power producers [16]. Similarly, connectivity patterns have to be considered when designing the topology of the grid, looking for an architecture that maximizes the volume of the basin of attraction [20, 21, 27–33], minimizes the power load on transmission lines [51, 52] and allows for the existence of self-sustained islands in case of localized branch failures [40]; those initial design stages could be supported by the resilience algorithm presented and tested in this paper.
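
At the purely structural level, the resilience-testing procedure can be summarised by the following sketch: nodes are removed one at a time according to a chosen ranking while the giant component, the number of operational clusters and the removed fraction are tracked. This is a simplified illustration under our own naming conventions; the dynamical bookkeeping of the full algorithm (resynchronisation, power rebalancing, overload cascades) is deliberately omitted and only indicated by a placeholder argument.

```python
import networkx as nx

def resilience_curve(G, order, dynamical_checks=None):
    """Remove nodes following `order` and track structural indicators.
    `dynamical_checks` is a placeholder for the full algorithm's dynamical
    updates (resynchronization, power rebalancing, overload cascades)."""
    H = G.copy()
    n0 = G.number_of_nodes()
    history = []
    for step, node in enumerate(order, start=1):
        H.remove_node(node)
        if H.number_of_nodes() == 0:
            history.append({"removed": 1.0, "giant": 0.0, "clusters": 0})
            break
        comps = sorted(nx.connected_components(H), key=len, reverse=True)
        history.append({
            "removed": step / n0,          # (N(0) - N(I)) / N(0)
            "giant": len(comps[0]) / n0,   # giant component C
            "clusters": len(comps),        # number of operational clusters |SG|
        })
        if dynamical_checks is not None:
            H = dynamical_checks(H)        # dynamical vulnerability updates
    return history
```

In this purely structural reading, T1 corresponds to the first step at which the number of clusters exceeds one, and T2 to the step at which the giant component vanishes; in the full algorithm, blackout is additionally conditioned on the dynamical state of the surviving clusters.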

Some questions remain open for future research, such as further exploring the underlying causes of the reduction in non-linear stability due to heterogeneous power distribution. Similarly, a more detailed study of the phase transitions T1 and T2, as well as of the influence of structural parameters on these critical points, could point in the right direction for designing the connectivity and parameter distribution of a graph such that resilience against cascading failures is optimized. More dynamical measures of vulnerability could also be incorporated into the removal algorithm: for instance, the diffusivity of perturbations [10], which could indicate regions of faster failure propagation in the grid; the resistance centrality, which combines information about the topology of connections and the dynamics around the equilibrium point in a computationally efficient way [30, 31]; the recovery time, which identifies the nodes with the slowest average response to disturbances [25]; or the multi-node basin stability [24] computed over nearest neighbours, which would include information about the most vulnerable clusters rather than individual elements. The latter would, however, impose a higher computational effort on a task that is already computationally expensive. Likewise, analysing these dynamical and topological vulnerabilities within the cosusceptibility framework seems very promising for the identification of clusters and graph motifs that undermine the performance of the power grid [58]. Future work should also validate these resilience metrics in more realistic power grid models, considering, for instance, voltage dynamics [59], ohmic losses [32] and real-world parameters for machines and transmission lines.

Acknowledgments

We would like to express our gratitude to Jorge Fernando Gutiérrez (deceased in September 2019) for his valuable ideas and advice at the initial stages of this research. We are also thankful to Arthur Montanari, Luis A Aguirre, Laura Lotero, Fabiola Angulo and Juan Gabriel Restrepo for fruitful discussions, and to Reinel Tabares and Rafael Ruiz for their technical assistance. C C Galindo-González received financial support from Universidad Nacional de Colombia—Manizales, Dirección de Investigación y Extensión (DIMA) and Ministerio de Ciencia, Tecnología e Innovación (Minciencias) under the contracts FP44842-052-2016 and FP44842-505-2017. D Angulo-García was financially supported by 'Vicerrectoria de Investigaciones—Universidad de Cartagena' through the projects 085-2018 and 011-2018. Special thanks to Hypernet Labs and the Galileo project for supplying a substantial part of the computational resources required for this work [60].

Appendix A.: List of symbols

See table A1.

Appendix B.: Random growth model for power grids

This section describes the algorithm proposed in [49], which generates complex networks that incorporate a spatial embedding (accounting for the geographic position of the nodes) and exhibit structural properties similar to those found in real-world power grids. For this algorithm, a redundancy cost function, subject to optimization, is defined as:

$f\left(i,j,G\right)=\frac{{\left({d}_{G}\left(i,j\right)+1\right)}^{u}}{{d}_{S}\left({y}_{i},{y}_{j}\right)}$          (B.1)

where dG (i, j) is the length of the shortest path between nodes i and j in the graph G, dS (yi , yj ) is the spatial (Euclidean) distance between the node positions yi and yj , and u is a parameter that controls the cost-vs-redundancy trade-off. After defining the parameters from table 1, the construction algorithm proceeds in two phases, initialization and growth, which are performed as follows (a minimal code sketch of the trade-off function is given right after this paragraph, and a sketch of the full construction is given at the end of this appendix):
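
The following minimal Python sketch evaluates the trade-off function, assuming the reconstructed form of equation (B.1) above; the helper name f_redundancy and the exponent value u = 0.2 are illustrative choices of ours, not values from the paper.

```python
import math
import networkx as nx

def f_redundancy(G, pos, i, j, u=0.2):
    """Trade-off function of equation (B.1): pairs that are far apart in the
    graph but geographically close give the largest redundancy gain per cost."""
    d_graph = nx.shortest_path_length(G, i, j)   # d_G(i, j)
    d_space = math.dist(pos[i], pos[j])          # d_S(y_i, y_j)
    return (d_graph + 1) ** u / d_space
```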

B.1. Initialization

  • (a)  
    Construct a minimum spanning tree over the initial N0 nodes, choosing the edges to be built between them so as to minimize the Euclidean distance dS (yi , yj ).
  • (b)  
    Let m = ⌊N0(1 − s)(p + q)⌋. For each h ∈ {1, 2, ..., m}, find the pair of nodes (i, j) that are not yet linked and for which f(i, j, G) is maximal, and connect them.

B.2. Growth

  • (a)  
    For each h ∈ {1, 2, ..., N − N0} do:
    • 1.  
      Add a new node h to the graph.
    • 2.  
      Generate a random number ζ1. If ζ1 ⩽ 1 − s then:
      • (i)  
        Set the position of the new node yh .
      • (ii)  
        Add a link between the new node h and the geographically closest node j, that is, the node j for which dS (yh , yj ) is minimal.
      • (iii)  
        Generate a random number ζ2; if ζ2 < p, add a link between the new node h and the node l for which f(h, l, G) is maximal.
      • (iv)  
        Generate a random number ζ3; if ζ3 < q, draw a node h' from the network uniformly at random. Then find the node l' that is not yet linked to h' and for which f(h', l', G) is maximal, and connect them.
    • 3.  
      Otherwise (if ζ1 > 1 − s), select an edge (i, j) of the network uniformly at random. The new geographic location of node h is given by ${y}_{h}=({y}_{i}+{y}_{j})/2$; the edge (i, j) is removed from the graph, and the edges (i, h) and (h, j) are added.
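
To make the construction concrete, the following is a minimal, self-contained Python sketch of the growth model using networkx. It is an illustration under stated assumptions rather than the reference implementation of [49]: node positions are drawn uniformly in the unit square, the default parameter values are arbitrary, the function names are ours, and the trade-off function follows the reconstruction of equation (B.1) given above.

```python
import math
import random
import networkx as nx

def grow_power_grid(N=100, N0=10, p=0.2, q=0.3, s=0.1, u=0.2, seed=0):
    """Sketch of the random growth model of this appendix (parameters are
    illustrative, not those used in the simulations of the paper)."""
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(N0)}

    def d_space(i, j):                               # d_S(y_i, y_j)
        return math.dist(pos[i], pos[j])

    def f(G, i, j):                                  # equation (B.1)
        return (nx.shortest_path_length(G, i, j) + 1) ** u / d_space(i, j)

    def connect_best_pair(G, left, right):
        """Add the not-yet-linked edge (i, j), i in `left`, j in `right`,
        that maximizes f(i, j, G)."""
        best, best_val = None, -1.0
        for i in left:
            for j in right:
                if i != j and not G.has_edge(i, j):
                    val = f(G, i, j)
                    if val > best_val:
                        best, best_val = (i, j), val
        if best is not None:
            G.add_edge(*best)

    # B.1 Initialization: Euclidean minimum spanning tree plus m redundant links
    K = nx.complete_graph(N0)
    for i, j in K.edges():
        K[i][j]["weight"] = d_space(i, j)
    G = nx.minimum_spanning_tree(K)
    m = int(N0 * (1 - s) * (p + q))
    for _ in range(m):
        connect_best_pair(G, list(G.nodes()), list(G.nodes()))

    # B.2 Growth: add the remaining N - N0 nodes one at a time
    for h in range(N0, N):
        if rng.random() <= 1 - s:
            pos[h] = (rng.random(), rng.random())
            nearest = min(G.nodes(), key=lambda n: d_space(h, n))
            G.add_edge(h, nearest)                    # attach to closest node
            if rng.random() < p:                      # redundancy link from h
                connect_best_pair(G, [h], list(G.nodes()))
            if rng.random() < q:                      # redundancy link elsewhere
                h_prime = rng.choice(list(G.nodes()))
                connect_best_pair(G, [h_prime], list(G.nodes()))
        else:                                         # split a random edge
            i, j = rng.choice(list(G.edges()))
            pos[h] = ((pos[i][0] + pos[j][0]) / 2, (pos[i][1] + pos[j][1]) / 2)
            G.remove_edge(i, j)
            G.add_edge(i, h)
            G.add_edge(h, j)
    return G, pos

# example usage
G, pos = grow_power_grid(N=60, N0=8)
print(G.number_of_nodes(), G.number_of_edges())
```

Under the assumed form of equation (B.1), pairs of nodes that are distant along the grid but geographically close are preferentially linked, which is what produces the redundant, spatially embedded structure described above.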
