1 Introduction

Small computing devices that form networks, be they commercial wireless sensor networks or communicating devices in the Internet of Things, are becoming increasingly common. However, to communicate efficiently, these devices have to employ methods to use the shared communication medium while avoiding conflicting messages on this medium, in particular in the form of collisions. Collisions occur if two or more devices simultaneously try to access the communication medium, and often result in neither message being delivered. Several protocols to organise shared medium access have been developed and analysed [1, 51]. These protocols typically identify a common time frame and divide this frame into slots associated with the nodes. Thus every node has an allocated time slot that it may use to send its messages onto the shared medium.

Such an approach introduces the need for a common clock between the nodes, i.e., they need to synchronise. A valuable approach to achieve synchrony of nodes is the implementation of biologically-inspired pulse-coupled oscillators (PCOs) [38]. A network of PCOs synchronises in the following way: all oscillators have a similar clock cycle at the end of which they fire. That is, they transmit a broadcast message which is received by all oscillators in their communication range. These oscillators then adjust their own position within their clock cycle according to a phase response function. Depending on the concrete implementation, they may move their current position within the clock cycle closer to its end, or closer to its start.

Most analyses of the synchronisation behaviour of PCOs are concerned with continuous clock cycles, i.e., where clocks take real values from the interval [0, 1]. However, the smaller devices get, the more important it is to save memory and computing time for such low-level functionality. Even a single floating point number may need too much memory, compared to an implementation with, for example, a four-bit vector. Hence, in previous work, we chose to analyse the behaviour of discrete time PCOs [24].

In contrast to continuous time PCOs, networks of discrete time PCOs are not always guaranteed to synchronise. Instead, whether they synchronise depends on the type of coupling between the oscillators and their common phase response function. We analysed the behaviour of such networks for different parameters via model-checking, both qualitatively, to determine for which parameters the networks synchronise, and quantitatively, to determine how long they need to achieve a synchronised state and how much energy they use to do so [25]. For large numbers of oscillators, for example in wireless sensor networks, the well-known state-space explosion problem of the model-checking approach becomes critical [14]. We therefore formalised a network of oscillators as a population model [20], which exploits the behavioural homogeneity of the nodes to encode the global state efficiently. This allows the network size to be increased beyond what would be feasible when distinguishing each node, with the restriction that only fully-connected networks, where all sensors can communicate with all other sensors, can be modelled. However, the construction of a population model from a given oscillator specification is not straightforward, and in particular, it is not obvious whether the constructed population model correctly reflects the behaviour of the oscillators. This results in an ‘abstraction gap’: after abstracting into populations, how can we be sure that the abstraction process was correct, and that the results of verifying population models actually hold for the concrete models on which they are based?

In this paper, we remedy this lack of certainty, by proving the correspondence of our population model with an explicit formalisation of the oscillators. To that end, we present the concrete oscillator model as well as its formalisation as a discrete-time Markov chain. Subsequently we describe the corresponding population model, and show how we can, in addition to the abstraction created by the populations, reduce the state space even further to facilitate the analysis. Finally, we prove that the behaviour of a network of concrete oscillators and the population model are probabilistically weakly bisimilar. We cannot prove a one-to-one correspondence, since the concrete model implicitly includes the possibility of identifying individual oscillators, which is exactly what the population model abstracts from. In particular, our contributions are:

  • the definition of a model for fully-connected networks of pulse-coupled oscillators (Sect. 4),

  • the detailed definition of a population model (Sect. 5), based on previous work [24, 25],

  • a way of reducing the size of the formal population models (Sect. 5.5),

  • a proof that these two models are probabilistically weakly bisimilar (Theorem 3), and

  • an evaluation of synchronisation behaviour using probabilistic model-checking (Sect. 7).

The paper is structured as follows. In Sect. 2, we review a selection of related work, both on models of pulse-coupled oscillators and on approaches for their verification. After introducing preliminary notions in Sect. 3, we present the concrete model of single oscillators as a discrete-time Markov chain in Sect. 4. The abstract model in terms of population models, together with proofs about its properties, is contained in Sect. 5. In Sect. 6, we prove the correspondence between the two types of models. The experimental evaluation of synchronisation behaviour is presented in Sect. 7, and Sect. 8 concludes the paper.

2 Related work

The canonical model of pulse-coupled oscillators, and their synchronisation, was formulated by Mirollo and Strogatz [38], and based on Peskin’s model of a cardiac pacemaker [43]. Here the progression of an oscillator through its oscillation cycle is given by a real value in the interval [0, 1]. Mirollo and Strogatz proved that with a convex phase response function, a network of mutually coupled oscillators always converges, i.e., their position within the oscillation cycle eventually coincides. Such a model has been shown to be applicable to the clock synchronisation of wireless sensor nodes [47] and swarms of robots [41].

Synchronisation algorithms based on pulse-coupled oscillators are often beneficial in unreliable, decentralised networks, where other synchronisation algorithms are not appropriate. For example, the Flooding Time Synchronisation Protocol (FTSP) [36] requires the use of an arbitrary root node. In situations where the root becomes unavailable due to communication failure or power outage, FTSP will have to assign another root node. When implemented on unreliable, decentralised networks, FTSP may spend considerable resources on repeatedly assigning root nodes, which may slow down or prevent synchronisation [11]. Other algorithms such as the Berkeley algorithm [26] and Cristian’s algorithm [16] require the use of centralised time servers, which is problematic for unreliable, decentralised networks.

Several decentralised network algorithms for synchronisation are based on pulse-coupled oscillators [47, 50]. For example, the Gradient Time Synchronisation Protocol (GTSP) by Sommer and Wattenhofer [46] achieves synchronisation by having nodes send their current clock value to their neighbours. Each node then calculates the average of the clock values received and its own clock value. This process is then repeated to maintain synchronisation. Another approach to synchronisation, the Pulse-Coupled Oscillator Protocol [39], makes use of refractory periods after sending messages containing time information. During the refractory period, no more messages are sent, which reduces network bandwidth and energy usage. A similar approach is used in the FiGo protocol [11], which combines biologically inspired synchronisation with information distribution via gossiping. All of these approaches use different phase response functions.

In general, synchronisation algorithms based on PCOs are more robust for unreliable networks, as they do not require centralised nodes and can work with only partial network connectivity [11]. They are particularly useful for battery-powered nodes in wireless networks, as the node can be placed in a low-power mode during the refractory period, thus reducing energy usage. (The clock keeps ticking even in low-power mode, thanks to the design of microcontrollers such as the ‘Atmel ATmega128L’ [6].)

Synchronisation of clocks for networks of nodes has been investigated from different perspectives. Heidarian et al. [29] analysed the behaviour of a synchronisation protocol based on time allocation slots for up to four nodes and different topologies, from fully connected networks to line topologies. They modelled the protocol as timed automata [3], and used the model-checker UPPAAL [10] to examine its worst-case behaviour. Their model is based on continuous time, and in particular, they did not model pulse-coupled oscillators.

Bartocci et al. [8] described pulse-coupled oscillators as extended timed automata with suitable semantics to model their peculiarities. They defined a dedicated logic to analyse the behaviour of a network of such automata along traces, and used a pacemaker as a case study to verify the eventual synchronisation and the time needed to achieve this.

Our models and methods are different from all of these approaches. A key difference in our work from that of others analysing PCOs is that we define the oscillation cycle to consist of discrete steps. To the best of our knowledge, with the exception of the paper by Webster et al. (including some of the authors of this paper) [49] and our previous work [24, 25], there is no other work concerned with PCOs with discrete oscillation cycles. Furthermore, all of these approaches distinguish between single oscillators in the network, while the properties of interest relate to global behaviour. This discrepancy between local modelling and global analysis restricts the size of networks that can be analysed, due to the state-space explosion. To extend the size of analysable networks, we employ population models, a counting-abstraction of such networks [19]. Instead of identifying each oscillator on its own, we record how many oscillators are in each step of the oscillation cycle. This significantly reduces the state-space by exploiting the symmetries in the model [20], and we are hence able to extend the size of networks.
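The counting abstraction described above is easy to illustrate. The following minimal sketch (the representation and function names are ours, not the paper's formal definitions from Sect. 5) collapses all permutations of individual phases into a single population state:

```python
from collections import Counter

# Counting abstraction (illustrative): a population state records, for
# each phase value 1..T, how many oscillators currently have that phase.
def to_population(phases, T):
    counts = Counter(phases)
    return tuple(counts.get(phi, 0) for phi in range(1, T + 1))
```

For example, `to_population([3, 1, 3, 2], T=4)` and `to_population([1, 2, 3, 3], T=4)` denote the same global state `(1, 1, 2, 0)`, whereas a concrete model would distinguish them; this is exactly the symmetry the abstraction exploits.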

The notion of population models should not be confused with population protocols [5], a formalism to express distributed algorithms. In contrast to our setting, communication in population protocols is always between two agents, where one agent initiates the communication and the other responds. Furthermore, even though the agents cannot identify the other agents in the network, within the global model each agent is uniquely associated with a state. In our model, we cannot distinguish between two different agents sharing the same state, even at the global level. Finally, our oscillators may change their state without interacting with other oscillators, while the agents in a population protocol must communicate with another agent to change their internal state.

Other techniques have been used to model populations of processes. For example, population-based models using PEPA, a stochastic process algebra, are discussed in [30]. There, the modelling of individuals using PEPA is introduced, and, if the identification of individuals is not necessary (as in our work), a population-based approach is advocated to allow larger populations to be modelled. Unlike our approach, these population-based models make use of a continuous approximation of the discrete behaviour.

Chemical Reaction Networks (CRNs), see for example [12, 45], have been used to represent the behaviour of reactions between chemicals in a solution. They have been provided with different semantics, including both deterministic and stochastic ones. CRNs have been used to model asynchronous logic circuits, with properties of the models being verified using the probabilistic model checker PRISM [13]. They have also been investigated with respect to their capacity to represent discrete probability distributions, focusing on their steady state [12]. In our work, we model both a fully connected network of oscillators and the population models directly as stochastic processes, in particular discrete-time Markov chains, and we focus on synchronisation properties rather than steady states. Like [13], we use PRISM to verify these properties.

Similarly to typical definitions of counter abstractions [9, 21], we use counters to model concurrent entities that are indistinguishable for our purposes. For example, to analyse the probability of eventually reaching a synchronised state, we are not interested in an order of oscillators, which would be artificial anyway. However, in contrast to these approaches, we do not include means to introduce new oscillators into a model. That is, the values within our population models are naturally bounded by the number of oscillators within the network.

3 Preliminaries

In this section we define discrete-time Markov chains (DTMCs), stochastic processes with discrete state space and discrete time, and introduce Probabilistic Computation Tree Logic (PCTL), a logic that can be used to reason about probabilistic reachability and rewards in these processes.

Throughout this paper, we use the notation \(f\oplus [x \mapsto y]\), where f is a function, to express the update of f at x by y, i.e., the function that coincides with f everywhere except at x, where it takes the value y.
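In executable terms, this notation can be mimicked with finite maps; the sketch below assumes functions with finite domains, represented as Python dicts:

```python
# f ⊕ [x ↦ y] as a non-destructive update of a finite map (a sketch;
# the paper's functions need not have finite domains in general).
def fun_update(f, x, y):
    g = dict(f)  # copy, so that f itself is left unchanged
    g[x] = y
    return g

f = {1: 'a', 2: 'b'}
g = fun_update(f, 2, 'c')  # g coincides with f except at 2
```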

3.1 Discrete-time Markov chains

DTMCs can be used to model systems where the discrete-time evolution of the system can be represented by a discrete probabilistic choice over several outcomes at each step.

Definition 1

A discrete-time Markov chain D is a tuple \(({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\), where \({\mathcal {S}}\) is a finite set of states, \(\sigma _I \in {\mathcal {S}}\) is the initial state, \({\mathbf {P}}: {\mathcal {S}}\times {\mathcal {S}}\rightarrow [0, 1]\) is the transition probability matrix, subject to \(\sum _{\sigma ^\prime \in {\mathcal {S}}} {\mathbf {P}}(\sigma , \sigma ^\prime ) = 1\) for all \(\sigma \in {\mathcal {S}}\), where \({\mathbf {P}}(\sigma , \sigma ^\prime )\) gives the probability of transitioning from \(\sigma \) to \(\sigma ^\prime \), and \(L : {\mathcal {S}}\rightarrow {\mathbb {P}}({\mathcal {L}})\) is a labelling function that assigns properties of interest from a set of labels \({\mathcal {L}}\) to states. We say that there is a transition between two states \(\sigma , \sigma ^\prime \in {\mathcal {S}}\) if \({\mathbf {P}}(\sigma , \sigma ^\prime ) > 0\).
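As a sketch, Definition 1 translates directly into a sparse representation of the transition matrix; the row-sum check below mirrors the stochasticity condition (names and representation are illustrative, not from the paper):

```python
# A DTMC's transition matrix P as a dict of dicts: P[s][t] is the
# probability of moving from s to t; absent entries are 0.
def is_stochastic(P, states, tol=1e-9):
    # every row must sum to 1, as required by Definition 1
    return all(abs(sum(P[s].get(t, 0.0) for t in states) - 1.0) <= tol
               for s in states)

states = {'s0', 's1'}
P = {'s0': {'s0': 0.5, 's1': 0.5},   # fair coin flip in s0
     's1': {'s1': 1.0}}              # s1 is absorbing
```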

Intuitively, a DTMC is a state transition system where transitions between states are labelled with probabilities greater than 0, and where states are labelled with properties of interest. An execution path \(\omega \) of a DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) is a non-empty finite, or infinite, sequence \(\sigma _0 \sigma _1 \sigma _2 \cdots \) where \(\sigma _i \in {\mathcal {S}}\) and \({\mathbf {P}}(\sigma _i, \sigma _{i+1}) > 0\) for \(i \geqslant 0\). We denote the set of all paths starting in state \(\sigma \) by \( Paths ^D(\sigma )\), and the set of all finite paths starting in \(\sigma \) by \( Paths _f^D(\sigma )\). For paths whose first state is the initial state \(\sigma _I\) we simply write \( Paths ^D\) and \( Paths _f^D\). Furthermore, we write \( Paths \) and \( Paths _f\) if D is clear from the context. For a finite path \(\omega _f \in Paths _f(\sigma )\) for some state \(\sigma \), the cylinder set of \(\omega _f\) is the set of all infinite paths in \( Paths (\sigma )\) that share \(\omega _f\) as a prefix. The probability of taking a finite path \(\sigma _0 \sigma _1 \cdots \sigma _n\) is given by \(\prod _{i=1}^n {\mathbf {P}}(\sigma _{i-1}, \sigma _i)\). This measure over finite paths can be extended to a probability measure \( Pr _\sigma \) over the set of infinite paths \( Paths (\sigma )\), defined on the smallest sigma-algebra over \( Paths (\sigma )\) that contains the cylinder sets of all paths in \( Paths _f(\sigma )\). For the probability measure over paths where the first state is the initial state \(\sigma _I\) we simply write \( Pr \). For a detailed description of the construction of the probability measure we refer the reader to [31].
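The probability of a finite path is just the product of its transition probabilities; a sketch, with the transition matrix represented as a dict of dicts (an illustrative representation, not from the paper):

```python
# Probability of a finite path s0 s1 ... sn: the product of the
# transition probabilities P(s_{i-1}, s_i); 0 if any step is impossible.
def path_probability(P, path):
    prob = 1.0
    for prev, nxt in zip(path, path[1:]):
        p = P[prev].get(nxt, 0.0)
        if p == 0.0:
            return 0.0  # not a path of the DTMC
        prob *= p
    return prob

P = {'s0': {'s0': 0.5, 's1': 0.5}, 's1': {'s1': 1.0}}
```

For instance, `path_probability(P, ['s0', 's0', 's1'])` yields 0.5 · 0.5 = 0.25, and the empty product for a single-state path is 1.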

3.2 Probabilistic computation tree logic

Probabilistic Computation Tree Logic [28] (PCTL) is a probabilistic extension of the temporal logic CTL. Properties for DTMCs can be formulated in PCTL and then checked against the DTMCs using model checking.

Definition 2

The syntax of PCTL is given by:

$$\begin{aligned} {\varPhi }&= p \mid \lnot {\varPhi } \mid ({\varPhi } \wedge {\varPhi }) \mid \mathrm {P}_{\bowtie \lambda }[{\varPsi }] \\ {\varPsi }&= {\varPhi } \ \mathrm {U}\ {\varPhi } \end{aligned}$$

where p is an atomic proposition taken from the set of labels \({\mathcal {L}}\), \(\bowtie \ \in \{<,\leqslant ,\geqslant ,>\}\) and \(\lambda \in [0, 1]\).

Formulas denoted by \({\varPhi }\) are state formulas and formulas denoted by \({\varPsi }\) are path formulas. A PCTL formula is always a state formula, and a path formula can only occur inside the \(\mathrm {P}\) operator. We now give the semantics of PCTL over a DTMC.

Definition 3

Given a DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\), we inductively define the satisfaction relation \(\models \) for any state \(\sigma \in {\mathcal {S}}\) as follows:

$$\begin{aligned} \sigma&\models p&\quad \text {iff} \quad&p \in L(\sigma ) \\ \sigma&\models \lnot {\varPhi }&\quad \text {iff} \quad&\sigma \not \models {\varPhi } \\ \sigma&\models ({\varPhi } \wedge {\varPhi }^\prime )&\quad \text {iff} \quad&\sigma \models {\varPhi } \text { and } \sigma \models {\varPhi }^\prime \\ \sigma&\models \mathrm {P}_{\bowtie \lambda }[{\varPsi }]&\quad \text {iff} \quad&Pr \{\omega \in Paths (\sigma ) \mid \omega \models {\varPsi }\} \bowtie \lambda \end{aligned}$$

where \(p \in {\mathcal {L}}\), and for any path \(\omega = \sigma _0 \sigma _1 \sigma _2 \cdots \) of D as follows:

$$\begin{aligned} \omega&\models {\varPhi } \ \mathrm {U}\ {\varPhi }^\prime&\quad \text {iff} \quad&\text {there exists }i \in {\mathbb {N}}\text { s.t. }\omega _i \models {\varPhi }^\prime \text { and for all } j < i. \omega _j \models {\varPhi }. \end{aligned}$$

Disjunction, \( true \), \( false \), and implication are derived as usual, and we define eventuality as \(\mathrm {F}\ {\varPhi } \equiv true \ \mathrm {U}\ {\varPhi }\). When model checking any PCTL formula of the form \(\mathrm {P}_{\bowtie \lambda }[{\varPsi }]\) the actual probability is first calculated and then compared to the bound \(\bowtie \lambda \) [35]. We will denote this calculated probability value by \(\mathrm {P}_{=?}[{\varPsi }]\).

While probabilistic reachability properties allow us to quantitatively analyse models with respect to the likelihood of reaching some set of states, they do not allow us to reason about other properties of interest, for instance the expected time taken for a network to synchronise [24], or the expected energy consumption of a network [25]. Therefore, we will often want to augment the DTMC corresponding to a population model with rewards. We do this by annotating states and transitions with real-valued rewards (respectively costs, should values be negative) that are awarded when states are visited, or transitions taken.

Definition 4

Given a DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) a reward structure for D is a pair \({\mathcal {R}}= (R_s, R_t)\) where \(R_s: {\mathcal {S}}\rightarrow {\mathbb {R}}\) and \(R_t: {\mathcal {S}}\times {\mathcal {S}}\rightarrow {\mathbb {R}}\) are the state reward and transition reward functions that respectively map states and transitions in D to real valued rewards.

For any finite path \(\omega = \sigma _0 \cdots \sigma _k\) of D we define the total reward accumulated along that path up to, but not including, \(\sigma _k\) as

$$\begin{aligned} \mathrm {tot}_{{\mathcal {R}}}(\sigma _0 \cdots \sigma _k) = \sum _{i=0}^{k-1} \left( R_s(\sigma _i) + R_t(\sigma _i, \sigma _{i+1}) \right) . \end{aligned}$$
(1)
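Equation (1) can be read off directly as code; the concrete reward functions below are illustrative placeholders, not taken from the paper:

```python
# Total reward along a finite path (Eq. (1)): state and transition
# rewards are accumulated up to, but not including, the final state.
def total_reward(path, R_s, R_t):
    return sum(R_s(path[i]) + R_t(path[i], path[i + 1])
               for i in range(len(path) - 1))

R_s = lambda s: 1.0                        # e.g. one time unit per step
R_t = lambda s, t: 0.5 if s != t else 0.0  # e.g. a cost for changing state
```

On the path a b b this gives (1 + 0.5) + (1 + 0) = 2.5; note that the state reward of the final state b is not counted.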

Given a DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) augmented with a reward structure \({\mathcal {R}}\), and some state \(\sigma _0 \in {\mathcal {S}}\), we will often want to reason about the reward that is accumulated along a path \(\omega = \sigma _0 \sigma _1 \sigma _2 \cdots \in Paths (\sigma _0)\) that eventually passes through some set of target states \({\varOmega }\subset {\mathcal {S}}\). We first define a random variable over the set of infinite paths starting in state \(\sigma _0\), \(V_{{\varOmega }, \sigma _0} : Paths (\sigma _0) \rightarrow {\mathbb {R}}\cup \{\infty \}\). If \(\sigma _0 = \sigma _I\) then we will simply use \(V_{\varOmega }\). Given the set \(\omega _{\varOmega }= \{j \mid \sigma _j \in \omega \cap {\varOmega }\}\) of indices of states in \(\omega \) that are in \({\varOmega }\) we define the random variable

$$\begin{aligned} V_{{\varOmega }, \sigma _0}(\omega ) = {\left\{ \begin{array}{ll} \infty &{} \text { if } \omega _{\varOmega }= \emptyset \\ \mathrm {tot}_{{\mathcal {R}}}(\sigma _0 \cdots \sigma _k) &{} \text { otherwise, where } k = \min \omega _{\varOmega }, \end{array}\right. } \end{aligned}$$

and define the expectation of \(V_{{\varOmega }, \sigma _0}\) with respect to \( Pr _{\sigma _0}\) by

$$\begin{aligned} E[V_{{\varOmega }, \sigma _0}] = \int _{\omega \in Paths (\sigma _0)} V_{{\varOmega }, \sigma _0}(\omega ) \ d Pr _{\sigma _0} = \sum _{\omega \in Paths (\sigma _0)} V_{{\varOmega }, \sigma _0}(\omega ) \ Pr _{\sigma _0}\{\omega \}. \end{aligned}$$

The logic of PCTL can be extended to include reward properties by introducing the state formula \(\mathrm {R}_{\bowtie r}[\mathrm {F}\ {\varPhi }]\), where \(\bowtie \in \{<,\leqslant ,\geqslant ,>\}\) and \(r \in {\mathbb {R}}\) [33]. Given a state \(\sigma \in {\mathcal {S}}\), a real value r, and a PCTL state formula \({\varPhi }\), the semantics of this formula is given by

$$\begin{aligned} \sigma \models \mathrm {R}_{\bowtie r}[\mathrm {F}\ {\varPhi }] \quad \text {iff}\quad E[V_{ Sat ({{\varPhi }}), \sigma }] \bowtie r, \end{aligned}$$

where \( Sat ({\varPhi })\) denotes the set of states in \({\mathcal {S}}\) that satisfy \({\varPhi }\). Similarly to the operator \(\mathrm {P}\), for any PCTL formula of the form \(\mathrm {R}_{\bowtie r}[{\varPsi }]\) we will denote the calculated expected value by \(\mathrm {R}_{=?}[{\varPsi }]\).

4 Concrete model of a network of pulse-coupled oscillators

In this section we give a brief introduction to the formal model of a single pulse-coupled oscillator, as originally presented in previous work [24]. Subsequently, we encode fully-coupled networks of such oscillators as discrete time Markov chains.

4.1 Pulse-coupled oscillator model

A model of a pulse-coupled oscillator is composed of two features. Firstly, we need to model the oscillation, a periodic variation of the state of the oscillator. In our model the phase of an oscillator indicates its progression through an oscillation cycle, which is divided into discrete steps. We assume that the oscillation frequency is the same for all oscillators; the internal clocks of all oscillators are running at the same speed. Secondly, the model must encapsulate the interactions between the oscillators. Oscillators that are pulse-coupled interact with each other at discrete times during their oscillation cycles. At some distinguished point an oscillator transmits a message to other oscillators that react to the reception of the message by adjusting their phase. We assume that the duration of time corresponding to an increment of 1 to the phase of an oscillator is long enough to perceive all messages coming from other oscillators. That is, the only way a message may be lost is if the sending oscillator fails to transmit its message.

The phase of an oscillator u at time t is denoted by \(\phi _u(t)\). The phase of each u progresses through a sequence of discrete integer values bounded by some \(T \geqslant 1\), its maximal phase. The phase progression over time of a single uncoupled oscillator is determined by the successor function, where the phase increases over time until it exceeds T, at which point the oscillator will fire in the next moment in time and the phase will reset to one. The phase progression of an uncoupled oscillator is therefore cyclic with period T, and we refer to one cycle as an oscillation cycle.

When an oscillator fires, it may happen that its firing is not perceived by any of the other oscillators coupled to it. We call this a broadcast failure and denote its probability by \(\mu \in [0,1]\). Note that \(\mu \) is a global parameter, hence the chance of broadcast failure is identical for all oscillators. When an oscillator fires, and a broadcast failure does not occur, it perturbs the phase of all oscillators to which it is coupled; we use \(\alpha _u(t)\) to denote the number of all other oscillators that are coupled to u and will fire at time t.

Definition 5

The phase response function is a nonnegative increasing function \({\varDelta }: \{1, \ldots , T\} \times {\mathbb {N}} \times {\mathbb {R}}^+ \rightarrow {\mathbb {N}}\) that maps the phase of an oscillator u, the number of other oscillators perceived to be firing by u, and a real value defining the strength of the coupling between oscillators, to an integer value corresponding to the perturbation to phase induced by the firing of oscillators where broadcast failures did not occur. We require \({\varDelta }({\varPhi }, 0, \epsilon )=0\) for all possible phase response functions, that is, oscillators are only perturbed if they perceive at least one other firing oscillator.

We can introduce a refractory period into the oscillation cycle of each oscillator. A refractory period is an interval of discrete values \([1, R] \subseteq [1, T]\) where \( R \leqslant T\) is the size of the refractory period, such that if \(\phi _u(t)\) is inside the interval, for some oscillator u at time t, then u cannot be perturbed by other oscillators to which it is coupled. If \(R = 0\) then we set \([1, R] = \emptyset \), and there is no refractory period at all.

Definition 6

The refractory function \(\mathrm {ref}: \{1, \ldots , T\} \times {\mathbb {N}} \rightarrow {\mathbb {N}}\) takes as parameters the phase \({\varPhi }\) of an oscillator and the degree of perturbation \(\delta \) to that phase, and is defined as \(\mathrm {ref}({\varPhi }, \delta ) = {\varPhi }\) if \({\varPhi }\in [1, R]\), and \(\mathrm {ref}({\varPhi },\delta ) = {\varPhi }+ \delta \) otherwise. That is, a perturbation has no effect on the phase of an oscillator while it is within its refractory period.

The phase evolution of an oscillator u over time is then defined as follows, where the update function and firing predicate, respectively denote the updated phase and firing of oscillator u,

$$\begin{aligned} update _u(t)&= 1 + \mathrm {ref}(\phi _u(t), {\varDelta }(\phi _u(t), \alpha _u(t), \epsilon )), \\ fire _u(t)&= update _u(t) > T, \\ \phi _u(t+1)&= {\left\{ \begin{array}{ll} 1 &{} \text {if } fire _u(t) \\ update _u(t) &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
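The phase evolution above can be sketched as follows, instantiated with the example phase response function \({\varDelta }({\varPhi }, \alpha , \epsilon ) = [{\varPhi } \cdot \alpha \cdot \epsilon ]\) used later in Sect. 4.2; the function names are ours:

```python
def delta(phi, alpha, eps):
    # example phase response function: round phi * alpha * eps to the
    # nearest integer (rounding half up)
    return int(phi * alpha * eps + 0.5)

def ref(phi, d, R):
    # refractory function (Definition 6): perturbations are ignored
    # while the phase lies within the refractory period [1, R]
    return phi if phi <= R else phi + d

def next_phase(phi, alpha, T, R, eps):
    # update function and firing predicate: fire and reset to 1 if the
    # updated phase exceeds T, otherwise adopt the updated phase
    updated = 1 + ref(phi, delta(phi, alpha, eps), R)
    return 1 if updated > T else updated
```

With T = 9, R = 2 and ε = 0.3 (the parameters of Fig. 1), an oscillator at phase 7 that perceives one firing oscillator fires and resets, `next_phase(7, 1, 9, 2, 0.3) == 1`, while one at phase 4 merely advances to 6.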

For real deployments of synchronisation protocols it is often the case that the duration of a single oscillation cycle will be at least several seconds [15, 42]. The perturbation induced by the firing of a group of oscillators may lead to groups of other oscillators to which they are coupled firing in turn. The firing of these other oscillators may then cause further oscillators to fire, and so forth, leading to a “chain reaction”, where each group of oscillators triggered to fire is absorbed by the initial group of firing oscillators. Since the whole chain reaction of absorptions may occur within just a few milliseconds, and in our model the oscillation cycle is a sequence of discrete states, when a chain reaction occurs the phases of all perturbed oscillators should be updated in one single time step.

4.2 Modelling the network as a DTMC

In this section, we present our model of a fully-connected network of pulse-coupled oscillators. Observe that by the definition of single PCOs, the reaction of an oscillator u to incoming communication of other oscillators depends on the number of oscillators communicating and the implementation of the perturbation function. We choose to present the full semantics of such a network, instead of defining it as the parallel composition of single oscillators. After presenting the semantics, we will discuss this decision.

We model the whole, fully-connected network of oscillators as a single DTMC \(D = ({\mathcal {S}}, s_I, {\mathbf {P}}, L)\), where each state \(s\in {\mathcal {S}}\) denotes a global state of the network, with the exception of \(s_I\), which is a distinguished initial state. More precisely, each state of the DTMC contains the internal states of each oscillator, as well as an abstraction of the environment.

We model each transition of an oscillator as a single transition within the DTMC. However, since the oscillators may influence each other within a single time step (that is, when they are firing), we cannot simply allow for arbitrary sequences of transitions. For instance, as stated in Sect. 4.1, all oscillators run with the same clock-speed. Hence we need to prevent a single oscillator from taking a transition and thus progressing its phase without giving the other oscillators a chance to do the same. We achieve this by the following means:

  • we divide the internal computation of each oscillator into two modes: start and update, and

  • we add a counter to the model, containing the number of oscillators that fire.

The counter also possesses both modes, and resets at the start of each “round” of computation. First, in the start mode, each oscillator checks whether it would fire, according to its phase response function and the current number of oscillators that already fired, as given by the counter. If it does, it increases the counter and updates its mode to update, otherwise it just updates its mode. If all oscillators are in the update mode, they compute their new phases in a single step, according to the phase response function and the current state of the environment counter. Furthermore, we impose an order on the evaluation of the oscillators in the start mode if at least one oscillator fires, starting from the highest phase to the lowest. In this way, we model that firing oscillators are always perceived by the other nodes, and thus may lead to the firing of the latter. In particular, this reflects the absorptions of messages as defined in the previous section.
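The round structure just described can be sketched as a simulation; this is illustrative and not the DTMC semantics itself. It takes the phase response function `delta` and refractory function `ref` as parameters (hypothetical names mirroring Definitions 5 and 6), `mu` is the broadcast failure probability μ, and `rng` is any object offering a `random()` method:

```python
def round_step(phases, T, R, eps, mu, delta, ref, rng):
    """One round: firing checks in the start mode, evaluated from the
    highest phase to the lowest, followed by one simultaneous update."""
    count = 0
    # start mode: a successful broadcast increments the counter, which
    # may in turn perturb lower-phase oscillators into firing
    for u in sorted(range(len(phases)), key=lambda u: -phases[u]):
        fires = 1 + ref(phases[u], delta(phases[u], count, eps), R) > T
        if fires and rng.random() >= mu:   # broadcast succeeds
            count += 1
    # update mode: all phases are recomputed in a single step
    new_phases = []
    for phi in phases:
        updated = 1 + ref(phi, delta(phi, count, eps), R)
        new_phases.append(1 if updated > T else updated)
    return new_phases, count
```

With the parameters of Fig. 1 and no broadcast failures (μ = 0), the configuration (4, 9, 7, 2) evolves in one round to (7, 1, 1, 3) with two successful broadcasts.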

The general idea of the progress of the network of oscillators is visualised in Fig. 1. In the figure, each rounded rectangle shows a state of a network of four oscillators. The circles represent the nodes, where we inscribe their current phases and an abbreviation of their modes. A node that is about to fire is indicated by a starred circle, while a shaded circle indicates a node that is within the refractory period. The rectangle denotes the environment counter, with its corresponding value and mode. The phase response function is \({\varDelta }({\varPhi }, \alpha , \epsilon ) = \left[ {\varPhi } \cdot \alpha \cdot \epsilon \right] \), where \([ \cdot ]\) denotes rounding to the nearest integer. We set \(\epsilon = 0.3\), and \(\mu =0.2\).

In the first state, all outgoing transitions only check whether to increase the counter. Since no oscillator is in the firing phase, all oscillators just update their mode (observe that the single arrow actually denotes four transitions). In the next step, all oscillators increase their phase by one, and reset their mode to start. In the next four transitions, oscillator 2 fires and increases the counter, which in turn is sufficient for oscillator 3 to fire as well, since \(7 + [7 \cdot 1 \cdot 0.3] + 1 = 7 + [2.1] + 1 = 7 + 2 + 1 = 10 > T\). The broadcast of oscillator 3, however, fails, so it does not increase the counter. Neither oscillator 1 nor oscillator 4 is perturbed to fire: the latter because it is still in its refractory period, and thus ignores the firing oscillator, and the former since \( 4 + [4 \cdot 1 \cdot 0.3] + 1 = 4 + [1.2] + 1 = 4 + 1 + 1 = 6 \leqslant T\). During the last transition of the example, oscillators 2 and 3 reset their phase to one, while oscillator 1 is perturbed and increases its phase by two steps at once. Oscillator 4 is within its refractory period, which means that it is not perturbed, and simply increments its phase. In addition to these transitions, we also need some bookkeeping transitions to ensure that the counter is reset before the oscillators check their phase response. Furthermore, observe that in the example, it is crucial that oscillator 3 checks its response after oscillator 2 has increased the counter, since otherwise oscillator 3 would not have been perturbed to fire.

Formally, we combine the states of the oscillators and the environment into a single state of the DTMC. Each oscillator can be described by a tuple consisting of the current phase \({\varPhi }\) of the oscillator and the mode \(\mathsf {mode}\) within this phase. The phase ranges from 1 to T, while the mode takes values from \(\{ start , update \}\). Furthermore, we use a single counter to keep track of the number of oscillators that fired successfully within a single phase computation.

Fig. 1 Transitions in the Concrete Oscillator Model (\(N=4, T = 9, R = 2\))

For a network of N oscillators, a state of the DTMC consists of a function \(\mathsf {osc}\) that associates a phase and mode with each oscillator,

$$\begin{aligned} \mathsf {osc}:\{1,\dots ,N\} \rightarrow (\{1, \dots , T\}\times \{ start , update \}), \end{aligned}$$

and the state of the environment \(\mathsf {env}\) counting the number of oscillators that fired,

$$\begin{aligned} \mathsf {env}\in \{0,\dots , N\} \times \{ start , update \}. \end{aligned}$$

A state is therefore a tuple \(s= (\mathsf {env}, \mathsf {osc})\), where \(\mathsf {env}\) is the state of the environment, and \(\mathsf {osc}\) is the state of the network. We denote the set of all concrete system states by \({\mathcal {S}}_{c}\). For simplicity, we will use the notation \({\phi }({s},{u})\) to refer to the phase of oscillator u in state \(s\), and similarly, \({\mathsf {mode}}({s},{u})\) to refer to its mode. By abuse of notation, we will also write \({\mathsf {mode}}({s},{\mathsf {env}})\) to denote the mode of the environment and \(\mathsf {count}(s)\) to refer to the value of the environment counter in state \(s\). In the definitions of the transitions, we sometimes need to express that an oscillator changes neither its phase nor its mode. To that end, we define the stability of oscillator u between \(s\) and \(s^\prime \) as

$$\begin{aligned} \mathsf {stable}_{u}(s,s^\prime )&\equiv {\phi }({s},{u}) = {\phi }({s^\prime },{u}) \wedge {\mathsf {mode}}({s},{u}) = {\mathsf {mode}}({s^\prime },{u}) . \end{aligned}$$

Similarly, we define stability of the environment:

$$\begin{aligned} \mathsf {stable}_{\mathsf {env}}(s,s^\prime )&\equiv \mathsf {count}(s) = \mathsf {count}(s^\prime ) \wedge {\mathsf {mode}}({s},{\mathsf {env}}) = {\mathsf {mode}}({s^\prime },{\mathsf {env}}) \end{aligned}$$

We use the notation \(\mathsf {init}_{{\varPhi }}(s) = \{u \mid {\mathsf {mode}}({s},{u}) = start \wedge {\phi }({s},{u}) = {\varPhi }\}\) for the set of all oscillators sharing phase \({\varPhi }\) and mode \( start \) in the state \( s\). Furthermore, we also use the notation \(\mathsf {init}_{}(s) = \{u \mid {\mathsf {mode}}({s},{u}) = start \}\).

We now define the transition probabilities between states. To do this we first distinguish the following cases:

  1. the environment resets its counter;

  2. no oscillator has a clock value of T;

  3. an oscillator is in the mode \( start \), has a clock value lower than T, is perturbed, but not enough to fire;

  4. an oscillator is in the mode \( start \), has a clock value lower than T and is perturbed enough to fire;

  5. an oscillator is in the mode \( start \), has a clock value lower than T and is perturbed enough to fire, but fails to do so;

  6. an oscillator is in the mode \( start \), has a clock value of T, and broadcasts its pulse;

  7. an oscillator is in the mode \( start \), has a clock value of T, and fails to broadcast its pulse;

  8. all oscillators are in the mode \( update \), update their clock and reset their state to \( start \).

We will impose an order on certain transitions for two reasons. Firstly, we will restrict transitions that are only used for bookkeeping purposes. For example, we will require that the reset transition of the environment is taken before any of the transitions for the oscillators within a phase are activated. In particular, this means that each computation starts with a transition of type 1. Secondly, we need to ensure that, if at least one oscillator fires, the phase response of all oscillators is evaluated starting with oscillators in the highest phase, down to the lowest phase, as described above. The cases stated above are reflected in the following definitions for the transition probability between two states \(s= (\mathsf {env}, \mathsf {osc})\) and \(s^\prime = (\mathsf {env}^\prime , \mathsf {osc}^\prime )\).

Case 1, where the environment resets its counter, is treated as follows. In the precondition, we require that the mode of the counter is start, and that the state of the oscillators does not change from \(s\) to \(s^\prime \). Furthermore, the mode of the counter changes to update in \(s^\prime \), and its value is set to 0. Since this transition is mandatory at the beginning of each round, its probability is 1. An example of this transition can be found in Fig. 1 in the transition from state (c) to (d).

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = start \wedge {\mathsf {mode}}({s^\prime },{\mathsf {env}}) = update \wedge \mathsf {count}(s^\prime ) = 0 \nonumber \\&\wedge \forall u :\mathsf {stable}_{u}(s,s^\prime ) , \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = 1. \end{aligned}$$
(2)

If no oscillator is at the end of its cycle, that is, in case 2, we define the probability of one oscillator updating its mode as follows. Observe that we have to normalise the transition probability by the number of all oscillators that have not transitioned to their update mode yet. This is correct, since no oscillator fires, which also means that no oscillator can be activated beyond the maximum phase. This implies in particular that the order of oscillator transitions does not matter in this round. For example, all outgoing transitions from state (a) in Fig. 1 have probability \(\frac{1}{4}\), while the subsequent transitions (which are not shown in the figure) occur with probability \(\frac{1}{3}\), and so on.

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and there is a } w \text { s.t.} \nonumber \\&{\mathsf {mode}}({s},{w}) = start \wedge {\mathsf {mode}}({s^\prime },{w}) = update \wedge {\phi }({s},{w}) = {\phi }({s^\prime },{w}) \nonumber \\&\wedge \forall u :{\phi }({s},{u}) < T \wedge \forall u :u \ne w \rightarrow \mathsf {stable}_{u}(s,s^\prime ) \wedge \mathsf {stable}_{\mathsf {env}}(s,s^\prime ) \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = \frac{1}{|\mathsf {init}_{}(s)|}. \end{aligned}$$
(3)

Now we will consider the cases 3, 4 and 5, where some oscillator already fired (i.e., \(\mathsf {count}(s) > 0\)), and other oscillators are perturbed. In all three cases, one common precondition is that the counter is in its update mode and that there is an appropriate oscillator in the start mode. One complication arises: we have to ensure that the messages of firing oscillators lead to the perturbation of the other oscillators. Recall that the perturbation function is increasing, and thus a higher phase of an oscillator may result in a higher perturbation. That is, oscillators with a higher phase need to be perturbed by fewer firing oscillators before their phase is increased beyond the threshold and they in turn fire. However, if the oscillators with high phases fire, their additional messages may be enough to perturb oscillators with lower phases to fire as well. Hence, if we did not enforce an order from high to low phases, oscillators with a lower phase might not be perturbed when oscillators with a higher phase fire. To solve this, we only allow the oscillators to update their mode once all oscillators with a higher phase have been considered. Observe that we normalise the transition probabilities according to the number of oscillators satisfying similar conditions. Formally, this means that we need to normalise on the number of oscillators with the same phase in the start mode. To model case 3, we only change the mode of the corresponding oscillator to update, and keep the rest of the state. The precondition is that all oscillators with higher phases have already been considered, and that the oscillator under consideration is not perturbed enough to fire, is still within its refractory period, or both. For example, from state (f) on, oscillators 1 and 4 are possibly perturbed, but not enough to fire. In this case, we will still first update oscillator 1, due to its higher phase.

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and there is a } w \text { s.t.} \nonumber \\&{\mathsf {mode}}({s},{w}) = start \wedge {\mathsf {mode}}({s^\prime },{w}) = update \wedge {\phi }({s},{w}) = {\phi }({s^\prime },{w}) \nonumber \\&\wedge {\phi }({s},{w}) < T \wedge \exists u :{\phi }({s},{u}) = T \nonumber \\&\wedge \forall u :u \ne w \rightarrow ({\mathsf {mode}}({s},{u}) = update \vee {\phi }({s},{u}) \leqslant {\phi }({s},{w}))\nonumber \\&\wedge {\phi }({s},{w}) + {\varDelta }( {\phi }({s},{w}), \mathsf {count}(s),\epsilon ) + 1 \leqslant T \vee {\phi }({s},{w}) \leqslant R \nonumber \\&\wedge \forall u :u \ne w \rightarrow \mathsf {stable}_{u}(s,s^\prime )\nonumber \\&\wedge \mathsf {stable}_{\mathsf {env}}(s,s^\prime ) \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = \frac{1}{|\mathsf {init}_{{\phi }({s},{w})}(s)|}. \end{aligned}$$
(4)

If an oscillator is perturbed, and actually fires, we update its mode to update, and increase the counter of the environment. We only allow this transition if the oscillator is perturbed enough to fire, and is outside of its refractory period. This definition corresponds to case 4. The probability of such a transition is the probability that a broadcast failure does not occur, \(1 - \mu \), normalised by the number of oscillators in the same phase that have not yet been considered.

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and there is a } w \text { s.t.} \nonumber \\&{\mathsf {mode}}({s},{w}) = start \wedge {\mathsf {mode}}({s^\prime },{w}) = update \wedge {\phi }({s},{w}) = {\phi }({s^\prime },{w}) \nonumber \\&\wedge {\phi }({s},{w}) < T \wedge \exists u :{\phi }({s},{u}) = T \nonumber \\&\wedge \forall u :u \ne w \rightarrow ({\mathsf {mode}}({s},{u}) = update \vee {\phi }({s},{u}) \leqslant {\phi }({s},{w}))\nonumber \\&\wedge {\phi }({s},{w}) + {\varDelta }( {\phi }({s},{w}), \mathsf {count}(s),\epsilon ) + 1> T \wedge {\phi }({s},{w}) > R \nonumber \\&\wedge \forall u :u \ne w \rightarrow \mathsf {stable}_{u}(s,s^\prime )\nonumber \\&\wedge \mathsf {count}(s^\prime ) = \mathsf {count}(s) +1 \nonumber \\&\wedge {\mathsf {mode}}({s},{\mathsf {env}}) = {\mathsf {mode}}({s^\prime },{\mathsf {env}}) \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = \frac{1 - \mu }{|\mathsf {init}_{{\phi }({s},{w})}(s)|} \end{aligned}$$
(5)

If the oscillator fails to fire, as described by case 5, the only differences to the preceding case are that we do not increase the counter, and that the probability of the transition is the normalised broadcast failure probability. As an example, consider the transition in Fig. 1 from state (e) to (f), where oscillator 3 is indeed perturbed to the end of its cycle, but fails to fire. That is, the environment counter is not increased. The probability of this transition is \(\frac{\mu }{1} = 0.2\), since oscillator 3 is the only oscillator with the phase value 7.

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and there is a } w \text { s.t.} \nonumber \\&{\mathsf {mode}}({s},{w}) = start \wedge {\mathsf {mode}}({s^\prime },{w}) = update \wedge {\phi }({s},{w}) = {\phi }({s^\prime },{w}) \nonumber \\&\wedge {\phi }({s},{w}) < T \wedge \exists u :{\phi }({s},{u}) = T \nonumber \\&\wedge \forall u :u \ne w \rightarrow ({\mathsf {mode}}({s},{u}) = update \vee {\phi }({s},{u}) \leqslant {\phi }({s},{w}))\nonumber \\&\wedge {\phi }({s},{w}) + {\varDelta }( {\phi }({s},{w}), \mathsf {count}(s),\epsilon ) + 1> T \wedge {\phi }({s},{w}) > R \nonumber \\&\wedge \forall u :u \ne w \rightarrow \mathsf {stable}_{u}(s,s^\prime )\nonumber \\&\wedge \mathsf {stable}_{ \mathsf {env}}(s,s^\prime ) \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = \frac{\mu }{|\mathsf {init}_{{\phi }({s},{w})}(s)|}. \end{aligned}$$
(6)

Now we turn to the cases 6 and 7, where some oscillator is at the end of its cycle. The preconditions of both cases are similar to the preceding cases: the counter is required to be in the update mode, and there is an oscillator w, whose phase is T and mode is start. Furthermore, in \(s^\prime \), the mode of w is update, and the state of all other oscillators does not change. The difference between the cases is whether the counter is increased, that is, whether the oscillator manages to broadcast its signal. The probability of succeeding is \(\frac{1 - \mu }{|\mathsf {init}_{T}(s)|}\), since there may be more than one oscillator in phase T at state \(s\). Hence we have to normalise the transition probability accordingly. Similarly, the probability of failing to fire is \(\frac{ \mu }{|\mathsf {init}_{T}(s)|}\). Thus, the probability of the transition from state (d) to (e) is \(\frac{1 - 0.2}{1} = 0.8\), since oscillator 2 is the only oscillator in the firing phase.

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and there is a } w \text { s.t.} \nonumber \\&{\mathsf {mode}}({s},{w}) = start \wedge {\phi }({s},{w}) = T \wedge {\mathsf {mode}}({s^\prime },{w}) = update \nonumber \\&\wedge {\phi }({s},{w}) = {\phi }({s^\prime },{w}) \wedge \forall u :u \ne w \rightarrow \mathsf {stable}_{u}(s,s^\prime ) \nonumber \\&\wedge \mathsf {count}(s^\prime ) = \mathsf {count}(s)+1 \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = \frac{1 - \mu }{|\mathsf {init}_{T}(s)|}. \end{aligned}$$
(7)
$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and there is a } w \text { s.t.} \nonumber \\&{\mathsf {mode}}({s},{w}) = start \wedge {\phi }({s},{w}) = T \wedge {\mathsf {mode}}({s^\prime },{w}) = update \nonumber \\&\wedge {\phi }({s},{w}) = {\phi }({s^\prime },{w}) \wedge \forall u :u \ne w \rightarrow \mathsf {stable}_{u}(s,s^\prime ) \nonumber \\&\wedge \mathsf {count}(s^\prime ) = \mathsf {count}(s) \nonumber \\ \text {then }&{\mathbf {P}}(s, s^\prime ) = \frac{\mu }{|\mathsf {init}_{T}(s)|}. \end{aligned}$$
(8)

The final case 8, where all oscillators update their clock values simultaneously, is given by the following formula. It requires that all oscillators have finished their computation, regardless of whether they fired, and that both the counter and the oscillators reset their mode to start after the transition.

$$\begin{aligned} \text {If }&{\mathsf {mode}}({s},{\mathsf {env}}) = update \text { and } {\mathsf {mode}}({s^\prime },{\mathsf {env}}) = start \text { and }\nonumber \\&\text {for all } u \text { we have } {\mathsf {mode}}({s},{u}) = update \wedge {\mathsf {mode}}({s^\prime },{u}) = start \wedge F_ update \nonumber \\ \text {then }&{\mathbf {P}}(s,s^\prime ) = 1 . \end{aligned}$$
(9)

The formula \(F_ update \) is an abbreviation for the conjunction of the following four conditions, which model the update of the phases of the oscillators, according to the phase response function. Observe that the phases of the oscillators have not been updated by the previously defined transitions. Hence, we now update the phases of all oscillators at once.

$$\begin{aligned}&\forall u :{\phi }({s},{u}) = T \rightarrow \nonumber \\&\quad {\phi }({s^\prime },{u}) =1 \end{aligned}$$
(9a)
$$\begin{aligned}&\forall u :{\phi }({s},{u}) < T \wedge {\phi }({s},{u}) \leqslant R \rightarrow \nonumber \\&\quad {\phi }({s^\prime },{u}) = {\phi }({s},{u}) + 1 \end{aligned}$$
(9b)
$$\begin{aligned}&\forall u :{\phi }({s},{u}) < T \wedge {\phi }({s},{u}) > R \;\wedge \nonumber \\&\;{\phi }({s},{u}) + {\varDelta }({\phi }({s},{u}), \mathsf {count}(s),\epsilon ) +1 \leqslant T \rightarrow \nonumber \\&\quad {\phi }({s^\prime },{u}) = {\phi }({s},{u}) + {\varDelta }({\phi }({s},{u}), \mathsf {count}(s),\epsilon ) +1 \end{aligned}$$
(9c)
$$\begin{aligned}&\forall u :{\phi }({s},{u}) < T \wedge {\phi }({s},{u})> R \;\wedge \nonumber \\&\;{\phi }({s},{u}) + {\varDelta }({\phi }({s},{u}), \mathsf {count}(s), \epsilon ) +1 > T \rightarrow \nonumber \\&\quad {\phi }({s^\prime },{u}) = 1 . \end{aligned}$$
(9d)

In this formula, (9a) handles the simple case of firing oscillators, while (9b) defines the behaviour of oscillators within their refractory period. The formulas (9c) and (9d) reflect the two cases where oscillators are perturbed, either not exceeding their oscillation cycle, or firing, respectively. For example, in the transition from state (b) to (c) in the figure, oscillators 1 to 3 satisfy clause (9c) (where \({\varDelta }({\varPhi }, 0, 0.3) = 0\) by definition for all phases \({\varPhi }\)), while oscillator 4 is updated due to clause (9b). In the transition from (g) to (h), however, oscillator 1 is again updated due to clause (9c), oscillator 2 due to clause (9a), oscillator 3 due to (9d) and oscillator 4 again due to (9b).
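The four clauses can be condensed into a single update function. The following is a minimal sketch, assuming the running linear phase response with round-half-up rounding and the parameters of Fig. 1; it is not part of the formal model.

```python
def delta(phi, alpha, eps):
    # assumed linear phase response with round-half-up rounding
    return int(phi * alpha * eps + 0.5)

def next_phase(phi, count, T, R, eps):
    """New phase of an oscillator after one round, following (9a)-(9d)."""
    if phi == T:                           # (9a) the oscillator fired: reset
        return 1
    if phi <= R:                           # (9b) refractory: evolve by one step
        return phi + 1
    perturbed = phi + delta(phi, count, eps) + 1
    if perturbed <= T:                     # (9c) perturbed, but below threshold
        return perturbed
    return 1                               # (9d) perturbed into firing: reset

# transition (g) to (h) in Fig. 1 (T = 9, R = 2, one successful firing seen):
assert next_phase(4, 1, 9, 2, 0.3) == 6   # oscillator 1, clause (9c)
assert next_phase(9, 1, 9, 2, 0.3) == 1   # oscillator 2, clause (9a)
assert next_phase(7, 1, 9, 2, 0.3) == 1   # oscillator 3, clause (9d)
assert next_phase(2, 1, 9, 2, 0.3) == 3   # a refractory oscillator, clause (9b)
```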

Finally, we define the transitions from the initial state, that form a distribution over all possible initial configurations for the network. As explained above, states where any component (i.e., an oscillator or the environment) is in the mode update are considered intermediate states. Hence, we only allow for transitions from the initial state to states, where every component is in the start mode, and furthermore, the counter of the environment is set to 0. Let us denote this set of states by \({\mathcal {S}}^\prime \), i.e.,

$$\begin{aligned} {\mathcal {S}}^\prime&= \{ s\in {\mathcal {S}}\mid \mathsf {count}(s) = 0 \wedge {\mathsf {mode}}({s},{\mathsf {env}}) = start \wedge \forall u :{\mathsf {mode}}({s},{u}) = start \} . \end{aligned}$$

Then, for each state \(s\in {\mathcal {S}}^\prime \), we have

$$\begin{aligned} {\mathbf {P}}(s_I, s)&= \frac{1}{|{\mathcal {S}}^\prime |} . \end{aligned}$$
(10)

As stated above, this model directly describes a fully-connected network of N oscillators. It would certainly be possible to define such a model as a parallel composition of single oscillators, but we argue in the following that such a definition would not increase the readability of the definitions, and would even obscure the behaviour of the network. In particular, the probability for an oscillator to end up in a certain state after such a step depends on the overall number of oscillators in the system. For example, consider the third state shown in the example network of Fig. 1. That is, oscillator 2 is about to fire and oscillator 4 is still in the refractory period. If we added another oscillator, which we name “5”, to the network, the possible transitions from this state depend on the phase of this oscillator. If it is not at the end of its cycle, the only outgoing transitions are due to oscillator 2 firing (and either succeeding or failing). However, oscillator 5 may be perturbed enough to fire, thus increasing the chance of oscillator 1 being perturbed enough to fire as well. Similarly, if oscillator 5 is also at the end of its cycle, the chance of oscillators 1 and 3 being activated increases, depending on the perturbation function. In the final transition of the sequence (i.e., after all oscillators changed their mode from \( start \) to \( update \)), the resulting state will be different from the corresponding state in the original model with four oscillators. Even in this small example, the necessary case distinctions for the definition of a parallel composition grow quite large, and would still not allow us to refer to already existing transitions, but require the creation of completely new ones.

With this model, we could begin to analyse the synchronisation behaviour with respect to different phase response functions or broadcast failure probabilities. However, the state space of the model increases exponentially with the number of oscillators, which makes an analysis beyond small numbers infeasible. To overcome this restriction, we increase the level of abstraction as presented in the next section.

5 Population model

In this section, we define a population model of a network of pulse-coupled oscillators for parameters as defined in Sect. 4.1 as \({\mathcal {P}}= ({\varDelta }, N ,T, R, \epsilon , \mu )\). Oscillators in our model have identical dynamics, and two oscillators are indistinguishable if they share the same phase. That is, we can reason about groups of oscillators, instead of individuals. We therefore encode the global state of the model as a tuple \(\langle k_1, \ldots , k_T \rangle \) where each \(k_{\varPhi }\) is the number of oscillators sharing a phase value of \({\varPhi }\). The population model does not account for the introduction of additional oscillators to a network, or the loss of existing coupled oscillators. That is, the population N remains constant.

Definition 7

A global state of a population model \({\mathcal {P}}= ({\varDelta }, N ,T, R, \epsilon , \mu )\) is a T-tuple \(\sigma \in \{0, \ldots , N\}^T\), where \(\sigma = \langle k_1, \ldots , k_T\rangle \) and \(\sum _{{\varPhi }= 1}^T k_{\varPhi }= N\). The set of all global states of \({\mathcal {P}}\) is \({\varGamma }({\mathcal {P}})\), or simply \({\varGamma }\) when \({\mathcal {P}}\) is clear from the context.

Example 1

Figure 2 shows four global states for an instantiated population model of \(N = 8\) oscillators with \(T = 10\) discrete values for their phase and a refractory period of length \(R = 2\). We use the linear phase response function \({\varDelta }({\varPhi }, \alpha , \epsilon ) = [{\varPhi }\cdot \alpha \cdot \epsilon ]\) where \([\cdot ]\) denotes rounding to the closest integer. Furthermore, let \(\epsilon = 0.115\). For example \(\sigma _0 = \langle 0, 0, 2, 1, 0, 0, 5, 0, 0, 0 \rangle \) is the global state where two oscillators have a phase of three, one oscillator has a phase of four, and five oscillators have a phase of seven. The starred node indicates the number of oscillators with phase ten that will fire in the next moment in time, while the shaded nodes indicate oscillators with phases that lie within the refractory period (one and two). If no oscillators have some phase \({\varPhi }\) then we omit the 0 in the corresponding node. Observe that, while going from \(\sigma _{i-1}\) to \(\sigma _{i}\) (\(1 \leqslant i \leqslant 3\)), the oscillator phases increase by one. In the next section, we will explain how transitions between these global states are made. Note that directional arrows indicate cyclic direction, and do not represent transitions.
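A global state is simply a histogram of phases. As a small illustration (the function name is ours, not from the model), the encoding of \(\sigma _0\) from Example 1 can be built and checked as follows:

```python
from collections import Counter

def global_state(phases, N, T):
    # encode a list of individual oscillator phases as the tuple <k_1, ..., k_T>
    assert len(phases) == N and all(1 <= p <= T for p in phases)
    counts = Counter(phases)
    return tuple(counts.get(phi, 0) for phi in range(1, T + 1))

# two oscillators at phase 3, one at phase 4, five at phase 7:
sigma0 = global_state([3, 3, 4, 7, 7, 7, 7, 7], N=8, T=10)
assert sigma0 == (0, 0, 2, 1, 0, 0, 5, 0, 0, 0)
assert sum(sigma0) == 8    # the population size is preserved by the encoding
```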

Fig. 2 Evolution of the global state over four discrete time steps

With every state \(\sigma \in {\varGamma }\) we associate a non-empty set of failure vectors, where each failure vector is a tuple of broadcast failures that could occur in \(\sigma \).

Definition 8

A failure vector is a T-tuple \(F= \langle f_1, \dots , f_T\rangle \in (\{0, \ldots , N\} \cup \{\star \})^T\), where \(f_i = \star \) implies \(f_j =\star \) for all \(1 \leqslant j \leqslant i\). We denote the set of all possible failure vectors by \({\mathcal {F}}\).

Given a failure vector \(F= \langle f_1, \ldots , f_T \rangle \), \(f_{\varPhi }\in \{0, \ldots , N\}\) indicates the number of broadcast failures that occur for all oscillators with a phase of \({\varPhi }\). If \(f_{\varPhi }= \star \) then no oscillators with a phase of \({\varPhi }\) fire. Semantically, \(f_{\varPhi }= 0\) and \(f_{\varPhi }= \star \) differ in that the former indicates that all (if any) oscillators with phase \({\varPhi }\) fire and no broadcast failures occur, while the latter indicates that all (if any) oscillators with a phase of \({\varPhi }\) do not fire. If no oscillators fire at all in a global state then we have only one possible failure vector, namely \(\{\star \}^T\).
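The condition of Definition 8 amounts to requiring that the \(\star \) entries form a prefix of the vector. A small validity check (an illustrative sketch; the string "*" stands in for \(\star \)):

```python
STAR = "*"   # stands in for the paper's star symbol

def is_failure_vector(F, N):
    """Definition 8: f_i = * forces f_j = * for all j <= i, so the * entries
    must form a (possibly empty) prefix of <f_1, ..., f_T>."""
    seen_number = False
    for f in F:                       # F = (f_1, ..., f_T)
        if f == STAR:
            if seen_number:           # a * after a numeric entry is invalid
                return False
        elif 0 <= f <= N:
            seen_number = True
        else:
            return False
    return True

# the failure vector used later in Example 2 (N = 8, T = 10):
assert is_failure_vector((STAR,) * 6 + (1, 0, 0, 0), N=8)
# the vector for a state in which no oscillator fires:
assert is_failure_vector((STAR,) * 10, N=8)
# a * above a numeric entry violates the prefix condition:
assert not is_failure_vector((0,) + (STAR,) * 9, N=8)
```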

5.1 Transitions

In Sect. 5.2 we will describe how we can calculate the set of all possible failure vectors for a global state, and thereby identify all of its successor states. However we must first show how we can calculate the single successor state of a global state \(\sigma \), given some failure vector \(F\).

Absorptions Since we are considering a fully connected network of oscillators, two oscillators sharing the same phase will have their phase updated to the same value in the next time step. They will always perceive the same number of other oscillators firing. Therefore, for each phase \({\varPhi }\) we define the function \(\alpha ^{\varPhi }:{\varGamma }\times {\mathcal {F}}\rightarrow \{0,\dots ,N\}\), where \(\alpha ^{\varPhi }(\sigma , F)\) is the number of oscillators with a phase greater than \({\varPhi }\) perceived to be firing by oscillators with phase \({\varPhi }\), in some global state, incorporating the broadcast failures defined in the failure vector \(F\). This allows us to encode the aforementioned chain reactions of firing oscillators. Note that our encoding of chain reactions results in a global semantics that differs from typical parallelisation operations, for example, the construction of the cross product of the individual oscillators. Observe that, in the concrete model of Sect. 4.2, we modelled such a behaviour by case 4.

Given a global state \(\sigma = \langle k_1, \ldots , k_T \rangle \) and a failure vector \(F= \langle f_1, \ldots , f_T \rangle \), the following mutually recursive definitions show how we calculate the values \(\alpha ^1(\sigma , F),\ldots ,\alpha ^T(\sigma , F)\), and how functions introduced in Sect. 4.1 are modified to indicate the update in phase, and firing, of all oscillators sharing the same phase \({\varPhi }\). Observe that to calculate any \(\alpha ^{\varPhi }(\sigma , F)\) we only refer to definitions for phases greater than \({\varPhi }\) and the base case is \({\varPhi }= T\), that is, values are computed from T down to 1. The function \(\mathrm {ref}\) is the refractory function as defined in Sect. 4.1.

$$\begin{aligned} update ^{\varPhi }(\sigma , F)&= 1 + \mathrm {ref}({\varPhi }, {\varDelta }({\varPhi }, \alpha ^{\varPhi }(\sigma , F), \epsilon )) \end{aligned}$$
(11)
$$\begin{aligned} fire ^{\varPhi }(\sigma , F)&= update ^{\varPhi }(\sigma , F) > T \end{aligned}$$
(12)
$$\begin{aligned} \alpha ^{\varPhi }(\sigma , F)&= {\left\{ \begin{array}{ll} 0 &{} \text {if } {\varPhi }{=}T\\ \alpha ^{{\varPhi }{+}1}(\sigma {,}F){+}k_{{\varPhi }{+}1}{-}f_{{\varPhi }{+}1} &{} \text {if } {\varPhi }{<}T, f_{{\varPhi }{+}1}{\ne }\star \text { and } fire ^{{\varPhi }{+}1}(\sigma {,}F) \\ \alpha ^{{\varPhi }+ 1}(\sigma , F) &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(13)
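The recursion can be evaluated in a single downward sweep. The sketch below assumes the linear phase response of the running example, round-half-up for \([\cdot ]\), and a refractory function \(\mathrm {ref}({\varPhi }, \delta ) = {\varPhi }\) if \({\varPhi }\leqslant R\) and \({\varPhi }+ \delta \) otherwise; these are our reading of the definitions in Sect. 4.1, not a verbatim implementation.

```python
STAR = "*"

def delta(phi, alpha, eps):
    return int(phi * alpha * eps + 0.5)      # assumed linear response

def ref(phi, d, R):
    return phi if phi <= R else phi + d      # assumed refractory function

def alphas(sigma, F, T, R, eps):
    """Evaluate alpha^1, ..., alpha^T by the mutual recursion (11)-(13).
    sigma and F are 0-indexed, so k_phi = sigma[phi-1] and f_phi = F[phi-1]."""
    alpha = [0] * (T + 1)                    # alpha[T] = 0 is the base case
    for phi in range(T - 1, 0, -1):
        above = phi + 1
        fires = 1 + ref(above, delta(above, alpha[above], eps), R) > T  # (12)
        if F[above - 1] != STAR and fires:
            alpha[phi] = alpha[above] + sigma[above - 1] - F[above - 1]
        else:
            alpha[phi] = alpha[above]
    return alpha

# Example 2 data: sigma_3 = <0,0,0,0,0,2,1,0,0,5>, F = <*,*,*,*,*,*,1,0,0,0>
a = alphas((0, 0, 0, 0, 0, 2, 1, 0, 0, 5),
           (STAR,) * 6 + (1, 0, 0, 0), T=10, R=2, eps=0.115)
assert a[9] == a[8] == a[7] == 5             # matches the text's calculation
assert a[1] == 5                             # the chain reaction stops below 7
```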

Transition function We now define the transition function that maps phase values to their updated values in the next time step. Note that since we no longer distinguish different oscillators with the same phase we only need to calculate a single value for their evolution and perturbation.

Definition 9

The phase transition function \(\tau : {\varGamma }\times \{1, \ldots , T\} \times {\mathcal {F}}\rightarrow {\mathbb {N}}\) maps a global state \(\sigma \), a phase \({\varPhi }\), and some possible failure vector \(F\) for \(\sigma \), to the updated phase in the next discrete time step, with respect to the broadcast failures defined in \(F\), and is defined as

$$\begin{aligned} \tau (\sigma , {\varPhi }, F) = {\left\{ \begin{array}{ll} 1 &{} \text {if } fire ^{\varPhi }(\sigma , F) \\ update ^{\varPhi }(\sigma , F) &{} \text {otherwise}. \\ \end{array}\right. } \end{aligned}$$
(14)

Let \({\mathcal {U}}_{\varPhi }(\sigma , F)\) be the set of phase values \({\varPsi }\) where all oscillators with phase \({\varPsi }\) in \(\sigma \) will have their phase updated to \({\varPhi }\) in the next time step, with respect to the broadcast failures defined in \(F\). Formally,

$$\begin{aligned} {\mathcal {U}}_{\varPhi }(\sigma , F) = \{{\varPsi }\mid {\varPsi }\in \{1,\ldots ,T\} \wedge \tau (\sigma , {\varPsi }, F) = {\varPhi }\}. \end{aligned}$$
(15)

We can now calculate the successor state of a global state \(\sigma \) and define how the model evolves over time.

Definition 10

The successor function maps a global state \(\sigma \) and a failure vector \(F\) to the state \(\sigma ^\prime = \langle k_1^\prime , \ldots , k_T^\prime \rangle \), where \(k_{\varPhi }^\prime {=}\sum _{{\varPsi }\in {\mathcal {U}}_{\varPhi }(\sigma , F)} k_{\varPsi }\) for \(1 \leqslant {\varPhi }\leqslant T\).

Example 2

Recall that the perturbation function of our example was given as \({\varDelta }({\varPhi }, \alpha , \epsilon ) = [{\varPhi }\cdot \alpha \cdot \epsilon ]\), where \([\cdot ]\) denotes rounding and \(\epsilon = 0.115\). Consider the global state \(\sigma _2\) of Fig. 3, where no oscillators will fire since \(k_{10} = 0\). We therefore have one possible failure vector for \(\sigma _2\), namely \(F= \{\star \}^{10}\). Since no oscillators fire, the dynamics of the oscillators are determined solely by their standalone evolution, and all oscillators simply increase their phase by 1 in the next time step. Now consider the global state \(\sigma _3\) and \(F= \langle \star , \star , \star , \star , \star , \star , 1, 0, 0, 0 \rangle \), a possible failure vector for \(\sigma _3\), indicating that oscillators with phases of 7 to 10 will fire and one broadcast failure will occur for the single oscillator that will fire with phase 7. Here a chain reaction occurs, as the perturbation induced by the firing of the 5 oscillators causes the single oscillator with a phase of 7 to also fire. A broadcast failure occurs when this single oscillator fires, and the perturbation of the 5 firing oscillators is insufficient to cause the 2 oscillators with a phase of 6 to also fire. In the next state the oscillator with phase 7 has been absorbed by the group of the 5 oscillators that had phase 10.

More explicitly, since \( fire ^{10}(\sigma _3, F)\) holds we have that \(\alpha ^{9}(\sigma _3, F)\) = \(\alpha ^{10}(\sigma _3, F) + k_{10} - f_{10} = 0 + 5 - 0 = 5\). Now, since \({\varDelta }(9, 5, 0.115) = [9 \cdot 5 \cdot 0.115] = [5.175] = 5\), we have \( update ^9(\sigma _3, F) = 15 > 10\), and thus, \( fire ^{9}\) holds. Hence, we have that \(\alpha ^{8}(\sigma _3, F) = \alpha ^{9}(\sigma _3, F) + k_{9} - f_{9} = 5 + 0 - 0 = 5\), and similarly, due to \({\varDelta }(8, 5, 0.115) =5\), \( fire ^{8}\) holds. That is, we have that \(\alpha ^{7}(\sigma _3, F) = \alpha ^{8}(\sigma _3, F) + k_{8} - f_{8} = 5 + 0 - 0 = 5\). We then continue calculating \(\alpha ^{{\varPhi }}(\sigma _3, F)\) for \(6 \geqslant {\varPhi }\geqslant 1\), noting that \({\varDelta }(6,5,0.115) = [3.45] = 3\). Hence \( fire ^{6}(\sigma _3, F)\) does not hold, and we obtain \(\alpha ^{1}(\sigma _3, F) = \alpha ^{2}(\sigma _3, F) = \alpha ^{3}(\sigma _3, F) = \alpha ^{4}(\sigma _3, F) = \alpha ^{5}(\sigma _3, F) = \alpha ^{6}(\sigma _3, F) = \alpha ^{7}(\sigma _3, F) = 5\). We conclude that \({\mathcal {U}}_{1}(\sigma _3, F) = \{10, 9, 8, 7\}\), \({\mathcal {U}}_{10}(\sigma _3, F) = \{6, 5\}\), \({\mathcal {U}}_{9}(\sigma _3, F) = \{4, 3\}\), and \({\mathcal {U}}_{{\varPhi }}(\sigma _3, F) = \emptyset \) for \(9> {\varPhi }> 3\). Since \(R = 2\) we have that \({\mathcal {U}}_3(\sigma _3, F) = \{2\}\) and \({\mathcal {U}}_2(\sigma _3, F) = \{1\}\). We calculate the successor of \(\sigma _3\) as \(\langle 6, 0, 0, 0, 0, 0, 0, 0, 0, 2 \rangle \).
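Putting the recursion for \(\alpha ^{\varPhi }\), the function \(\tau \), and Definition 10 together, the step from \(\sigma _3\) to its successor can be checked mechanically. As before, this sketch assumes the linear phase response with round-half-up rounding and \(\mathrm {ref}({\varPhi }, \delta ) = {\varPhi }\) for \({\varPhi }\leqslant R\) and \({\varPhi }+ \delta \) otherwise:

```python
STAR = "*"

def delta(phi, alpha, eps):
    return int(phi * alpha * eps + 0.5)       # assumed linear response

def ref(phi, d, R):
    return phi if phi <= R else phi + d       # assumed refractory function

def successor(sigma, F, T, R, eps):
    """Successor of sigma under failure vector F (Definitions 9 and 10)."""
    # alpha^phi via (13), evaluated from T down to 1 (0-indexed sigma and F)
    alpha = [0] * (T + 1)
    for phi in range(T - 1, 0, -1):
        fires = 1 + ref(phi + 1, delta(phi + 1, alpha[phi + 1], eps), R) > T
        if F[phi] != STAR and fires:
            alpha[phi] = alpha[phi + 1] + sigma[phi] - F[phi]
        else:
            alpha[phi] = alpha[phi + 1]
    # tau per (14), accumulated into k'_phi per Definition 10
    succ = [0] * T
    for phi in range(1, T + 1):
        upd = 1 + ref(phi, delta(phi, alpha[phi], eps), R)   # (11)
        tau = 1 if upd > T else upd                          # (14)
        succ[tau - 1] += sigma[phi - 1]
    return tuple(succ)

sigma3 = (0, 0, 0, 0, 0, 2, 1, 0, 0, 5)
F = (STAR,) * 6 + (1, 0, 0, 0)
sigma4 = successor(sigma3, F, T=10, R=2, eps=0.115)
assert sigma4 == (6, 0, 0, 0, 0, 0, 0, 0, 0, 2)   # phase-7 oscillator absorbed
assert sum(sigma4) == 8                            # Lemma 1: N is preserved
```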

Fig. 3 Evolution of the global state over four discrete time steps

Lemma 1

The number of oscillators is invariant during transitions, i.e., the successor function only creates tuples that are states of the given model. Formally, let \(\sigma = \langle k_1, \ldots , k_T \rangle \) and \(\sigma ^\prime = \langle k_1^\prime , \ldots , k_T^\prime \rangle \) be two states of a model \({\mathcal {P}}\) such that \(\sigma ^\prime \) is the successor of \(\sigma \) under some possible failure vector \(F\) for \(\sigma \). Then \(\sum _{{\varPhi }= 1}^T k_{\varPhi }= \sum _{{\varPhi }= 1}^T k_{\varPhi }^\prime = N.\)

Proof

Observe that the range of the function \(\tau \) is bounded by T. By construction we can see that for any \(\sigma \), for any possible failure vector \(F\) for \(\sigma \), and for all \({\varPhi }\in \{1, \ldots , T\}\), we have that \(1 \leqslant \tau (\sigma , {\varPhi }, F) \leqslant T\). Hence for all \({\varPsi }\) with \(1 \leqslant {\varPsi }\leqslant T\), there is a \({\varPhi }\) such that \({\varPsi }\in {\mathcal {U}}_{\varPhi }(\sigma , F)\). This implies \(\bigcup _{{\varPhi }=1}^{T} {\mathcal {U}}_{\varPhi }(\sigma , F) = \{1, \ldots , T\}\). Furthermore, there cannot be more than one \({\varPhi }\) such that \({\varPsi }\in {\mathcal {U}}_{{\varPhi }}(\sigma , F)\), since \(\tau \) is functional. Now we have \(\sum _{{\varPhi }=1}^T k^\prime _{\varPhi }= \sum _{{\varPhi }=1}^T \sum _{{\varPsi }\in {\mathcal {U}}_{\varPhi }(\sigma , F)} k_{\varPsi }= \sum _{{\varPhi }=1}^T k_{\varPhi }= N.\) \(\square \)

5.2 Failure vector calculation

We construct all possible failure vectors for a global state by considering every group of oscillators in decreasing order of phase. At each stage we determine if the oscillators would fire. If they fire then we consider each outcome where any, all, or none of the firings result in a broadcast failure. We then add a corresponding value to a partially calculated failure vector and consider the next group of oscillators with a lower phase. If the oscillators do not fire then there is nothing left to do, since by Definition 5 we know that \({\varDelta }\) is increasing, therefore all oscillators with a lower phase will also not fire. We can then pad the partial failure vector with \(\star \) appropriately to indicate that no failure could happen since no oscillator fired.

Table 1 Construction of a possible failure vector for a global state \(\sigma _3 = \langle 0, 0, 0, 0, 0, 2, 1, 0, 0, 5 \rangle \)

Table 1 illustrates how a possible failure vector for global state \(\sigma _3\) in Fig. 3 is iteratively constructed. The first three columns respectively indicate the current iteration i, the global state \(\sigma _3\) with the currently considered oscillators underlined, and the elements of the failure vector \(F\) computed so far. The fourth column is \( true \) if the oscillators with phase \(T{+}1{-}i\) would fire given the broadcast failures in the partial failure vector. We must consider all outcomes of any or all firings resulting in broadcast failure. The final column therefore indicates whether the value added to the partial failure vector in the current iteration is the only possible value (\( false \)), or a choice from one of several possible values (\( true \)).

Initially we have an empty partial failure vector. At the first iteration there are 5 oscillators with a phase of 10. These oscillators will fire so we must consider each case where 0, 1, 2, 3, 4 or 5 broadcast failures occur. Here we choose 0 broadcast failures, which is then added to the partial failure vector. At iterations 2 and 3 the oscillators would have fired, but since there are no oscillators with a phase of 9 or 8 we only have one possible value to add to the partial failure vector, namely 0. At iteration 4 a single oscillator with a phase of 7 fires, and we choose the case where the firing resulted in a broadcast failure. In the final iteration oscillators with a phase of 6 do not fire, hence we can conclude that oscillators with phases less than 6 also do not fire, and can fill the partial failure vector appropriately with \(\star \).

Formally, we define a family of functions \( fail \) indexed by \({\varPhi }\), where each \( fail _{\varPhi }\) takes as parameters some global state \(\sigma \), and V, a vector of length \(T - {\varPhi }\). V represents all broadcast failures for all oscillators with a phase greater than \({\varPhi }\). The function \( fail _{\varPhi }\) then computes the set of all possible failure vectors for \(\sigma \) with suffix V. Here we use the notation \(v^\frown v^\prime \) to indicate vector concatenation.

Definition 11

We define \( fail _{\varPhi }: {\varGamma }\times \{0, \ldots , N\}^{T - {\varPhi }} \rightarrow {\mathbb {P}}((\{0, \ldots , N\} \cup \{\star \})^T)\), for \(1 \leqslant {\varPhi }\leqslant T\), as the family of functions indexed by \({\varPhi }\), where \(\sigma = \langle k_1, \ldots , k_T \rangle \) and

$$\begin{aligned} fail _{\varPhi }(\sigma , V)&= {\left\{ \begin{array}{ll} \textstyle \bigcup _{k = 0}^{k_{\varPhi }} fail _{{\varPhi }- 1}(\sigma , \langle k \rangle ^\frown V) &{} \text {if } 1 < {\varPhi }\leqslant T \text { and } fire ^{\varPhi }(\sigma , {\{\star \}^{\varPhi }}^\frown V) \\ \textstyle \bigcup _{k = 0}^{k_1} \left\{ \langle k \rangle ^\frown V \right\} &{} \text {if } {\varPhi }= 1 \text { and } fire ^1(\sigma , \langle \star \rangle ^\frown V) \\ \left\{ {\{\star \}^{\varPhi }}^\frown V \right\} &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

Observe that the result of \( fail _T\) is always a set of well defined failure vectors, since whenever \(\star \) is introduced into a failure vector at index \({\varPhi }\), all preceding indices are also filled with \(\star \), as required by Definition 8.
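To make the recursion of Definition 11 concrete, the following Python sketch enumerates all possible failure vectors for a global state. Since \( fire ^{\varPhi }\) depends on the perturbation function \({\varDelta }\) introduced earlier, the sketch abstracts it as a caller-supplied predicate; the function names, the tuple representation of states, and the use of `"*"` for \(\star \) are illustrative choices, not the paper's notation.

```python
def fail_vectors(sigma, fires):
    """Enumerate all possible failure vectors for `sigma` (Definition 11).

    sigma[i] is the number of oscillators with phase i + 1, so sigma[-1]
    counts those with phase T. `fires(phi, sigma, suffix)` stands in for
    fire^phi: it decides whether oscillators with phase `phi` would fire,
    given the failure counts already fixed for all higher phases.
    """
    T = len(sigma)

    def fail(phi, suffix):
        if fires(phi, sigma, suffix):
            k_phi = sigma[phi - 1]
            if phi == 1:
                # base case: choose 0..k_1 failures for the lowest phase
                return [(k,) + suffix for k in range(k_phi + 1)]
            # recursive case: branch on every possible failure count
            return [v for k in range(k_phi + 1)
                      for v in fail(phi - 1, (k,) + suffix)]
        # no firing at phase phi: by monotonicity of Delta, no lower phase
        # fires either, so pad all remaining positions with '*'
        return [("*",) * phi + suffix]

    return fail(T, ())
```

With a predicate under which only oscillators at phase \(T\) fire, the enumeration branches once on the number of failures among those oscillators and pads every other position with \(\star \), exactly as in the construction of Table 1.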

Definition 12

Given a global state \(\sigma \in {\varGamma }\), we define \({\mathcal {F}}_{\sigma }\), the set of all possible failure vectors for that state, as \({\mathcal {F}}_{\sigma } = fail _T(\sigma , \langle \rangle )\), and define \( next (\sigma )\), the set of all successor states of \(\sigma \), as the set containing the successor of \(\sigma \) under each \(F\in {\mathcal {F}}_{\sigma }\).

Note that for some global states \(| next (\sigma )| < |{\mathcal {F}}_{\sigma }|\), since two distinct failure vectors \(F, F' \in {\mathcal {F}}_{\sigma }\) with \(F\not = F'\) may yield the same successor state.

Given a global state \(\sigma \) and a failure vector \(F\in {\mathcal {F}}_{\sigma }\), we will now compute the probability of a transition being made to the corresponding successor state in the next time step. Recall that \(\mu \) is the probability with which a broadcast failure occurs. Firstly we define the probability mass function \(\mathsf {PMF}: \{0, \ldots , N\}^2 \rightarrow [0, 1]\), where \(\mathsf {PMF}(k, f)\) gives the probability of f broadcast failures occurring given that k oscillators fire, as \( \mathsf {PMF}(k, f) = \mu ^{f} (1 - \mu )^{k - f} {{k }\atopwithdelims (){f}}. \) We then denote by \(\mathsf {PFV}: {\varGamma }\times {\mathcal {F}}_\sigma \rightarrow [0, 1]\) the function mapping a possible broadcast failure vector \(F\) for \(\sigma \), to the probability of the failures in \(F\) occurring. That is,

$$\begin{aligned} \mathsf {PFV}(\langle k_1, \ldots , k_T \rangle , \langle f_1, \ldots , f_T \rangle ) = \prod _{{\varPhi }= 1}^T {\left\{ \begin{array}{ll} \mathsf {PMF}(k_{\varPhi }, f_{\varPhi }) &{} \text {if } f_{\varPhi }\ne \star \\ 1 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(16)
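Both functions translate directly into code. The following Python sketch (the names `pmf` and `pfv` are ours; states and failure vectors are modelled as tuples, with `"*"` standing for \(\star \)) computes Eq. (16):

```python
from math import comb, isclose, prod

def pmf(k, f, mu):
    # Binomial distribution: probability that exactly f of the k firing
    # oscillators suffer a broadcast failure, each independently with
    # probability mu.
    return comb(k, f) * mu**f * (1 - mu)**(k - f)

def pfv(sigma, F, mu):
    # Eq. (16): product of pmf over all phases whose entry is not "*";
    # positions marked "*" contribute a factor of 1 and are skipped.
    return prod(pmf(k, f, mu) for k, f in zip(sigma, F) if f != "*")
```

Applied to \(\sigma _3 = \langle 0, 0, 0, 0, 0, 2, 1, 0, 0, 5 \rangle \) and \(F= \langle \star , \ldots , \star , 1, 0, 0, 0 \rangle \) with \(\mu = 0.1\), this evaluates to \(0.1 \cdot 0.9^5 = 0.059049\), in agreement with the transition probability computed in Example 3 below.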

Lemma 2

For any global state \(\sigma \), \(\mathsf {PFV}\) is a discrete probability distribution over \({\mathcal {F}}_\sigma \). Formally, \(\sum _{F\in {\mathcal {F}}_\sigma } \mathsf {PFV}(\sigma , F) = 1\).

Proof

Given a global state \(\sigma = \langle k_1, \ldots , k_T \rangle \) we can construct a tree of depth T where each leaf node is labelled with a possible failure vector for \(\sigma \), and each node \({\varLambda }\) at depth \({\varPhi }\) is labelled with a vector of length \({\varPhi }\) corresponding to the last \({\varPhi }\) elements of a possible failure vector for \(\sigma \). We denote the label of a node \({\varLambda }\) by \(V({\varLambda })\). We iteratively construct the tree, starting with the root node, \( root \), at depth 0, which we label with the empty tuple \(\langle \rangle \). For each node \({\varLambda }\) at depth \(0 \leqslant {\varPhi }< T\) we construct the children of \({\varLambda }\) as follows:

  1.

    If oscillators with phase \({\varPhi }\) fire we define the sample space \({\varOmega } = \{0, \ldots , k_{\varPhi }\}\) to be a set of disjoint events, where each \(\omega \in {\varOmega }\) is the event where \(\omega \) broadcast failures occur, given that \(k_{\varPhi }\) oscillators fired. For each \(\omega \in {\varOmega }\) there is a child \({\varLambda }_\omega \) of \({\varLambda }\) with label \(\langle \omega \rangle ^\frown V({\varLambda })\), and we label the edge from \({\varLambda }\) to \({\varLambda }_\omega \) with \(\mathsf {PMF}(k_{\varPhi }, \omega )\).

  2.

    If oscillators with phase \({\varPhi }\) do not fire then \({\varLambda }\) has a single child \({\varLambda }_\star \) labelled with \(\langle \star \rangle ^\frown V({\varLambda })\), and we label the edge from \({\varLambda }\) to \({\varLambda }_\star \) with 1.

We denote the label of an edge from a node \({\varLambda }\) to its child \({\varLambda }^\prime \) by \(L({\varLambda }, {\varLambda }^\prime )\). For case 2 we can observe that if oscillators with phase \({\varPhi }\) do not fire then we know that oscillators with any phase \({\varPsi }< {\varPhi }\) will also not fire, since from Definition 5 we know that \({\varDelta }\) is an increasing function. Hence, all descendants of \({\varLambda }\) will also have a single child, with an edge labelled with 1, and each node is labelled with the label of its parent, prefixed with \(\langle \star \rangle \).

After constructing the tree we have a vector of length T associated with each leaf node, corresponding to a failure vector for \(\sigma \). The set \({\mathcal {F}}_\sigma \) of all possible failure vectors for \(\sigma \) is therefore the set of all vectors labelling leaf nodes. We denote by \({\mathsf {P}}^{\downarrow }({\varLambda })\) the product of all labels on edges along the path from \({\varLambda }\) back to the root. Given a global state \(\sigma = \langle k_1, \ldots , k_T \rangle \) and a failure vector \(F= \langle f_1, \ldots , f_T \rangle \in {\mathcal {F}}_\sigma \) labelling some leaf node \({\varLambda }\) at depth T, we can see that

$$\begin{aligned} {\mathsf {P}}^{\downarrow }({\varLambda }) = 1 \cdot \prod _{{\varPhi }=1}^T {\left\{ \begin{array}{ll} \mathsf {PMF}(k_{\varPhi }, f_{\varPhi }) &{} \text {if } f_{\varPhi }\ne \star \\ 1 &{} \text {otherwise} \end{array}\right. } = \mathsf {PFV}(\sigma , F). \end{aligned}$$

Let \(D^{\varPhi }\) denote the set of all nodes at depth \({\varPhi }\). We show \(\sum _{d \in D^{\varPhi }} {\mathsf {P}}^{\downarrow }(d) = 1\) by induction on \({\varPhi }\). For \({\varPhi }= 0\), i.e., \(D^{\varPhi }= \{ root \}\), the property holds by definition. Now assume that \(\sum _{d \in D^{\varPhi }} {\mathsf {P}}^{\downarrow }(d) = 1\) holds for some \(0 \leqslant {\varPhi }< T\). Let \({\varLambda }\) be some node in \(D^{\varPhi }\), and let \(C^{\varLambda }\) be the set of all children of \({\varLambda }\). Consider the following two cases: If oscillators with phase \({\varPhi }\) do not fire then \(|C^{\varLambda }| = 1\), and for the only \(c \in C^{\varLambda }\) we have that \(L({\varLambda }, c) = 1\). If oscillators with phase \({\varPhi }\) fire observe that \(\mathsf {PMF}\) is a probability mass function for a random variable defined on the sample space \({\varOmega } = \{0, \ldots , k_{\varPhi }\}\). In either case we can see that \(\sum _{c \in C^{\varLambda }} L({\varLambda }, c) = 1\). Note that \(D^{{\varPhi }+ 1} = \bigcup _{d \in D^{\varPhi }} C^d\), and recall that \(L(d, c) \cdot {\mathsf {P}}^{\downarrow }(d) = {\mathsf {P}}^{\downarrow }(c)\). Therefore,

$$\begin{aligned} \sum _{d \in D^{{\varPhi }+ 1}} {\mathsf {P}}^{\downarrow }(d) = \sum _{d \in D^{\varPhi }} \sum _{c \in C^d} L(d, c) \cdot {\mathsf {P}}^{\downarrow }(d) = \sum _{d \in D^{\varPhi }} \left( {\mathsf {P}}^{\downarrow }(d) \sum _{c \in C^d} L(d, c) \right) . \end{aligned}$$

Since \(\sum _{c \in C^d} L(d, c) = 1\) for each \(d \in D^{\varPhi }\), and from the induction hypothesis, we then have that

$$\begin{aligned} \sum _{d \in D^{\varPhi }} \left( {\mathsf {P}}^{\downarrow }(d) \sum _{c \in C^d} L(d, c) \right) =\sum _{d \in D^{\varPhi }}{\mathsf {P}}^{\downarrow }(d) = 1. \end{aligned}$$

We have already shown that \({\mathsf {P}}^{\downarrow }({\varLambda }) = \mathsf {PFV}(\sigma , F)\) for any leaf node \({\varLambda }\) labelled with a failure vector \(F\), and since the set of all labels for leaf nodes is \({\mathcal {F}}_\sigma \) we can conclude that

$$\begin{aligned} \sum _{F\in {\mathcal {F}}_\sigma } \mathsf {PFV}(\sigma , F) = \sum _{d \in D^T} {\mathsf {P}}^{\downarrow }(d) = 1. \end{aligned}$$

This proves the lemma. \(\square \)

Example 3

We consider again the global states \(\sigma _3 = \langle 0, 0, 0, 0, 0, 2, 1, 0, 0, 5 \rangle \) and \(\sigma _4 = \langle 6, 0, 0, 0, 0, 0, 0, 0, 0, 2 \rangle \), given in Fig. 3, of the population model instantiated in Example 1, and the failure vector \(F= \langle \star , \star , \star , \star , \star , \star , 1, 0, 0, 0 \rangle \) given in Example 2, noting that \(F\in {\mathcal {F}}_{\sigma _3}\), that \(\sigma _4\) is the successor of \(\sigma _3\) under \(F\), and that \(\mu = 0.1\). We calculate the probability of a transition being made from \(\sigma _3\) to \(\sigma _4\) as

$$\begin{aligned}&\mathsf {PFV}(\langle 0, 0, 0, 0, 0, 2, 1, 0, 0, 5 \rangle , \langle \star , \star , \star , \star , \star , \star , 1, 0, 0, 0 \rangle )\\&\quad = 1 \cdot 1 \cdot 1 \cdot 1 \cdot 1 \cdot 1 \cdot \mathsf {PMF}(1, 1) \cdot \mathsf {PMF}(0, 0) \cdot \mathsf {PMF}(0, 0) \cdot \mathsf {PMF}(5, 0)\\&\quad = (0.1^1 \cdot 0.9^0 \cdot 1) \cdot (1) \cdot (1) \cdot ( 0.1^0 \cdot 0.9^5 \cdot 1) = 0.059049 \end{aligned}$$

We now have everything we need to fully describe the evolution of the global state of a population model over time. An execution path of a population model \({\mathcal {P}}\) is an infinite sequence of global states \(\omega = \sigma _0 \sigma _1 \sigma _2 \sigma _3 \cdots \), where \(\sigma _0\) is called the initial state, and \(\sigma _{k + 1} \in next (\sigma _k)\) for all \(k \geqslant 0\).

5.3 Synchronisation

When all oscillators in a population model have the same phase in a global state we say that the state is synchronised. Formally, a global state \(\sigma = \langle k_1, \ldots , k_T \rangle \) is synchronised if, and only if, there is some \({\varPhi }\in \{1, \ldots , T\}\) such that \(k_{\varPhi }= N\), and hence \(k_{{\varPhi }^\prime } = 0\) for all \({\varPhi }^\prime \ne {\varPhi }\). We will often want to reason about whether some particular run \(\omega \) of a model leads to a global state that is synchronised. We say that a path \(\omega = \sigma _0 \sigma _1 \cdots \) synchronises if, and only if, there exists some \(k \geqslant 0\) such that \(\sigma _k\) is synchronised. Once a synchronised global state is reached any successor states will also be synchronised. Finally we can say that a model synchronises if, and only if, all runs of the model synchronise.
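The synchronisation predicate on global states is a one-line check. The following Python sketch (the name `synchronised` and the tuple representation of states are our own) tests whether a state vector concentrates the whole population on one phase:

```python
def synchronised(sigma):
    # A global state <k_1, ..., k_T> is synchronised iff some single phase
    # holds the entire population, i.e. some k_phi equals N = sum of all k.
    return any(k == sum(sigma) for k in sigma)
```

For instance, the state \(\sigma _3\) of Fig. 3 is not synchronised, whereas a state in which all \(N = 8\) oscillators share one phase is.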

5.4 Model construction

Given a population model \({\mathcal {P}}= ({\varDelta }, N ,T, R, \epsilon , \mu )\) we construct a DTMC \(D({\mathcal {P}}) = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) where L ranges over the singleton \(\{ synch \}\). We define the set of states \({\mathcal {S}}\) to be \({\varGamma }({\mathcal {P}}) \cup \{\sigma _I\}\), where \(\sigma _I\) is the initial state of the DTMC. For each \(\sigma = \langle k_1, \ldots , k_T \rangle \in {\varGamma }({\mathcal {P}})\), we set \(L(\sigma ) = \{ synch \}\) if there is some \(1 \leqslant i \leqslant T\) such that \(k_i = N\).

To calculate the likelihood of permutations of a population of oscillators we will make use of multinomial coefficients, for which we provide the standard definition.

Definition 13

The multinomial coefficient is an extension of the binomial coefficient that gives the number of ordered permutations of the elements of a multiset. Given a finite multiset \({\mathcal {M}}\), a permutation is an ordered arrangement of its elements, where each element appears a number of times equal to its multiplicity in \({\mathcal {M}}\). The number of permutations of \({\mathcal {M}}\) is given by

$$\begin{aligned} {{n}\atopwithdelims (){m_1, m_2, \ldots , m_i}} {=} \frac{n !}{m_1 ! m_2! \cdots m_i !} {=} {{m_1}\atopwithdelims (){m_1}} {{m_1 + m_2}\atopwithdelims (){m_2}} {\cdots } {{m_1 + m_2 + \ldots + m_i}\atopwithdelims (){m_i}}, \end{aligned}$$

where \(m_1, \ldots , m_i\) are the multiplicities of the elements of \({\mathcal {M}}\) and n is the sum of those multiplicities.

In the initial state all oscillators are unconfigured. That is, oscillators have not yet been assigned a value for their phase. For each \(\sigma = \langle k_1, \ldots , k_T \rangle \in {\mathcal {S}}\setminus \{\sigma _I\}\) we define

$$\begin{aligned} {\mathbf {P}}(\sigma _I, \sigma ) = \frac{1}{T^N} {{N}\atopwithdelims (){k_1, \ldots , k_T}} \end{aligned}$$
(17)

to be the probability of moving from \(\sigma _I\) to a state where \(k_i\) arbitrary oscillators are configured with the phase value i for \(1\leqslant i\leqslant T\). The multinomial coefficient defines the number of possible assignments of phases to distinct oscillators that result in the global state \(\sigma \). The fractional coefficient normalises the multinomial coefficient with respect to the total number of possible assignments of phases to all oscillators. In general, given an arbitrary set of initial configurations (global states) for the oscillators, the total number of possible phase assignments can be calculated by computing the sum of the multinomial coefficients for each configuration (global state) in that set. Since \({\varGamma }\) is the set of all possible global states, we have that

$$\begin{aligned} \displaystyle \sum _{\langle k_1, \ldots , k_T \rangle \in {\varGamma }} {{N}\atopwithdelims (){k_1, \ldots , k_T}} = T^N. \end{aligned}$$
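For small N and T the initial distribution of Eq. (17), and the identity above, can be checked by brute force. The following Python sketch (function names are our own; it enumerates global states as tuples, which is only feasible for small parameters) computes the multinomial coefficient of Definition 13 and the probability assigned to each configured initial state:

```python
from itertools import product
from math import factorial, isclose

def multinomial(ks):
    # Definition 13: (k_1 + ... + k_T)! / (k_1! ... k_T!)
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

def initial_distribution(N, T):
    # Eq. (17): P(sigma_I, sigma) = multinomial(N; k_1..k_T) / T^N for every
    # global state sigma = (k_1, ..., k_T) with k_1 + ... + k_T = N.
    states = [ks for ks in product(range(N + 1), repeat=T) if sum(ks) == N]
    return {ks: multinomial(ks) / T**N for ks in states}
```

Since the multinomial coefficients over all global states sum to \(T^N\), the returned probabilities sum to 1, confirming that Eq. (17) defines a distribution over \({\varGamma }\).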

We assign probabilities to the transitions as follows: for every \(\sigma \in {\mathcal {S}}\setminus \{\sigma _I\}\), we consider each \(F \in {\mathcal {F}}_\sigma \), and add \(\mathsf {PFV}(\sigma , F)\) to \({\mathbf {P}}(\sigma , \sigma ^\prime )\), where \(\sigma ^\prime \) is the successor of \(\sigma \) under \(F\). For every combination of \(\sigma \) and \(\sigma ^\prime \) where \(\sigma ^\prime \not \in next (\sigma )\) we set \({\mathbf {P}}(\sigma , \sigma ^\prime ) = 0\).

5.5 Model reduction

We now describe a reduction of the population model that results in a significant decrease in the size of the model, but is equivalent to the original model with respect to the unbounded-time reachability of synchronised states.

We first distinguish between states where one or more oscillators are about to fire, and states where no oscillators will fire at all. We refer to these states as firing states and non-firing states respectively. In the following, we fix a population model \({\mathcal {P}}\) and simply refer to the states by \({\varGamma }\), instead of \({\varGamma }({\mathcal {P}})\).

Definition 14

Given a population model \({\mathcal {P}}\), a global state \(\langle k_1, \ldots , k_T \rangle \in {\varGamma }\) is a firing state if, and only if, \(k_T > 0\). We denote by \({\varGamma }^{\mathsf {{F}}}\) the set of all firing states of \({\mathcal {P}}\), and denote by \({\varGamma }^{\mathsf {{NF}}}= {\varGamma }\setminus {\varGamma }^{\mathsf {{F}}}\) the set of all non-firing states of \({\mathcal {P}}\). We will again omit \({\mathcal {P}}\) if it is clear from the context.

The reduction is constructed by collapsing deterministic sequences of transitions. This allows non-firing states to be eliminated from the model. While this approach may seem simple, to the best of our knowledge such a reduction is not automatically applied by existing tools for the automatic analysis of such a model, for example the model checkers PRISM [34], Storm [18], and IscasMC [27]. In addition, collapsing sequences of deterministic transitions from the initial unconfigured state to firing states is not straightforward, and it is unclear how this could be easily inferred by automated tools.

Given a DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) let \(|{\mathbf {P}}|= |\{(t, t^\prime ) \in {\mathcal {S}}^2 \mid {\mathbf {P}}(t, t^\prime ) > 0\} |\) be the number of non-zero transitions in \({\mathbf {P}}\), and \(|D |= |{\mathcal {S}}|+ |{\mathbf {P}}|\) be the total number of states and non-zero transitions in D.

Theorem 1

For every population model \({\mathcal {P}}\) and its corresponding DTMC \(D({\mathcal {P}}) = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\), there is a reduced model \(D^\prime ({\mathcal {P}}) = ({\mathcal {S}}^\prime , \sigma _I, {\mathbf {P}}^\prime , L^\prime )\) where \(|D^\prime ({\mathcal {P}}) |< |D({\mathcal {P}}) |\) and unbounded-time reachability properties with respect to synchronised firing states in \(D({\mathcal {P}})\) are preserved in \(D'({\mathcal {P}})\). In particular, the states and transitions in \(D({\mathcal {P}})\) are reduced in \(D^\prime ({\mathcal {P}})\) such that \({\mathcal {S}}^\prime = {\mathcal {S}}\setminus {\varGamma }^{\mathsf {{NF}}}\) and

$$\begin{aligned} |{\mathcal {S}}^\prime |&= 1 + \frac{T^{(N-1)}}{(N-1)!}, \\ |{\mathbf {P}}^\prime |&\leqslant |{\mathbf {P}}|- 2 |{\varGamma }^{\mathsf {{NF}}}|\end{aligned}$$

where \(x^{(n)}\) is the rising factorial \(x^{(n)} = x(x + 1) \cdots (x + n - 1)\).

Observe that only unbounded-time reachability properties are preserved in the reduction of a model. In the unreduced model precisely T transitions correspond to one oscillation of a standalone oscillator. For the reduced model this is not the case, since the lengths of the removed deterministic transition sequences are not encoded in the reduction.

We now proceed to prove this theorem. To that end, we need some preliminary properties of non-firing states and their relation to firing states.

Lemma 3

Every non-firing state \(\sigma \in {\varGamma }^{\mathsf {{NF}}}\) has exactly one successor state, and in that state all oscillator phases have increased by 1.

Proof

Given a non-firing state \(\sigma = \langle k_1, \ldots , k_T \rangle \) observe that as \(k_T = 0\) there is only one possible failure vector for \(\sigma \), namely \(\{\star \}^T\). The set of all successor states of \(\sigma \) is then the singleton \(\{\langle 0, k_1, \ldots , k_{T-1} \rangle \}\). By construction we can then see that \( update ^{\varPhi }(\sigma , \{\star \}^T) = {\varPhi }+ 1\) and \({\mathcal {U}}_{\varPhi }(\sigma , \{\star \}^T) = \{{\varPhi }- 1\}\) for \(1 \leqslant {\varPhi }\leqslant T\). The single successor state is then given by \(\langle 0, k_1, \ldots , k_{T-1} \rangle \). \(\square \)

Corollary 1

A transition from any non-firing state is taken deterministically, since for any \(\sigma \in {\varGamma }^{\mathsf {{NF}}}\) we have \(\mathsf {PFV}(\sigma , \{\star \}^T) = 1\).

Reachable state reduction Given a path \(\omega = \sigma _0 \cdots \sigma _{n-1} \sigma _n\) where \(\sigma _i \in {\varGamma }^{\mathsf {{NF}}}\) for \(0< i < n\) and \(\sigma _0, \sigma _n \in {\varGamma }^{\mathsf {{F}}}\), we omit transitions \((\sigma _i, \sigma _{i + 1})\) for \(0 \leqslant i < n\), and instead introduce a direct transition from \(\sigma _0\), the first firing state, to \(\sigma _n\), the next firing state in the sequence. For any \(\sigma = \langle k_1, \ldots , k_T \rangle \in {\varGamma }\) let \(\delta _\sigma = \max \{{\varPhi }\mid k_{\varPhi }> 0 \text { and } 1 \leqslant {\varPhi }\leqslant T \}\) be the highest phase of any oscillator in \(\sigma \). The successor state of a non-firing state is then the state where all phases have increased by \(T - \delta _\sigma \). Observe that \(T - \delta _\sigma = 0\) for any \(\sigma \in {\varGamma }^{\mathsf {{F}}}\).

Definition 15

The deterministic successor function maps a state \(\sigma \in {\varGamma }\) to the next firing state reachable by taking \(T-\delta _\sigma \) deterministic transitions. Observe that for any firing state \(\sigma \) we have \(\delta _\sigma = T\), and hence that the deterministic successor of a firing state is the state itself.
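Under the tuple representation used earlier, the deterministic successor amounts to shifting the population upwards by \(T - \delta _\sigma \) phases. The following Python sketch (the name `det_successor` is our own) makes this explicit:

```python
def det_successor(sigma):
    # Definition 15 sketch: delta is the highest occupied phase; shift the
    # whole population up by T - delta phases. For a firing state delta = T,
    # so the shift is 0 and the state maps to itself.
    T = len(sigma)
    delta = max(phi for phi in range(1, T + 1) if sigma[phi - 1] > 0)
    shift = T - delta
    return (0,) * shift + tuple(sigma[: T - shift])
```

For example, with \(T = 6\) the non-firing state \(\langle 0, 2, 1, 0, 0, 0 \rangle \) has \(\delta _\sigma = 3\), and three deterministic steps lead to the firing state \(\langle 0, 0, 0, 0, 2, 1 \rangle \).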

We now update the definition for the set of all successor states for some global state \(\sigma \in {\varGamma }\) to incorporate the deterministic successor function.

Definition 16

Given a global state \(\sigma \in {\varGamma }\), we define the set of all successor states of \(\sigma \) to be the set obtained by applying the deterministic successor function to the successor of \(\sigma \) under each possible failure vector \(F\in {\mathcal {F}}_\sigma \).

Definition 17

Given a firing state \(\sigma \in {\varGamma }^{\mathsf {{F}}}\) let \( pred (\sigma )\) be the set of all non-firing predecessors of \(\sigma \), i.e., the non-firing states from which \(\sigma \) is reachable by taking some positive number of deterministic transitions.

We refer to all states \(\sigma ^\prime \in pred (\sigma )\) as deterministic predecessors of \(\sigma \).

Then given \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) with \({\mathcal {S}}= \{\sigma _I\} \cup {\varGamma }\), we define \({\mathcal {S}}^\prime = {\mathcal {S}}\setminus \bigcup _{\sigma \in {\varGamma }^{\mathsf {{F}}}} pred (\sigma )\) to be the reduction of \({\mathcal {S}}\) where all non-firing states from which a firing state can be reached deterministically are removed.

Lemma 4

For any \(D({\mathcal {P}}) = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) with \({\mathcal {S}}= {\varGamma }\cup \{\sigma _I\}\), the reduction \({\mathcal {S}}^\prime \) is equal to \({\varGamma }^{\mathsf {{F}}}\cup \{\sigma _I\}\).

Proof

Let \(P = \bigcup _{\sigma \in {\varGamma }^{\mathsf {{F}}}} pred (\sigma )\) be the set of all predecessors of firing states in \({\varGamma }^{\mathsf {{F}}}\). Since \({\mathcal {S}}= {\varGamma }\cup \{\sigma _I\}\) and \({\mathcal {S}}^\prime = {\mathcal {S}}\setminus P\) we can see that \({\mathcal {S}}^\prime = {\varGamma }^{\mathsf {{F}}}\cup \{\sigma _I\}\) if, and only if, \(P = {\varGamma }^{\mathsf {{NF}}}\). From Definition 17 it follows that \(P \subseteq {\varGamma }^{\mathsf {{NF}}}\). In addition, for any \(\sigma \in {\varGamma }^{\mathsf {{NF}}}\) there is some firing state \(\sigma ^\prime \in {\varGamma }^{\mathsf {{F}}}\) such that \(\sigma \in pred (\sigma ^\prime )\), hence \({\varGamma }^{\mathsf {{NF}}}\subseteq P\) and the lemma is proved. \(\square \)

Lemma 5

For a population model \({\mathcal {P}}= ({\varDelta }, N ,T, R, \epsilon , \mu )\) and its corresponding DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) with \({\mathcal {S}}= {\varGamma }\cup \{\sigma _I\}\), the number of states in the reduction of \({\mathcal {S}}\) is given by \( |{\mathcal {S}}^\prime |= 1 + \frac{T^{(N-1)}}{(N-1)!}. \)

Proof

Observe that there are \({{N + T - 1}\atopwithdelims (){N}}\) ways to assign T distinguishable phases to N indistinguishable oscillators [23]. Since \({\mathcal {S}}= {\varGamma }\cup \{\sigma _I\}\) and \({\varGamma }\) is the set of all possible configurations for oscillators we can see that \(|{\mathcal {S}}|= {{N + T - 1}\atopwithdelims (){N}} + 1\). For any non-firing state \(\sigma = \langle k_1, \ldots k_T \rangle \in {\varGamma }^{\mathsf {{NF}}}\) we know from Definition 7 that \(\sum _{{\varPhi }=1}^{T} k_{\varPhi }= N\) and from Definition 14 that \(k_T = 0\), so it must be the case that \(\sum _{{\varPhi }=1}^{T-1} k_{\varPhi }= N\). That is, there must be \({{N + T - 2}\atopwithdelims (){N}}\) ways to assign \(T - 1\) distinguishable phases to N indistinguishable oscillators, and so \(|{\varGamma }^{\mathsf {{NF}}}|= {{N + T - 2}\atopwithdelims (){N}}\). From Lemma 4 we know that \({\mathcal {S}}^\prime = {\mathcal {S}}\setminus {\varGamma }^{\mathsf {{NF}}}\) so it must be the case that \( |{\mathcal {S}}^\prime |= |{\mathcal {S}}\vert - |{\varGamma }^{\mathsf {{NF}}}|= 1 + {{N+T-1}\atopwithdelims (){N}} - {{N+T-2}\atopwithdelims (){N}} = 1 + \frac{T^{(N-1)}}{(N-1)!}. \) \(\square \)
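The closing identity of Lemma 5 can be checked numerically. The following Python sketch (function names are our own) computes \(|{\mathcal {S}}^\prime |\) from the two binomial counts in the proof and compares it against the rising-factorial form in the statement of Theorem 1:

```python
from math import comb, factorial

def rising_factorial(x, n):
    # x^(n) = x (x + 1) ... (x + n - 1)
    out = 1
    for i in range(n):
        out *= x + i
    return out

def reduced_state_count(N, T):
    # |S'| = |S| - |Gamma^NF|: all multisets of N phases over T values,
    # plus the initial state sigma_I, minus the non-firing states (k_T = 0).
    return comb(N + T - 1, N) + 1 - comb(N + T - 2, N)
```

By Pascal's rule the difference of the two binomial coefficients is \(\left( {\begin{array}{c}N+T-2\\ N-1\end{array}}\right) = T^{(N-1)}/(N-1)!\), which is what the assertions below confirm for sample parameters.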

Transition matrix reduction Here we describe the reduction in the number of non-zero transitions in the model. We illustrate how initial transitions to non-firing states are removed by using a simple example, and then describe how we remove transitions from firing states to any successor non-firing states.

Fig. 4 Five possible initial configurations in \({\mathcal {S}}\) for \(N = 2\), \(T = 6\)

Figure 4 shows five possible initial configurations \(\sigma _i, \ldots , \sigma _{i+4} \in {\mathcal {S}}\) for \(N = 2\) oscillators with \(T = 6\) values for phase, where a transition is taken from \(\sigma _I\) to each \(\sigma _k\) with probability \({\mathbf {P}}(\sigma _I, \sigma _k)\). Any infinite run of D where a transition is taken from \(\sigma _I\) to one of the configured states \(\sigma _i, \ldots , \sigma _{i+3}\) will pass through \(\sigma _{i+4}\), since all transitions \((\sigma _{i+k}, \sigma _{i+k+1})\) for \(0 \leqslant k \leqslant 3\) are taken deterministically. Also, observe that states \(\sigma _i, \ldots , \sigma _{i+3}\) are not in \({\mathcal {S}}^\prime \), since \(\sigma _{i+4}\) is reachable from each by taking some number of deterministic transitions. We therefore set the probability of moving from \(\sigma _I\) to \(\sigma _{i+4}\) in \({\mathbf {P}}^\prime \) to be the sum of the probabilities of moving from \(\sigma _I\) to \(\sigma _{i+4}\) and each of its predecessors in \({\mathbf {P}}\). Generally, given a state \(\sigma \in {\mathcal {S}}^\prime \) where \(\sigma \ne \sigma _I\), we set \({\mathbf {P}}^\prime (\sigma _I, \sigma ) = {\mathbf {P}}(\sigma _I, \sigma ) + \sum _{\sigma ^\prime \in pred (\sigma )} {\mathbf {P}}(\sigma _I, \sigma ^\prime )\).

We now define how we calculate the probability with which a transition is taken from a firing state to each of its possible successors. For each firing state \(\sigma \in {\mathcal {S}}^\prime \) we consider each possible successor \(\sigma ^\prime \) of \(\sigma \) and define \({\mathcal {F}}_{\sigma \rightarrow \sigma ^\prime }\) to be the set of all possible failure vectors for \(\sigma \) for which the successor of \(\sigma \) is \(\sigma ^\prime \). We then set the probability with which a transition from \(\sigma \) to \(\sigma ^\prime \) is taken to \( {\mathbf {P}}^\prime (\sigma , \sigma ^\prime ) = \sum _{F\in {\mathcal {F}}_{\sigma \rightarrow \sigma ^\prime }} \mathsf {PFV}(\sigma , F). \)

Lemma 6

For a population model \({\mathcal {P}}= ({\varDelta }, N ,T, R, \epsilon , \mu )\), the corresponding DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) with \({\mathcal {S}}= \{\sigma _I\} \cup {\varGamma }\), and its reduction \(D^\prime ({\mathcal {P}}) = ({\mathcal {S}}^\prime , \sigma _I, {\mathbf {P}}^\prime , L^\prime )\), the transitions in \({\mathbf {P}}\) are reduced in \({\mathbf {P}}^\prime \) such that \(|{\mathbf {P}}^\prime |\leqslant |{\mathbf {P}}|- 2 |{\varGamma }^{\mathsf {{NF}}}|\).

Proof

From Lemma 4 we know that \(|{\mathcal {S}}^\prime |= |{\mathcal {S}}\setminus {\varGamma }^{\mathsf {{NF}}}|\), and hence that \(|{\varGamma }^{\mathsf {{NF}}}|\) transitions from \(\sigma _I\) to non-firing states are not in \({\mathbf {P}}^\prime \), and from Lemma 3 we also know that there is one transition from each non-firing state to its unique successor state that is not in \({\mathbf {P}}^\prime \). Since no additional transitions are introduced in the reduction it is clear that \(|{\mathbf {P}}^\prime |\leqslant |{\mathbf {P}}|- 2 |{\varGamma }^{\mathsf {{NF}}}|\).

\(\square \)

Lemma 7

For every population model DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\), unbounded-time reachability properties with respect to synchronised firing states in D are preserved in its reduction \(D^\prime \).

Proof

We want to show that for every \(\bowtie \ \in \{<,\leqslant ,\geqslant ,>\}\) and every \(\lambda \in [0, 1]\), if \( \sigma _I\models \mathrm {P}_{\bowtie \lambda }[\mathrm {F}\ synch ] \) holds in D then it also holds in \(D^\prime \). From the semantics of PCTL over a DTMC we have

$$\begin{aligned} \sigma _I\models \mathrm {P}_{\bowtie \lambda }[\mathrm {F}\ synch ] \quad \Leftrightarrow \quad Pr \{\omega \in Paths ^D \mid \omega \models \mathrm {F}\ synch \} \bowtie \lambda . \end{aligned}$$

Therefore we need to show that

$$\begin{aligned} Pr ^D\{\omega \in Paths ^D \mid \omega \models \mathrm {F}\ synch \} = Pr ^{D^\prime }\{\omega ^\prime \in Paths ^{D^\prime } \mid \omega ^\prime \models \mathrm {F}\ synch \}, \end{aligned}$$

where \( Pr ^D\) and \( Pr ^{D^\prime }\) denote the probability measures with respect to the sets of infinite paths from \(\sigma _I\) in D and \(D^\prime \) respectively.

Given a firing state \(\sigma ^{\mathsf {{F}}}\in {\mathcal {S}}\) we denote by \( Paths ^{D}_{\sigma ^{\mathsf {{F}}}}\) the set of all infinite paths of D starting in \(\sigma _I\) where the first firing state reached along that path is \(\sigma ^{\mathsf {{F}}}\). All such sets for all firing states in \({\mathcal {S}}\) form a partition, such that \(\bigcup _{\sigma ^{\mathsf {{F}}}\in {\varGamma }^{\mathsf {{F}}}} Paths ^{D}_{\sigma ^{\mathsf {{F}}}} = Paths ^{D}\). That is, for all firing states \(\sigma ^{\mathsf {{F}}}, \sigma ^{\mathsf {{F}}\prime }\in {\mathcal {S}}\) where \(\sigma ^{\mathsf {{F}}}\ne \sigma ^{\mathsf {{F}}\prime }\) we have that \( Paths ^{D}_{\sigma ^{\mathsf {{F}}}} \cap Paths ^{D}_{\sigma ^{\mathsf {{F}}\prime }} = \emptyset \).

Now observe that any infinite path \(\omega \) of D can be written in the form \(\omega = \sigma _I\omega ^{\mathsf {{NF}}}_{1} \sigma ^{\mathsf {{F}}}_1 \omega ^{\mathsf {{NF}}}_{2} \sigma ^{\mathsf {{F}}}_2 \cdots \) where \(\sigma ^{\mathsf {{F}}}_i\) is the \(i^{th}\) firing state in the path and each \(\omega ^{\mathsf {{NF}}}_i = \sigma _i^1 \sigma _i^2 \cdots \sigma _i^{k_i}\) is a possibly empty sequence of \(k_i\) non-firing states. Then for every such path in D there is a corresponding path \(\omega ^\prime \) of \(D^\prime \) without non-firing states, of the form \(\omega ^\prime = \sigma _I\sigma ^{\mathsf {{F}}}_1 \sigma ^{\mathsf {{F}}}_2 \sigma ^{\mathsf {{F}}}_3 \cdots \), since for any i we have \(\sigma _i^j \in pred (\sigma ^{\mathsf {{F}}}_i)\) for all \(1 \leqslant j \leqslant k_i\). As only deterministic transitions have been removed in \(D^\prime \) we can see that \( Pr ^{D}_{\sigma ^{\mathsf {{F}}}_1 }\{\sigma ^{\mathsf {{F}}}_1 \omega ^{\mathsf {{NF}}}_{2} \sigma ^{\mathsf {{F}}}_2 \cdots \} = Pr ^{D^\prime }_{\sigma ^{\mathsf {{F}}}_1}\{\sigma ^{\mathsf {{F}}}_1 \sigma ^{\mathsf {{F}}}_2 \sigma ^{\mathsf {{F}}}_3 \cdots \}\). Hence, we only have to consider the finite paths from \(\sigma _I\) to \(\sigma ^{\mathsf {{F}}}_1\). To that end, observe that there are \(\left| pred (\sigma ^{\mathsf {{F}}}_1)\right| \) possible prefixes for each path from \(\sigma _I\) to \(\sigma ^{\mathsf {{F}}}_1\) where the initial transition is taken from \(\sigma _I\) to some non-firing predecessor of \(\sigma ^{\mathsf {{F}}}_1\), plus the single prefix where the initial transition is taken to \(\sigma ^{\mathsf {{F}}}_1\) itself. Overall there are exactly \(\left| pred (\sigma ^{\mathsf {{F}}}_1)\right| + 1\) distinct finite prefixes that have \(\omega ^\prime \) as their corresponding path in \(D^\prime \). We denote the set of these prefixes for a path \(\omega ^\prime \) in \(D^\prime \) by \( Pref (\omega ^\prime )\). 
Since the measure of each finite prefix extends to a measure over the set of infinite paths sharing that prefix, it is sufficient to show that the sum of the probabilities of these finite prefixes is equal to the probability of the unique prefix \(\sigma _I\sigma ^{\mathsf {{F}}}_1\) of \(\omega ^\prime \), that is \( Pr ^{D} Pref (\omega ^\prime ) = Pr ^{D^\prime } \{\sigma _I, \sigma ^{\mathsf {{F}}}_1\}\). We can then write

$$\begin{aligned} Pr ^{D} Pref (\omega ^\prime )&= {\mathbf {P}}(\sigma _I, \sigma ^{\mathsf {{F}}}_1) + \sum _{\sigma ^\prime \in pred (\sigma ^{\mathsf {{F}}}_1)} {\mathbf {P}}(\sigma _I, \sigma ^\prime ) \cdot 1^{k_{\sigma ^\prime }} \\&= {\mathbf {P}}(\sigma _I, \sigma ^{\mathsf {{F}}}_1) + \sum _{\sigma ^\prime \in pred (\sigma ^{\mathsf {{F}}}_1)} {\mathbf {P}}(\sigma _I, \sigma ^\prime ), \end{aligned}$$

where \(k_{\sigma ^\prime }\) is the number of deterministic transitions that lead from \(\sigma ^\prime \) to \(\sigma ^{\mathsf {{F}}}_1\) in D. Now recall that for any \(\sigma \in {\mathcal {S}}^\prime \setminus \{\sigma _I\}\) we have

$$\begin{aligned} {\mathbf {P}}^\prime (\sigma _I, \sigma ) = {\mathbf {P}}(\sigma _I, \sigma ) + \sum _{\sigma ^\prime \in pred (\sigma )} {\mathbf {P}}(\sigma _I, \sigma ^\prime ). \end{aligned}$$

So we have shown that \( Pr ^{D} Pref (\omega ^\prime ) = Pr ^{D^\prime } \{\sigma _I, \sigma ^{\mathsf {{F}}}_1\}\) and the lemma is proved. \(\square \)
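The preservation argument can likewise be illustrated numerically. The model below and the value-iteration helper are hypothetical, not the paper's construction; the non-firing state `n2` plays the role of a deterministic predecessor of the firing state `f2`, and its initial probability mass is redirected onto `f2` in the reduction.

```python
# Toy check that collapsing a deterministic non-firing chain preserves
# unbounded reachability probabilities.

def reach_prob(P, start, target, iters=2000):
    """Probability of eventually reaching `target`, by value iteration."""
    x = {s: (1.0 if s in target else 0.0) for s in P}
    for _ in range(iters):
        x = {s: (1.0 if s in target else
                 sum(p * x[t] for t, p in P[s].items()))
             for s in P}
    return x[start]

# D: the non-firing state "n2" leads deterministically into "f2".
D = {
    "I":    {"n2": 0.3, "f1": 0.3, "f2": 0.4},
    "n2":   {"f2": 1.0},
    "f1":   {"f1": 0.5, "dead": 0.5},
    "dead": {"dead": 1.0},
    "f2":   {"f2": 1.0},
}
# D': "n2" removed, its initial probability mass redirected onto "f2".
D_red = {
    "I":    {"f1": 0.3, "f2": 0.7},
    "f1":   {"f1": 0.5, "dead": 0.5},
    "dead": {"dead": 1.0},
    "f2":   {"f2": 1.0},
}

p, p_red = reach_prob(D, "I", {"f2"}), reach_prob(D_red, "I", {"f2"})
assert abs(p - p_red) < 1e-9          # Pr[F f2] is preserved
```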

Proof of Theorem 1

Follows from Lemmas 5 and 6 for the reduction of states and transitions respectively, and from Lemma 7 for the preservation of unbounded-time reachability properties. \(\square \)

5.6 Reward structures for reductions

We now show how a reward structure for a reduced model can be derived from any reward structure for the unreduced model, and prove that these reward structures are equivalent with respect to the reachability of synchronised firing states.

Theorem 2

For every population model \({\mathcal {P}}\) with corresponding DTMC \(D = ({\mathcal {S}}, \sigma _I, {\mathbf {P}}, L)\) and a reduction \(D^\prime = ({\mathcal {S}}^\prime , \sigma _I, {\mathbf {P}}^\prime , L^\prime )\) of D, and for every reward structure \({\mathcal {R}}= (R_s, R_t)\) for D, there is a reward structure \({\mathcal {R}}^\prime = (R_s^\prime , R_t^\prime )\) for \(D^\prime \) such that unbounded-time reachability reward properties with respect to synchronised firing states in D are preserved in \(D^\prime \).

Given any reward structure \({\mathcal {R}}= (R_s, R_t)\) on D we construct the corresponding reward structure \({\mathcal {R}}^\prime = (R_s^\prime , R_t^\prime )\) as follows:

  • There is no reward for the initial state, so we set \(R_s^\prime (\sigma _I) = 0\).

  • For every firing state \(\sigma ^{\mathsf {{F}}}\) in \({\mathcal {S}}\) with \(R_s(\sigma ^{\mathsf {{F}}}) = r\) we set \(R_s^\prime (\sigma ^{\mathsf {{F}}}) = r\).

  • For every pair of distinct firing states \(\sigma ^{\mathsf {{F}}}_1, \sigma ^{\mathsf {{F}}}_2 \in {\mathcal {S}}^\prime \), where there is a non-zero transition from \(\sigma ^{\mathsf {{F}}}_1\) to \(\sigma ^{\mathsf {{F}}}_2\) in \(D^\prime \), there is a (possibly empty) sequence \(\sigma ^{\mathsf {{NF}}}_1 \cdots \sigma ^{\mathsf {{NF}}}_k\) of k deterministic predecessors of \(\sigma ^{\mathsf {{F}}}_2\) in \({\mathcal {S}}\) such that \(k > 0\) implies \({\mathbf {P}}(\sigma ^{\mathsf {{F}}}_1, \sigma ^{\mathsf {{NF}}}_1) > 0\), \({\mathbf {P}}(\sigma ^{\mathsf {{NF}}}_k, \sigma ^{\mathsf {{F}}}_2) = 1\), and \({\mathbf {P}}(\sigma ^{\mathsf {{NF}}}_i, \sigma ^{\mathsf {{NF}}}_{i+1}) = 1\) for \(1 \leqslant i < k\). We set the reward for taking the transition from \(\sigma ^{\mathsf {{F}}}_1\) to \(\sigma ^{\mathsf {{F}}}_2\) in \(D^\prime \) to be the sum of the rewards that would be accumulated across that sequence by a path in D, formally

    $$\begin{aligned} R_t^\prime (\sigma ^{\mathsf {{F}}}_1, \sigma ^{\mathsf {{F}}}_2) = \mathrm {tot}_{{\mathcal {R}}}(\sigma ^{\mathsf {{F}}}_1 \sigma ^{\mathsf {{NF}}}_1 \cdots \sigma ^{\mathsf {{NF}}}_k \sigma ^{\mathsf {{F}}}_2). \end{aligned}$$
  • For every firing state \(\sigma ^{\mathsf {{F}}}\) in \({\mathcal {S}}^\prime \) there is a non-zero transition from the initial state \(\sigma _I\) to \(\sigma ^{\mathsf {{F}}}\) in \({\mathbf {P}}^\prime \). Therefore, all paths of \(D^\prime \) where \(\sigma ^{\mathsf {{F}}}\) is the first firing state along that path share the same prefix, namely \(\sigma _I, \sigma ^{\mathsf {{F}}}\). For paths of D this is not necessarily the case, since \(\sigma ^{\mathsf {{F}}}\) is the first firing state not only along the path where the initial transition is taken to \(\sigma ^{\mathsf {{F}}}\) itself, but also along any path where the initial transition is taken to a non-firing state from which a sequence of deterministic transitions leads to \(\sigma ^{\mathsf {{F}}}\) (that state is a deterministic predecessor of \(\sigma ^{\mathsf {{F}}}\)). We therefore set the reward along a path \(\omega ^\prime = \sigma _I\sigma ^{\mathsf {{F}}}_1 \sigma ^{\mathsf {{F}}}_2 \cdots \) for taking the initial transition to \(\sigma ^{\mathsf {{F}}}\) in \(D^\prime \) to be the sum of the total rewards accumulated along all distinct path prefixes of the form \(\sigma _I\omega ^{\mathsf {{NF}}}\sigma ^{\mathsf {{F}}}\), where \(\omega ^{\mathsf {{NF}}}\) is a possibly empty sequence of deterministic predecessors of \(\sigma ^{\mathsf {{F}}}\). Here the total reward for each prefix is weighted by the probability of taking the transitions along that sequence, and the sum is normalised by the total probability of taking any of these prefixes,

    $$\begin{aligned} R_t^\prime (\sigma _I, \sigma ^{\mathsf {{F}}}) =&\frac{ \sum _{\omega _{ pre } \in Pref (\omega ^\prime )} \mathrm {tot}_{{\mathcal {R}}}(\omega _{ pre }) Pr ^{D}\{\omega _{ pre }\} }{ Pr ^{D^\prime }\{\sigma _I\sigma ^{\mathsf {{F}}}_1\}} \end{aligned}$$
    (18)
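The construction of \(R_t^\prime (\sigma _I, \sigma ^{\mathsf {{F}}})\) in (18) can be sketched on a toy instance. All states, rewards and probabilities below are hypothetical, and the sketch assumes that \(\mathrm {tot}_{\mathcal {R}}\) sums, over each step of a finite path, the state reward of the source state plus the transition reward, as in (1).

```python
# Sketch of the initial-transition reward of Eq. (18): a probability-
# weighted sum of prefix rewards, normalised by the total probability
# of reaching the firing state "f" first.

def tot(path, Rs, Rt):
    """Accumulated reward over every step of a finite path."""
    return sum(Rs[s] + Rt[(s, t)] for s, t in zip(path, path[1:]))

# Two prefixes reach the firing state "f" first in D: directly
# (probability 0.1) or via the deterministic predecessor "n" (0.3);
# the remaining initial mass goes to other firing states.
Rs = {"I": 0.0, "n": 2.0, "f": 5.0}
Rt = {("I", "f"): 1.0, ("I", "n"): 1.0, ("n", "f"): 3.0}
prefixes = [(["I", "f"], 0.1), (["I", "n", "f"], 0.3)]

p_total = sum(p for _, p in prefixes)      # = P'(sigma_I, f)
Rt_init = sum(tot(w, Rs, Rt) * p for w, p in prefixes) / p_total
assert abs(Rt_init - 4.75) < 1e-12         # (1*0.1 + 6*0.3) / 0.4
```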

Proof of Theorem 2

We want to show that for every reward structure \({\mathcal {R}}\) for D and corresponding reward structure \({\mathcal {R}}^\prime \) for \(D^\prime \), every \(\bowtie \ \in \{<,\leqslant ,\geqslant ,>\}\) and every \(r \in {\mathbb {R}}\), if \(\sigma _I\models \mathrm {R}_{\bowtie r}[\mathrm {F}\ synch ]\) holds in D then it also holds in \(D^\prime \). Let \(V_{ Sat ({ synch })}\) and \(V^\prime _{ Sat ({ synch })}\) respectively denote the random variables over \( Paths ^D\) and \( Paths ^{D^\prime }\) whose expectations correspond to \({\mathcal {R}}\) and \({\mathcal {R}}^\prime \). From the semantics of PCTL over a DTMC we have

$$\begin{aligned} \sigma _I\models \mathrm {R}_{\bowtie r}[\mathrm {F}\ synch ]&\quad \Leftrightarrow E[V_{ Sat ({ synch })}] \bowtie r \\&\quad \Leftrightarrow \sum _{\omega \in Paths } V_{ Sat ({ synch })}(\omega ) Pr ^D \{\omega \} \bowtie r. \end{aligned}$$

Therefore, we need to show that

$$\begin{aligned} \sum _{\omega \in Paths ^D} V_{ Sat ({ synch })}(\omega ) Pr ^D\{\omega \} = \sum _{\omega ^\prime \in Paths ^{D^\prime }} V^\prime _{ Sat ({ synch })}(\omega ^\prime ) Pr ^{D^\prime }\{\omega ^\prime \}, \end{aligned}$$
(19)

where \( Pr ^D\) and \( Pr ^{D^\prime }\) denote the probability measures with respect to the sets of infinite paths from \(\sigma _I\) in D and \(D^\prime \) respectively. There are two cases:

Firstly, if there exists some path of D that does not synchronise then by definition \(V_{ Sat ({ synch })} = \infty \) for that path. Also, from Lemma 7 we know that there is a corresponding path of \(D^\prime \) that does not synchronise, and hence that \(V^\prime _{ Sat ({ synch })} = \infty \) for that path. By definition, every path of D and of \(D^\prime \) has strictly positive probability measure. Therefore, all summands of (19) are defined, and the expectations of both \(V_{ Sat ({ synch })}\) and \(V^\prime _{ Sat ({ synch })}\) are \(\infty \).

Secondly, we consider the case where all possible paths of D and \(D^\prime \) synchronise. First we define the function \( reduce : Paths ^D \rightarrow Paths ^{D^\prime }\) that maps paths of D to their corresponding path in the reduction \(D^\prime \),

$$\begin{aligned} reduce (\sigma _I\omega ^{\mathsf {{NF}}}_1 \sigma ^{\mathsf {{F}}}_1 \omega ^{\mathsf {{NF}}}_2 \sigma ^{\mathsf {{F}}}_2 \cdots ) = \sigma _I\sigma ^{\mathsf {{F}}}_1 \sigma ^{\mathsf {{F}}}_2 \cdots , \end{aligned}$$

where \(\omega ^{\mathsf {{NF}}}_i\) is the (possibly empty) sequence of deterministic predecessors of the firing state \(\sigma ^{\mathsf {{F}}}_i\). Let \( reduce ^{-1}(\omega )\) denote the preimage of \(\omega \) under \( reduce \). Then, we can rewrite the left side of (19) to

$$\begin{aligned} \sum _{\omega ^\prime \in Paths ^{D^\prime }} \sum _{\omega \in reduce ^{-1}(\omega ^\prime )} V_{ Sat ({ synch })}(\omega ) Pr ^{D}\{\omega \}. \end{aligned}$$

For any path \(\omega \) of D or \(D^\prime \) let \( pre _s(\omega )\) be the prefix of that path whose last state is the first firing state along that path that is in the set \( Sat ( synch )\). So we want to show that the following holds for any path \(\omega ^\prime \) of \(D^\prime \),

$$\begin{aligned} \sum _{\omega \in reduce ^{-1}(\omega ^\prime )} V_{ Sat ({ synch })}(\omega ) Pr ^{D}\{\omega \}&= V^\prime _{ Sat ({ synch })}(\omega ^\prime ) Pr ^{D^\prime }\{\omega ^\prime \} \nonumber \\ \sum _{\omega \in reduce ^{-1}(\omega ^\prime )} \mathrm {tot}_{{\mathcal {R}}}( pre _s(\omega )) Pr ^{D}\{\omega \}&= \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )) Pr ^{D^\prime }\{\omega ^\prime \}. \end{aligned}$$
(20)

Given some path \(\omega \) let \(\omega [i:j]\) denote the sequence of states in \(\omega \) from the \(i^{th}\) firing state to the \(j^{th}\) firing state along that path (inclusively). The notation \(\omega [-:j]\) indicates that no states are removed from the start of the path, i.e., the first state is \(\sigma _I\), and the notation \(\omega [i:-]\) indicates that no states are removed from the end of the path. By recalling that \( Pr (\sigma _0 \sigma _1 \cdots \sigma _n) = \prod _{i=1}^n {\mathbf {P}}(\sigma _{i-1},\sigma _i)\) we can see that \( Pr (\sigma _0 \sigma _1 \cdots \sigma _n) = Pr (\sigma _0 \cdots \sigma _i) Pr (\sigma _i \cdots \sigma _n)\) for any \(0< i < n\). Also from (1) it is clear that for any reward structure \({\mathcal {R}}\), \(\mathrm {tot}_{\mathcal {R}}(\sigma _I\cdots \sigma _n) = \mathrm {tot}_{\mathcal {R}}(\sigma _I\cdots \sigma _i) + \mathrm {tot}_{\mathcal {R}}(\sigma _i \cdots \sigma _n)\) holds for all \(0< i < n\). Now we can rewrite (20) to

$$\begin{aligned} \begin{aligned}&\sum _{\omega \in reduce ^{-1}(\omega ^\prime )} \left( \mathrm {tot}_{{\mathcal {R}}}( pre _s(\omega )[-:1]) + \mathrm {tot}_{{\mathcal {R}}}( pre _s(\omega )[1:-]) \right) Pr ^{D}\{\omega [-:1]\} \\&\quad =\left( \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[-:1]) + \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) \right) Pr ^{D^\prime }\{\omega ^\prime [-:1]\}. \end{aligned} \end{aligned}$$
(21)

By the definition of \({\mathcal {R}}^\prime \) we can write the right hand side of (21) as

$$\begin{aligned} \begin{aligned}&\left( \left( \frac{ \sum _{\omega _{ pre } \in Pref (\omega ^\prime )} \mathrm {tot}_{{\mathcal {R}}}(\omega _{ pre }) Pr ^{D}\{\omega _{ pre }\} }{ Pr ^{D^\prime }\{\omega ^\prime [-:1]\}} \right) + \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) \right) Pr ^{D^\prime }\{\omega ^\prime [-:1]\} \\&\quad = \sum _{\omega _{ pre } \in Pref (\omega ^\prime )} \left( \mathrm {tot}_{{\mathcal {R}}}(\omega _{ pre }) Pr ^{D}\{\omega _{ pre }\} \right) + \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) Pr ^{D^\prime }\{\omega ^\prime [-:1]\}. \end{aligned} \end{aligned}$$

From Lemma 7 we know that

$$\begin{aligned} Pr ^{D^\prime }\{\omega ^\prime [-:1]\} = Pr ^D Pref (\omega ^\prime ) = \sum _{\omega _{ pre } \in Pref (\omega ^\prime )} Pr ^D\{\omega _{ pre }\}, \end{aligned}$$

and hence obtain

$$\begin{aligned} \begin{aligned}&\sum _{\omega _{ pre } \in Pref (\omega ^\prime )} \left( \mathrm {tot}_{{\mathcal {R}}}(\omega _{ pre }) Pr ^{D}\{\omega _{ pre }\} \right) + \sum _{\omega _{ pre } \in Pref (\omega ^\prime )} \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) Pr ^{D}\{\omega _{ pre }\} \\&\quad = \sum _{\omega _{ pre } \in Pref (\omega ^\prime )} \left( \mathrm {tot}_{{\mathcal {R}}}(\omega _{ pre }) + \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) \right) Pr ^{D}\{\omega _{ pre }\}. \end{aligned} \end{aligned}$$
(22)

Since \( Pref (\omega ^\prime )\) is the set of all possible finite prefixes from the initial state \(\sigma _I\) to the first firing state \(\sigma ^{\mathsf {{F}}}_1\), and since \(\omega [-:1] = pre _s(\omega )[-:1]\) clearly holds, we know that

$$\begin{aligned} \bigcup _{\omega _{ pre } \in Pref (\omega ^\prime )} \{\omega _{ pre }\} = \bigcup _{\omega \in reduce ^{-1}(\omega ^\prime )} \{\omega [-:1]\} = \bigcup _{\omega \in reduce ^{-1}(\omega ^\prime )} \{ pre _s(\omega )[-:1]\}. \end{aligned}$$

Using this fact, and by observing that by definition, for any \(\omega \in reduce ^{-1}(\omega ^\prime )\),

$$\begin{aligned} \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) = \mathrm {tot}_{{\mathcal {R}}}( pre _s(\omega )[1:-]), \end{aligned}$$

we can write (22) as

$$\begin{aligned} \sum _{\omega \in reduce ^{-1}(\omega ^\prime )}&\left( \mathrm {tot}_{{\mathcal {R}}}( pre _s(\omega )[-:1]) + \mathrm {tot}_{{\mathcal {R}}^\prime }( pre _s(\omega ^\prime )[1:-]) \right) Pr ^{D}\{\omega [-:1]\}. \end{aligned}$$

This is the same as the left hand side of (21) and the theorem is proved. \(\square \)
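The two splitting identities used in the proof above, multiplicativity of \( Pr \) over a cut state and additivity of \(\mathrm {tot}_{\mathcal {R}}\), can be checked numerically. The tiny path, probabilities and rewards below are hypothetical, and the sketch again assumes the step-wise reward convention of (1).

```python
# Numeric check: path probabilities multiply over an intermediate state,
# and total rewards add over the same cut.
import math

P  = {("a", "b"): 0.5, ("b", "c"): 0.4, ("c", "d"): 1.0}
Rs = {"a": 1.0, "b": 2.0, "c": 0.5, "d": 0.0}
Rt = {("a", "b"): 0.1, ("b", "c"): 0.2, ("c", "d"): 0.3}

def pr(path):
    """Pr(sigma_0 ... sigma_n) = product of transition probabilities."""
    return math.prod(P[(s, t)] for s, t in zip(path, path[1:]))

def tot(path):
    """Accumulated reward over every step of a finite path."""
    return sum(Rs[s] + Rt[(s, t)] for s, t in zip(path, path[1:]))

w = ["a", "b", "c", "d"]
# Split at the intermediate state "c" (index 2):
assert math.isclose(pr(w), pr(w[:3]) * pr(w[2:]))
assert math.isclose(tot(w), tot(w[:3]) + tot(w[2:]))
```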

6 Connecting the concrete model and the population model

In this section, we define the abstraction function to connect a concrete model with a population model. To that end, let \(D_c = ({\mathcal {S}}_{c}, s_I, {\mathbf {P}}_c)\) be a concrete model of a network of N PCOs with a clock cycle length T, a refractory period R, a phase response function \({\varDelta }\), a coupling \(\epsilon \) and broadcast failure probability of \(\mu \). Furthermore, let \(D_p = ({\mathcal {S}}_{p}, \sigma _I, {\mathbf {P}}_p)\) be the DTMC of a population model for the same parameters. For a finite path \(\omega = \sigma _0,\dots ,\sigma _n \) we denote its last element by \( last (\omega ) = \sigma _n\).

The correspondence we want to show is that the initial states of the two models are weakly bisimulation equivalent [7]. Since we do not employ any actions except for the silent action in our models, we first define a simplified version of weak bisimulations. To that end, we use the definition of Baier and Hermanns [7], but with slightly altered notations to fit to our setting, and by ignoring all references to sequences of actions.

Definition 18

(Weak Bisimulation [7]) A weak bisimulation on \(D = ({\mathcal {S}}, s_I, {\mathbf {P}})\) is an equivalence relation R on \({\mathcal {S}}\) such that for all \((s, s^\prime ) \in R\), and all equivalence classes \(E \in {\mathcal {S}}/ R\), we have

$$\begin{aligned} {\mathbf {P}}(s, E) = {\mathbf {P}}(s^\prime , E) \end{aligned}$$

where \({\mathbf {P}}(s,E) = \sum _{\omega \in Paths _f^D(s) \wedge last (\omega ) \in E} {\mathbf {P}}(\omega )\) for any set \(E \subseteq {\mathcal {S}}\). We say that two states \(s\) and \(s^\prime \) are weakly bisimilar, if, and only if, there is a weak bisimulation R such that \((s, s^\prime ) \in R\).
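The condition of Definition 18 can be sketched computationally. Here \({\mathbf {P}}(s, E)\) is implemented, under an assumption in line with the proofs below, as the probability of reaching class E while passing only through states of the silent class \(E_\emptyset \); the model is hypothetical and its silent part is assumed acyclic.

```python
# Sketch of the (action-free) weak-bisimulation condition: two related
# states must reach every equivalence class with equal probability,
# internal ("silent") states being invisible.

def prob_to_class(P, s, E, silent):
    """Probability of reaching E from s via silent intermediate states."""
    total = 0.0
    for t, p in P.get(s, {}).items():
        if t in E:
            total += p
        elif t in silent:
            total += p * prob_to_class(P, t, E, silent)
    return total

# "sigma" (population side) branches directly; "s" (concrete side)
# reaches the same classes via the silent intermediate state "x".
P = {
    "sigma": {"A": 0.5, "B": 0.5},
    "s":     {"x": 1.0},
    "x":     {"a": 0.5, "b": 0.5},
}
E1, E2, silent = {"A", "a"}, {"B", "b"}, {"x"}

for E in (E1, E2):
    assert prob_to_class(P, "sigma", E, silent) == \
           prob_to_class(P, "s", E, silent)
```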

6.1 Proving the correspondence between concrete and population models

In this section, we will formally define a weak bisimulation relation between states in the concrete model \(D_c\) and the corresponding population model \(D_p\). Since weak bisimulations are defined on a single DTMC, we will define a single DTMC combining both the concrete and the population model.

Definition 19

Let \(D_c = ({\mathcal {S}}_{c}, s_I, {\mathbf {P}}_c)\) be a concrete model of a network and let \(D_p = ({\mathcal {S}}_{p}, \sigma _I, {\mathbf {P}}_p)\) be the DTMC of a corresponding population model. The combination of \(D_c\) and \(D_p\) is the DTMC \(D = ({\mathcal {S}}, i, {\mathbf {P}})\), where \({\mathcal {S}}= {\mathcal {S}}_{c}\cup {\mathcal {S}}_{p}\cup \{i\}\), i is a new state (\(i \not \in {\mathcal {S}}_{c}\cup {\mathcal {S}}_{p}\)), and \({\mathbf {P}}\) is defined as follows:

$$\begin{aligned} {\mathbf {P}}(s, s^\prime )&= {\left\{ \begin{array}{ll} {\mathbf {P}}_c(s, s^\prime )\ &{} \text {if } s, s^\prime \in {\mathcal {S}}_{c}\\ {\mathbf {P}}_p(s, s^\prime )\ &{} \text {if } s, s^\prime \in {\mathcal {S}}_{p}\\ \frac{1}{2} &{} \text {if } s=i \text { and } s^\prime \in \{s_I, \sigma _I\}\\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

The initial transitions from the new state i form a uniform distribution over the two behaviours, and hence the combination DTMC could behave like either model. However, for simplicity we will often refer to the original models \(D_c\) and \(D_p\).
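The combination of Definition 19 amounts to gluing the two transition functions together under a fresh initial state. A minimal sketch, using hypothetical placeholder models with disjoint state spaces:

```python
# Building the combined DTMC of Definition 19: a fresh initial state "i"
# branches with probability 1/2 to each model's initial state.

def combine(Pc, sI, Pp, sigmaI):
    P = {}
    P.update(Pc)
    P.update(Pp)                  # assumes disjoint state spaces
    P["i"] = {sI: 0.5, sigmaI: 0.5}
    return P

# Hypothetical placeholder models:
Pc = {"sI": {"c1": 1.0}, "c1": {"c1": 1.0}}
Pp = {"sigmaI": {"g1": 1.0}, "g1": {"g1": 1.0}}

P = combine(Pc, "sI", Pp, "sigmaI")
assert P["i"] == {"sI": 0.5, "sigmaI": 0.5}
# Every state still carries a full probability distribution:
assert all(abs(sum(d.values()) - 1.0) < 1e-12 for d in P.values())
```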

We need to associate states in \(D_c\) to states in \(D_p\). In general, several concrete states will be mapped to a single population state, since we do not distinguish between different orders of oscillators in the latter, while we do in the former.

Furthermore, we want to abstract from the different modes of the oscillators. However, it is not sensible to associate all modes within a phase to the same population state, since in the transitions from one mode to the next the system chooses whether an oscillator fails to broadcast its pulse or not. If we want to be able to define a probabilistic weak bisimulation relation, we need to represent the failures described by the transitions in the population model. To obtain an exact correspondence, we first collect into a single set all the concrete states where the environment and all oscillators are in the \( start \) mode.

$$\begin{aligned} {\mathcal {S}}_{c}^\prime&= \{s\in {\mathcal {S}}_{c}\mid {\mathsf {mode}}({s},{\mathsf {env}}) = start \wedge \forall u :{\mathsf {mode}}({s},{u}) = start \} \end{aligned}$$

The abstraction function \( h:{\mathcal {S}}_{c}^\prime \rightarrow {\mathcal {S}}_{p}\) takes a concrete state \(s\) and counts the number of oscillators sharing the same phase, mapping \(s= (\mathsf {env}, \mathsf {osc})\) to the corresponding state of the population model,

$$\begin{aligned} h(s)&= \langle |\{u \mid {\phi }({s},{u}) = 1\}|, \dots , |\{u \mid {\phi }({s},{u}) = T\}| \rangle . \end{aligned}$$
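The abstraction h simply counts oscillators per phase. A minimal sketch, representing a concrete state by its list of oscillator phases (a deliberate simplification of the actual state structure):

```python
# The abstraction function h: for each phase value 1..T, count how many
# oscillators currently have that phase.

def h(phases, T):
    """Map a list of oscillator phases to a population-model state."""
    return tuple(sum(1 for phi in phases if phi == Phi)
                 for Phi in range(1, T + 1))

# Four oscillators with cycle length T = 3: two in phase 1, one in
# phase 2, one in phase 3.
assert h([1, 1, 3, 2], T=3) == (2, 1, 1)
```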

Now we can define the weak bisimulation relation between the models.

Definition 20

Let \(D = ({\mathcal {S}}, i, {\mathbf {P}})\) be the combination of a concrete model and its population model according to Definition 19. Then let \(R^\prime \subseteq {\mathcal {S}}\times {\mathcal {S}}\) be defined by

  1. \((s_I,\sigma _I) \in R^\prime \) and

  2. \( (s, \sigma ) \in R^\prime \) if, and only if, \(s\in {\mathcal {S}}_{c}^\prime \), \(\sigma \in {\mathcal {S}}_{p}\) and \(h(s) = \sigma \).

The weak bisimulation \(R \subseteq {\mathcal {S}}\times {\mathcal {S}}\) is then the reflexive, symmetric and transitive closure of \(R^\prime \).

The partition induced by the relation R has three types of equivalence classes:

  1. the class \(E_{ init } = \{s_I, \sigma _I\}\) consisting of the initial states;

  2. classes \(E = \{\sigma \} \cup \{s\mid s\in {\mathcal {S}}_{c}^\prime \wedge h(s) = \sigma \}\) that contain exactly one state \(\sigma \) from the population model and all states \(s\) of the concrete model whose abstraction is \(\sigma \) and where all components are in the mode \( start \);

  3. the class \(E_\emptyset \) containing all states that are not in relation with other states. This class in particular does not contain any state of the population model.

We now proceed to show that this relation indeed is a weak bisimulation on D, with a minor caveat. Consider a state \(\sigma \) of the population model and a state \(s\) of the concrete model in the same equivalence class. Then we have \({\mathbf {P}}(s, E_\emptyset ) = 1\), since every transition starting in \(s\) immediately leads into a state in \(E_\emptyset \). However, we also have \({\mathbf {P}}(\sigma , E_\emptyset ) = 0\), since none of the successors of \(\sigma \) is an element of \(E_\emptyset \). We could remedy this discrepancy by introducing a new state \(\sigma ^\prime \) for each state in the population model. Then all of these states would be in \(E_\emptyset \), and every \(\sigma \) of the original population model would have a unique successor in \(E_\emptyset \). Since this addition does not change the overall behaviour of the population model, we choose instead to ignore the transitions ending in \(E_\emptyset \), and only consider the probabilities of paths into the other equivalence classes.

Theorem 3

Let \(D_c = ({\mathcal {S}}_{c}, s_I, {\mathbf {P}}_c)\) and \(D_p = ({\mathcal {S}}_{p}, \sigma _I, {\mathbf {P}}_p)\) be a concrete network of oscillators and its abstraction as a population model, respectively. Furthermore, let \(D = ({\mathcal {S}}, i, {\mathbf {P}})\) be the combination of both of these models from Definition 19 and \(R \subseteq {\mathcal {S}}\times {\mathcal {S}}\) be the relation from Definition 20. Then R is a weak bisimulation relation, and \(s_I\) and \(\sigma _I\) are weakly bisimilar.

Proof

From transition sequences in \(D_p\) to transition sequences in \(D_c\). Let \((s_1, \sigma _1) \in R\), where \(s_1 \in {\mathcal {S}}_{c}\), \(\sigma _1 \in {\mathcal {S}}_{p}\), and let E be an equivalence class such that \({\mathbf {P}}(\sigma _1, E) > 0\), i.e., for the single state \(\sigma _2 \in E\) of the population model, we have \({\mathbf {P}}(\sigma _1,\sigma _2) > 0\). Furthermore, for \(s_1\), we have by the definition of R that \({\mathsf {mode}}({s_1},{\mathsf {env}}) = start \), \({\mathsf {mode}}({s_1},{u}) = start \) for all \(1 \leqslant u \leqslant N\) and \(h(s_1) = \langle |\{u \mid {\phi }({s_1},{u}) = 1\}|, \dots , |\{u \mid {\phi }({s_1},{u}) = T\}| \rangle = \sigma _1\). Note that there is only a single outgoing transition from \(s_1\) according to condition (2). That is, in the successor state \(s\) of \(s_1\), we have \(\mathsf {count}(s) = 0\) and \({\mathsf {mode}}({s},{\mathsf {env}}) = update \), while the oscillator states are not changed. To keep the notation tidy, we identify this successor state with \(s_1\) in the following.

Now consider two cases. If \(|\{u \mid {\phi }({s_1},{u}) = T\}| = 0\), then \(\sigma _2 = \langle 0, |\{u \mid {\phi }({s_1},{u}) = 1\}|, \dots , |\{u \mid {\phi }({s_1},{u}) = T-1\}| \rangle \), since no set of oscillators in \(\sigma _1\) is perturbed by a firing oscillator. In particular, there is no u such that \({\phi }({s_1},{u}) = T\). Hence, for all possible successors \(s^\prime \) of \(s_1\), we have that only condition (3) is satisfied. Furthermore, this is the case until all oscillators have changed their mode to \( update \). Let us call this state \(s_1^\prime \). Now, the environment is unchanged from \(s_1\) to \(s_1^\prime \), i.e., \(\mathsf {count}(s_1) = \mathsf {count}(s_1^\prime ) = 0\).

Hence, condition (9b) is satisfied for all oscillators. Since \({\varDelta }({\varPhi }, 0, \epsilon ) = 0\) for all \({\varPhi }\), the phase of each oscillator is increased by one. This implies that there is a single successor of \(s_1^\prime \), which we call \(s_2\) and that for all u, \({\phi }({s_2},{u}) = {\phi }({s_1^\prime },{u}) + 1\). In particular, we have that for all oscillators u, \({\phi }({s_2},{u}) > 0\), and \(\{u \mid {\phi }({s_2},{u}) = {\varPhi }\} = \{u \mid {\phi }({s^\prime _1},{u}) = {\varPhi }- 1\}\) for all \(0 < {\varPhi }\leqslant T\). Hence \(s_2 \in E\), and thus \({\mathbf {P}}(s_1, E) > 0\).

Now let \(|\{u \mid {\phi }({s_1},{u}) = T\}| > 0\). Then, each transition in the population model is induced by a failure vector \(F=\langle f_1, \dots , f_T\rangle \). In particular, there is a maximal number k, such that for all \(l < k\), we have \(f_{l} = \star \). That is, k denotes the lowest phase in which oscillators possibly fire.

First, we introduce some notation, where \({\varPhi }> R\).

$$\begin{aligned} {\mathcal {N}}^{\mathrm {F}}(s)&= \{u \mid {\phi }({s},{u}) = T \}\\ {\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s)&= \{u \mid {\phi }({s},{u}) = {\varPhi }\wedge {\varDelta }({\phi }({s},{u}),\alpha ^{{\phi }({s},{u})}(\sigma _1, F), \epsilon ) +1 > T \} \\ {\mathcal {N}}^{\mathrm {P}}_{{\varPhi }}(s)&= \{u \mid {\phi }({s},{u}) = {\varPhi }\wedge {\varDelta }({\phi }({s},{u}), \alpha ^{{\phi }({s},{u})}(\sigma _1, F), \epsilon ) +1 \leqslant T \} \end{aligned}$$

That is, \({\mathcal {N}}^{\mathrm {F}}(s)\) denotes the set of oscillators possibly firing in \(s\). The sets \({\mathcal {N}}^{\mathrm {P}}_{{\varPhi }}(s)\) and \({\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s)\) denote the sets of oscillators being perturbed but not firing (since the perturbation is not sufficient for the oscillators to reach the end of their cycle), and possibly firing, respectively. We can only say that elements of \({\mathcal {N}}^{\mathrm {F}}(s)\) and \({\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s)\) possibly fire, since they may be affected by a broadcast failure.

We now have to construct a sequence of transitions, where we draw the firing oscillators from the sets \({\mathcal {N}}^{\mathrm {F}}(s_1)\) and \({\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s_1)\), according to the broadcast failure vector F. Furthermore, all elements of \({\mathcal {N}}^{\mathrm {F}}(s_1)\) and the sets \({\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s_1)\) have to take transitions such that their phase value in the next iteration is 1.

Let \(\sigma _1 = \langle k_1, k_2, \dots , k_T\rangle \). Now consider an arbitrary sequence \(u_1, \dots , u_{k_T}\) of all \(k_{T}\) elements from \({\mathcal {N}}^{\mathrm {F}}(s_1)\). Additionally, let \(C_{T} \subseteq {\mathcal {N}}^{\mathrm {F}}(s_1) \) be the set of oscillators in phase T with a broadcast failure, i.e., \(|C_{T}| = f_T\). Observe that \({\phi }({s_1},{u_j}) = T\) for all \(1 \leqslant j \leqslant k_T\). Furthermore, let \(r_0^T = s_1\). Then we define a sequence of successors of \(r_0^T= (\mathsf {env}_0^T, \mathsf {osc}_0^T)\) as follows, where \(1 \leqslant j \leqslant k_T\). If \(u_j \not \in C_{T}\), then

$$\begin{aligned} \mathsf {osc}_j^T&= \mathsf {osc}_{j-1}^T\oplus [u_j \mapsto (T, update )]\\ \mathsf {env}_j^T&= (\mathsf {count}(r_{j-1}^T) + 1, update ) \end{aligned}$$

otherwise

$$\begin{aligned} \mathsf {osc}_j^T&= \mathsf {osc}_{j-1}^T\oplus [u_j \mapsto (T, update )]\\ \mathsf {env}_j^T&= ( \mathsf {count}(r_{j-1}^T), update ) \end{aligned}$$

Observe that these states define a sequence of transitions from \(r_0^T\) to \(r_{k_{T}}^T\) according to conditions (7) and (8).

Now, for each phase \({\varPhi }\), with \(k \leqslant {\varPhi }< T\), we proceed similarly. That is, we first choose a sequence \(u^{\varPhi }_1, \dots , u^{\varPhi }_{k_{\varPhi }} \) of oscillators and a set \(C_{{\varPhi }} \subseteq {\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s_1)\) with \(|C_{{\varPhi }}| = f_{\varPhi }\).

Subsequently, we define each \(r^{\varPhi }_j\) to be

$$\begin{aligned} \mathsf {osc}^{\varPhi }_j&= \mathsf {osc}^{\varPhi }_{j-1}\oplus [u^{\varPhi }_j \mapsto ({\varPhi }, update )]\\ \mathsf {env}^{\varPhi }_j&= \left\{ \begin{array}{ll} ( \mathsf {count}(r^{\varPhi }_{j-1}) + 1, update ), &{} \quad \text { if } u^{\varPhi }_j \not \in C_{{\varPhi }}\\ ( \mathsf {count}(r^{\varPhi }_{j-1}), update ), &{} \quad \text {otherwise} \end{array} \right. \end{aligned}$$

where \(r^{\varPhi }_0 = r^{{\varPhi }+ 1}_{k_{{\varPhi }+1}}\). Observe again that these sequences exhaust \({\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s_1)\) for each phase \({\varPhi }\). Furthermore, the number of firing oscillators that are not inhibited by a broadcast failure in the concrete model coincides with the number of perceived firing oscillators in the population model in this phase.

$$\begin{aligned} \text {For each } {\varPhi }\text { with } k \leqslant {\varPhi }\leqslant T \text {, we have }\mathsf {count}(r^{\varPhi }_0) = \alpha ^{{\varPhi }}(\sigma _1, F). \end{aligned}$$
(23)

To prove this let us consider the different cases: for \({\varPhi }= T\), we have \(\mathsf {count}(r^T_0) = 0 = \alpha ^T(\sigma _1,F)\). Now let \({\varPhi }< T\) and assume \(\mathsf {count}(r^{{\varPhi }+1}_0) = \alpha ^{{\varPhi }+1}(\sigma _1,F)\). By definition, we have

$$\begin{aligned} \alpha ^{{\varPhi }}(\sigma _1, F)&= \alpha ^{{\varPhi }+ 1}(\sigma _1,F) +k_{{\varPhi }+1} - f_{{\varPhi }+1}, \end{aligned}$$

since \({\varPhi }< T\) and \(f_{{\varPhi }+1} \ne \star \).

Now, in the sequence \(r^{{\varPhi }+1}_0, \dots , r^{{\varPhi }+1}_{k_{{\varPhi }+1}}\), we increase \(\mathsf {count}(r^{{\varPhi }+1}_0)\) exactly \(k_{{\varPhi }+1} - f_{{\varPhi }+1}\) times, i.e.,

$$\begin{aligned} \mathsf {count}(r^{\varPhi }_0)&= \mathsf {count}(r^{{\varPhi }+1}_0) +k_{{\varPhi }+1} - f_{{\varPhi }+1}. \end{aligned}$$

By assumption, we then get

$$\begin{aligned} \mathsf {count}(r^{\varPhi }_0)&= \alpha ^{{\varPhi }+ 1}(\sigma _1,F) +k_{{\varPhi }+1} - f_{{\varPhi }+1} = \alpha ^{{\varPhi }}(\sigma _1,F), \end{aligned}$$

and property (23) holds.

This property in particular states that the perturbation within the population model and the concrete model is the same.
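The recursion behind property (23) can be sanity-checked in a few lines. The following sketch is our reconstruction of Eq. (13) as it is used above: \(\alpha ^T = 0\); \(\alpha ^{\varPhi }= \alpha ^{{\varPhi }+1} + k_{{\varPhi }+1} - f_{{\varPhi }+1}\) if \(f_{{\varPhi }+1} \ne \star \), and \(\alpha ^{\varPhi }= \alpha ^{{\varPhi }+1}\) otherwise; all function and variable names are ours.

```python
STAR = None  # stands for the entry "*" of the broadcast failure vector

def alpha(phi, k, f, T):
    """Perceived firing count alpha^phi(sigma, F) for sigma = <k_1, ..., k_T>.

    k and f are dicts indexed by the phases 1..T; reconstruction of the
    recursion from Eq. (13) as used in the proof above.
    """
    if phi == T:
        return 0                       # base case: alpha^T = 0
    rest = alpha(phi + 1, k, f, T)
    if f[phi + 1] is STAR:
        return rest                    # no firing in phase phi + 1
    return rest + k[phi + 1] - f[phi + 1]
```

For example, with \(T = 3\), \(\sigma = \langle 0, 1, 2\rangle \) and \(F = \langle \star , 0, 1\rangle \), the recursion yields \(\alpha ^3 = 0\), \(\alpha ^2 = 1\) and \(\alpha ^1 = 2\).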

Since \({\varDelta }\) is a monotonically increasing function in \(\alpha \), every oscillator in \({\mathcal {N}}^{\mathrm {PF}}_{{\varPhi }}(s_1)\) is still perturbed to firing after other oscillators in the same phase fired. Hence, for each pair of states \(r^{\varPhi }_{j-1}\) and \(r^{\varPhi }_j\) with \(1 \leqslant j \leqslant k_{\varPhi }- f_{\varPhi }\), a transition according to condition (6) is well-defined. Similarly, for oscillators that should fire, but are affected by a broadcast failure, i.e., for the pairs \(r^{\varPhi }_{j-1}\) and \(r^{\varPhi }_j\) with \( k_{\varPhi }- f_{\varPhi }+1 \leqslant j \leqslant k_{\varPhi }\), the transition is defined according to condition (5).

Now, for every \({\varPhi }< k\), we know that \({\varPhi }+ {\varDelta }({\varPhi }, \alpha ^{{\varPhi }}(\sigma _1, F), \epsilon )+1 \leqslant T\) and \(\alpha ^{{\varPhi }-1}(\sigma _1, F) = \alpha ^{{\varPhi }}(\sigma _1,F)\), according to equation (13). Hence, for every phase \({\varPhi }< k\), we arbitrarily enumerate the oscillators of \({\mathcal {N}}^{\mathrm {P}}_{{\varPhi }}(s_1) = \{u^{\varPhi }_1, \dots , u^{\varPhi }_{k_{\varPhi }}\}\) and define the following sequence of states \(r_j^{\varPhi }\), where \(r_0^{\varPhi }= r^{{\varPhi }+1}_{k_{{\varPhi }+1}}\).

$$\begin{aligned} \mathsf {osc}^{\varPhi }_j&= \mathsf {osc}^{\varPhi }_{j-1}\oplus [u^{\varPhi }_j \mapsto ({\varPhi }, update )]\\ \mathsf {env}^{\varPhi }_j&= (\mathsf {count}(r^{\varPhi }_{j-1}), update ) \end{aligned}$$

For each \({\varPhi }\) and pair of states \(r^{\varPhi }_j\) and \(r^{\varPhi }_{j+1}\), there is a transition according to condition (4). So, all in all, we have a sequence of transitions from \(s_1\) to \(r^0_{k_0}\).

Now, in \(r^0_{k_0}\), we have that \({\mathsf {mode}}({r^0_{k_0}},{\mathsf {env}}) = update \) and for all u, \({\mathsf {mode}}({r^0_{k_0}},{u}) = update \). Then let \(s_2\) be defined by the following formulas.

$$\begin{aligned}&{\mathsf {mode}}({s_2},{\mathsf {env}}) = start \text { and } \mathsf {count}(s_2) = \mathsf {count}(r^0_{k_0}) \end{aligned}$$
(24)
$$\begin{aligned}&\forall u :{\mathsf {mode}}({s_2},{u}) = start \end{aligned}$$
(25)
$$\begin{aligned}&\forall u :{\phi }({r^0_{k_0}},{u}) = T \rightarrow {\phi }({s_2},{u}) = 1 \end{aligned}$$
(26)
$$\begin{aligned}&\forall u :{\phi }({r^0_{k_0}},{u}) < T \wedge {\phi }({r^0_{k_0}},{u}) \leqslant R \rightarrow {\phi }({s_2},{u}) = {\phi }({r^0_{k_0}},{u}) +1 \end{aligned}$$
(27)
$$\begin{aligned}&\forall u :{\phi }({r^0_{k_0}},{u}) < T \wedge {\phi }({r^0_{k_0}},{u}) > R\; \wedge \nonumber \\&\;{\phi }({r^0_{k_0}},{u}) + {\varDelta }({\phi }({r^0_{k_0}},{u}), \mathsf {count}(r^0_{k_0}), \epsilon ) + 1 \leqslant T \nonumber \\&\quad \rightarrow {\phi }({s_2},{u}) = {\phi }({r^0_{k_0}},{u}) + {\varDelta }({\phi }({r^0_{k_0}},{u}), \mathsf {count}(r^0_{k_0}), \epsilon ) +1 \end{aligned}$$
(28)
$$\begin{aligned}&\forall u :{\phi }({r^0_{k_0}},{u}) < T \wedge {\phi }({r^0_{k_0}},{u})> R\; \wedge \nonumber \\&\;{\phi }({r^0_{k_0}},{u}) + {\varDelta }({\phi }({r^0_{k_0}},{u}), \mathsf {count}(r^0_{k_0}),\epsilon ) + 1 > T \nonumber \\&\quad \rightarrow {\phi }({s_2},{u}) = 1 \end{aligned}$$
(29)

Then \(r^0_{k_0}\) and \(s_2\) satisfy all parts of condition (9). Hence, we have a sequence of transitions from \(s_1\) to \(s_2\), and \(s_2 \in {\mathcal {S}}_{c}^\prime \). To prove \(h(s_2) = \sigma _2\), we need to show that the number of oscillators possessing a phase \({\varPhi }\) in \(s_2\) matches the \({\varPhi }\)-th entry of \(\sigma _2 = \langle k^\prime _1, \dots , k^\prime _T\rangle \). To that end, recall that by Definition 10, each \(k^\prime _{\varPhi }=\sum _{{\varPsi }\in {\mathcal {U}}_{\varPhi }(\sigma , F)} k_{\varPsi }\), where \(\sigma _1 = \langle k_1, \dots , k_T\rangle \) and \( {\mathcal {U}}_{\varPhi }(\sigma , F) = \{{\varPsi }\mid {\varPsi }\in \{1,\ldots ,T\} \wedge \tau (\sigma , {\varPsi }, F) = {\varPhi }\} \). Observe that both the concrete model and the population model use the same perturbation function \({\varDelta }\) and that \(\tau \) is defined in terms of \({\varDelta }\). In particular, we have

$$\begin{aligned} \tau (\sigma , {\varPhi }, F) = {\left\{ \begin{array}{ll} 1 &{} \text {if } fire ^{\varPhi }(\sigma , F) \\ update ^{\varPhi }(\sigma , F) &{} \text {otherwise}. \\ \end{array}\right. } \end{aligned}$$

Now let us distinguish three cases for \({\varPhi }\).

  1.

    If \({\varPhi }\leqslant R\), then \( update ^{\varPhi }(\sigma _1, F) = {\varPhi }+ 1 \), due to the definition of the refractory function \(\mathrm {ref}\). Similarly, for all u such that \( {\phi }({s_1},{u}) = {\varPhi }\), we get that \({\phi }({s_2},{u}) = {\varPhi }+1 \). Hence, for all \({\varPhi } \leqslant R\), we have that \(|\{u \mid {\phi }({s_2},{u}) = {\varPhi }+1\}| = |\{u \mid {\phi }({s_1},{u}) = {\varPhi }\}|\).

  2.

    If \({\varPhi }> R\) and \( update ^{\varPhi }(\sigma _1, F) = {\varPsi }\) with \({\varPsi }\leqslant T\), then \({\phi }({s_2},{u}) = {\varPsi }\), by formula (28). Observe that the number of oscillators in \(s_1\) with a phase of \({\varPhi }\) is \(k_{\varPhi }\). So, the set of oscillators that get perturbed to \({\varPsi }\) is the union of the sets of oscillators u in the phases \({\varPhi }\) where \({\varPhi }+ {\varDelta }({\varPhi },\mathsf {count}(s_2), \epsilon )+1 = {\varPsi }\). That is, \(\{u \mid {\phi }({s_2},{u}) = {\varPsi }\} = \{u \mid {\phi }({s_1},{u}) + {\varDelta }({\phi }({s_1},{u}),\mathsf {count}(s_2), \epsilon )+1 ={\varPsi }\}\). By the definition of \(\tau \) and property (23), we get that \(\tau (\sigma _1, {\varPhi }, F) = {\varPsi }\). That is, for a specific \({\varPsi }\), the phases \({\varPhi }\) of the oscillators perturbed to \({\varPsi }\) are exactly those in \({\mathcal {U}}_{\varPsi }(\sigma _1,F)\). Hence, since the sets of oscillators in each phase are disjoint, \(|\{u \mid {\phi }({s_2},{u}) = {\varPsi }\}| = \sum _{{\varPhi }\in {\mathcal {U}}_{\varPsi }(\sigma _1, F)}k_{\varPhi }\).

  3.

    Finally, let \( update ^{\varPhi }(\sigma _1, F) = {\varPsi }\) and \({\varPsi }> T\). Then \(\tau (\sigma _1, {\varPhi },F) = 1\). Furthermore, by formulas (26) and (29), we have \({\phi }({s_2},{u}) = 1\) for all u with phase \({\varPhi }\). With similar reasoning as above, we get that \(|\{u \mid {\phi }({s_2},{u}) = 1\}| = \sum _{{\varPhi }\in {\mathcal {U}}_1(\sigma _1, F)}k_{\varPhi }\).

Hence, we get \(h(s_2) = \sigma _2\), that is \(s_2 \in E\). Together with the existence of the transition sequence from \(s_1\) to \(s_2\) we get \({\mathbf {P}}(s_1, E) > 0\).

From transition sequences in \(D_c\) to transition sequences in \(D_p\). Now we turn our attention to the other direction. That is, if we have a sequence of transitions in the concrete model, we can find a corresponding transition sequence in the population model. Let \((s_1, \sigma _1) \in R\), and let E be an equivalence class such that \({\mathbf {P}}(s_1, E) > 0\). If \(s_1 \in E\), then the theorem holds, since \(\sigma _1 \in E\) by definition of R. Otherwise, let \(\omega = s_1 \dots s_2\) be an execution sequence from \(s_1\) to \(s_2\) and assume that for all \(s^\prime \) on \(\omega \) different from \(s_1\) and \(s_2\), we have \({\mathsf {mode}}({s^\prime },{\mathsf {env}}) = update \). Recall that there is a unique \(\sigma _2 \in E\). By definition of R, we have

$$\begin{aligned} \sigma _1&= \langle |\{u \mid {\phi }({s_1},{u}) = 1\}|, \dots , |\{u \mid {\phi }({s_1},{u}) = T\}|\rangle \\ \sigma _2&= \langle |\{u \mid {\phi }({s_2},{u}) = 1\}|, \dots , |\{u \mid {\phi }({s_2},{u}) = T\}|\rangle \end{aligned}$$

We now distinguish two cases. First, assume that \(\{u \mid {\phi }({s_1},{u}) = T\} = \emptyset \), and let \(s\) be a state on \(\omega \) such that \({\mathsf {mode}}({s},{u}) = update \) for all u. Then there is exactly one transition from \(s\) to \( s_2\), which is defined according to Eq. (9). Furthermore, due to the assumption that no oscillator fires, we have \(\mathsf {count}(s) = 0\), which implies \({\varDelta }({\varPhi }, \mathsf {count}(s), \epsilon )=0\) for all \({\varPhi }\) by Definition 5. Hence, for all u, we have \({\phi }({s_2},{u}) = {\phi }({s},{u}) + 1 = {\phi }({s_1},{u})+1\). That is,

$$\begin{aligned} \sigma _2&=\langle 0, |\{u \mid {\phi }({s_1},{u}) +1 = 2\}|, \dots ,|\{u \mid {\phi }({s_1},{u}) +1 = T\}|\rangle \\&= \langle 0, |\{u \mid {\phi }({s_1},{u}) = 1\}|, \dots ,|\{u \mid {\phi }({s_1},{u}) = T -1\}|\rangle . \end{aligned}$$

That is, we have \({\mathbf {P}}(\sigma _1, \sigma _2) > 0\) due to a deterministic transition, which, in particular, implies \({\mathbf {P}}(\sigma _1, E) > 0\).
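Viewed on the population vector, this deterministic no-firing step is simply a shift with an emptied first slot. A minimal sketch, assuming the tuple representation \(\langle k_1, \dots , k_T\rangle \):

```python
def shift_no_firing(sigma):
    """Deterministic update when no oscillator fires (k_T = 0):
    every phase advances by one, so the count vector shifts right."""
    assert sigma[-1] == 0, "precondition: no oscillator in phase T"
    return (0,) + sigma[:-1]
```

For instance, \(\langle 2, 1, 0\rangle \) becomes \(\langle 0, 2, 1\rangle \) after one cycle step.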

The second case is more involved. Let us assume \(\{u \mid {\phi }({s_1},{u}) = T\} \ne \emptyset \), that is, at least one oscillator fires. Hence, due to the preconditions of the transitions, we can divide the transition sequence from \(s_1\) to \(s_2\) as follows:

$$\begin{aligned} \omega = s_1, \dots , r_T, \dots , r_{T-1}, \dots , r_1, \dots , s_2 , \end{aligned}$$

where \(r_{\varPhi }\) denotes the state where all oscillators with phase \({\varPhi }\) changed their mode to \( update \). Our goal now is to find a broadcast failure vector \(F\) such that the transition of the population model determined by \(F\) leads from \(\sigma _1\) to \(\sigma _2\). To that end, let

$$\begin{aligned} f_{\varPhi }&= |\{ u \mid {\phi }({s_1},{u}) \geqslant {\varPhi }\}| - \left( \mathsf {count}(r_{\varPhi }) + \sum _{{\varPsi }= {\varPhi }+1}^{T} f_{\varPsi }\right) \end{aligned}$$

for all \({\varPhi }\) where \({\varPhi }+ {\varDelta }({\varPhi }, \mathsf {count}(r_{\varPhi }), \epsilon ) +1 > T\). For the remaining \({\varPhi }\), set \(f_{\varPhi }= \star \). Then \(F= \langle f_1, \dots , f_T \rangle \). With this broadcast failure vector at hand, we now have to show that

$$\begin{aligned} \sum _{{\varPsi }\in {\mathcal {U}}_{\varPhi }(\sigma _1, F) }|\{u \mid {\phi }({s_1},{u}) = {\varPsi }\}| = |\{u \mid {\phi }({s_2},{u}) = {\varPhi }\}| . \end{aligned}$$

Recall that \({\mathcal {U}}_{\varPhi }(\sigma _1, F) = \{{\varPsi }\mid \tau (\sigma _1, {\varPsi }, F) = {\varPhi }\}\). Hence, we need to show \({\phi }({s_2},{u}) = \tau (\sigma _1, {\phi }({s_1},{u}), F)\) for all oscillators u. To this end, we now need to distinguish several cases, according to the different cases of the transition defined by condition (9).

First, let u be such that \({\phi }({s_1},{u}) \leqslant R\), i.e., oscillator u is within its refractory period. If \({\phi }({s_1},{u}) = T\), then we have

$$\begin{aligned} {\phi }({s_2},{u})&= 1&\{\text {Cond.}~(9a)\}\\&=\tau (\sigma _1, {\phi }({s_1},{u}), F)&\{\text {Definition}~9\} \end{aligned}$$

Otherwise, if \({\phi }({s_1},{u}) < T\), we have

$$\begin{aligned} {\phi }({s_2},{u})&= {\phi }({s_1},{u}) + 1&\{\text {Cond.}~(9b)\}\\&=\tau (\sigma _1, {\phi }({s_1},{u}), F)&\{\text {Definition}~9\} \end{aligned}$$

Now assume that \({\phi }({s_1},{u}) > R\), i.e., oscillator u is outside of its refractory period and thus will be perturbed by firing oscillators. If \({\phi }({s_1},{u}) = T\), then we proceed as in the previous case. So, let us assume \({\phi }({s_1},{u}) < T\). To show that the transition function of the population model coincides with the result within the concrete model, we need to ensure that the perceived firing oscillators are equal in both models for each oscillator.

Within each phase, the perceived oscillators in the population model coincide with the oscillators that fired up to the next higher phase in the concrete model. Formally, for each \(1 \leqslant {\varPhi }< T\), we have

$$\begin{aligned} \mathsf {count}(r_{{\varPhi }+1}) = \alpha ^{\varPhi }(\sigma _1,F) . \end{aligned}$$
(30)

To show that this is true, let \(\sigma _1 = \langle k_1, \dots , k_T\rangle \). Let \(f_{\varPhi }\ne \star \). Then for \({\varPhi }= T-1\), we have

$$\begin{aligned} \mathsf {count}(r_T)&= |\{ u \mid {\phi }({s_1},{u}) = T\}| - f_T&\{\text {Definition of } f_{\varPhi }\}\\&= 0 + k_T - f_T&\{\text {Definition of } \sigma _1\}\\&= \alpha ^T(\sigma _1,F) + k_T - f_T&\{\text {Definition of } \alpha ^{\varPhi }\text {, Eq.}~(13)\}\\&= \alpha ^{T-1}(\sigma _1, F)&\end{aligned}$$

Now assume that \(\mathsf {count}(r_{{\varPhi }+1}) = \alpha ^{\varPhi }(\sigma _1,F)\) for some \(1< {\varPhi }< T\) with \(f_{\varPhi }\ne \star \). Then

$$\begin{aligned} \mathsf {count}(r_{\varPhi })&= |\{ u \mid {\phi }({s_1},{u}) \geqslant {\varPhi }\}| - \left( \sum _{{\varPsi }= {\varPhi }}^{T} f_{\varPsi }\right)&\{\text {Definition of } f_{\varPhi }\}\\&= \left( \sum _{{\varPsi }= {\varPhi }}^{T}k_{\varPsi }\right) - \left( \sum _{{\varPsi }= {\varPhi }}^{T} f_{\varPsi }\right)&\{\text {Definition of } \sigma _1\}\\&= \left( \sum _{{\varPsi }= {\varPhi }+1}^{T}k_{\varPsi }\right) - \left( \sum _{{\varPsi }= {\varPhi }+1}^{T} f_{\varPsi }\right) + k_{\varPhi }- f_{\varPhi }&\\&= \mathsf {count}(r_{{\varPhi }+1}) + k_{\varPhi }- f_{\varPhi }&\{\text {Definition of }f_{\varPhi }\}\\&= \alpha ^{\varPhi }(\sigma _1, F) + k_{\varPhi }- f_{\varPhi }&\{\text {Ass.}\}\\&= \alpha ^{{\varPhi }-1}(\sigma _1, F)&\{\text {Definition of }\alpha ^{\varPhi }, \text {Eq.}~(13)\} \end{aligned}$$

If \(f_{\varPhi }= \star \), then the equality immediately holds.

Furthermore, observe that for all \({\varPhi }\), if \(f_{\varPhi }= \star \), then we have \(\mathsf {count}(r_{{\varPhi }+1}) = \mathsf {count}(r_1)\). We can now proceed to prove the final two cases.

First, let \({\phi }({s_1},{u}) + {\varDelta }({\phi }({s_1},{u}),\mathsf {count}(r_1), \epsilon ) + 1 \leqslant T \). This implies \(f_{{\phi }({s_1},{u})} = \star \). Then, by the observation and the claim above, we also get

$$\begin{aligned}&\tau (\sigma _1,{\phi }({s_1},{u}) , F) \\&\quad = 1 + \mathrm {ref}({\phi }({ s_1},{u}), {\varDelta }({\phi }({ s_1},{u}), \alpha ^{{\phi }({ s_1},{u})}(\sigma _1, F), \epsilon ))&\{{\phi }({ s_1},{u}) > R\} \\&\quad = 1 + {\phi }({ s_1},{u}) + {\varDelta }({\phi }({ s_1},{u}), \alpha ^{{\phi }({ s_1},{u})}(\sigma _1, F), \epsilon )&\{\text {Eq.}~(30)\}\\&\quad = 1 + {\phi }({ s_1},{u}) + {\varDelta }({\phi }({ s_1},{u}), \mathsf {count}( r_{{\phi }({ s_1},{u})+1}), \epsilon )&\{\text {Obs.}\}\\&\quad ={\phi }({s_1},{u}) + {\varDelta }({\phi }({s_1},{u}),\mathsf {count}(r_1), \epsilon ) + 1&\{\text {Eq.}~(9c)\}\\&\quad = {\phi }({s_2},{u}).&\end{aligned}$$

Finally, let \({\phi }({s_1},{u}) + {\varDelta }({\phi }({s_1},{u}),\mathsf {count}(r_1), \epsilon ) + 1 > T \), i.e., \({\phi }({s_2},{u}) = 1\). Then it must also be the case that \({\phi }({s_1},{u}) + {\varDelta }({\phi }({s_1},{u}),\mathsf {count}(r_{{\phi }({s_1},{u})+1}), \epsilon ) + 1 > T\). By the claim above, this means \({\phi }({s_1},{u}) + {\varDelta }({\phi }({s_1},{u}),\alpha ^{{\phi }({s_1},{u})}(\sigma _1,F), \epsilon ) + 1 > T\). Hence, \(\tau (\sigma _1,{\phi }({ s_1},{u}) , F) = 1\) as well.

Now recall that we assumed initially that for all intermediate states \(s\) of the transition sequence, we have \({\mathsf {mode}}({s},{\mathsf {env}}) = update \). If this is not the case, we can partition the sequence into distinct subsequences, where this assumption holds for each subsequence, and apply the arguments above.

Comparing the probabilities of transition sequences. Let \((s_1, \sigma _1) \in R\) and E be an equivalence class with \({\mathbf {P}}(s_1, E)>0\) and \({\mathbf {P}}(\sigma _1,E)>0\). Furthermore, let \(\sigma _2 \in E\) be the unique state of the population model in E, and \(s_2 \in E\) a state of the concrete model. Let \(\sigma _1 = \langle k_1, \dots , k_T\rangle \) and \(N = \sum _{i=1}^T k_i\). If no oscillator fires in \(\sigma _1\), this is also the case in \(s_1\). Then we have N! possibilities to create a transition sequence starting in \(s_1\), each of which has a probability of \(\frac{1}{N!}\). Hence, the probability that one of these sequences happens is \(N! \cdot \frac{1}{N!} = 1\), which coincides with the definition in the population model.

For the case that at least one oscillator fires and thus perturbs the other oscillators, we consider the construction of a transition sequence from \(s_1\) with respect to a failure vector \(F = \langle f_1, \dots , f_T\rangle \) for \(\sigma _1\). During each phase \({\varPhi }\), we have to choose the particular order of the \(k_{\varPhi }\) oscillators to create a sequence from \(r_0^{{\varPhi }}\) to \(r_0^{{\varPhi }-1}\), and in addition, we have to choose the set \(C_{{\varPhi }}\). That is, we have \(k_{\varPhi }!\) possible orders, and \(\Bigl (\begin{array}{@{}c@{}}k_{\varPhi }\\ f_{\varPhi }\end{array}\Bigr )\) possibilities for the choice of \(C_{{\varPhi }}\). Furthermore, the combined probability for the transitions of the \(k_{\varPhi }\) oscillators that should fire, \(f_{\varPhi }\) of which are inhibited by a broadcast failure, is

$$\begin{aligned} \frac{1}{|\mathsf {init}_{{\varPhi }}(s_1)|!} \cdot (1-\mu )^{k_{\varPhi }- f_{\varPhi }} \cdot \mu ^{f_{\varPhi }} . \end{aligned}$$

Observe that at the start of the construction of each phase, \(|\mathsf {init}_{{\varPhi }}(s_1)| = k_{\varPhi }\). Hence the probability above simplifies to

$$\begin{aligned} \frac{1}{k_{\varPhi }!} \cdot (1-\mu )^{k_{\varPhi }- f_{\varPhi }} \cdot \mu ^{f_{\varPhi }} . \end{aligned}$$

Due to the possible choices during the construction of the transition sequence, we have that the probability of one of these sequences to happen is

$$\begin{aligned} \Bigl (\begin{array}{@{}c@{}}k_{\varPhi }\\ f_{\varPhi }\end{array}\Bigr ) \cdot {k_{\varPhi }}! \cdot \frac{1}{k_{\varPhi }!} \cdot (1-\mu )^{k_{\varPhi }- f_{\varPhi }} \cdot \mu ^{f_{\varPhi }} = \Bigl (\begin{array}{@{}c@{}}k_{\varPhi }\\ f_{\varPhi }\end{array}\Bigr ) \cdot (1-\mu )^{k_{\varPhi }- f_{\varPhi }} \cdot \mu ^{f_{\varPhi }} , \end{aligned}$$

which is exactly the function \(\mathsf {PMF}(k_{\varPhi }, f_{\varPhi })\) as in the population model. Furthermore, with similar reasoning as above, the transition probability for the sequences where no oscillator is perturbed anymore is 1. Hence, \({\mathbf {P}}(s_1,E) = {\mathbf {P}}(\sigma _1,E)\): the sum of the probabilities of the paths from \(s_1\) to the elements of the equivalence class E equals the probability of the transition from \(\sigma _1\) to \(\sigma _2\).
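The binomial form of \(\mathsf {PMF}\) is easy to check directly. A small sketch, with the function name taken from the text and \(\mu \) passed as an explicit argument (our choice):

```python
from math import comb

def pmf(k, f, mu):
    """PMF(k, f): probability that exactly f of k firing oscillators
    suffer a broadcast failure, each independently with probability mu."""
    return comb(k, f) * (1 - mu) ** (k - f) * mu ** f
```

Summing over all \(f\) from 0 to \(k\) yields 1, as required of a probability distribution.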

Finally, consider the transitions from the initial states into the corresponding equivalence classes. As stated in Sect. 5.4, Eq. (17), for each state of the population model \(\sigma = \langle k_1, \ldots , k_T\rangle \), we have

$$\begin{aligned} {\mathbf {P}}(\sigma _I, \sigma ) = \frac{1}{T^N} {{N}\atopwithdelims (){k_1, \ldots , k_T}} , \end{aligned}$$

which by construction equals \({\mathbf {P}}(\sigma _I, \sigma )\cdot |E| = {\mathbf {P}}(\sigma _I, E)\), where E is the equivalence class with \(\sigma \in E\). The multinomial coefficient gives the number of possible assignments of values \(k_i\) to the oscillators. This is exactly the number of states in the set \({\mathcal {S}}_{c}^\prime \) where \(k_i\) oscillators are in phase i and the environment counter is set to 0. Since \(T^N\) is the total number of possible phase assignments, the probability of reaching the equivalence class E from \(s_I\) is also

$$\begin{aligned} {\mathbf {P}}(s_I, E) = \frac{1}{T^N} {{N}\atopwithdelims (){k_1, \ldots , k_T}} , \end{aligned}$$

and R is a weak bisimulation. \(\square \)
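The counting argument in the last step can be checked numerically. A small sketch with our helper names; the enumeration of count vectors is only for illustration:

```python
from math import factorial, prod

def multinomial(n, ks):
    """Multinomial coefficient N! / (k_1! * ... * k_T!)."""
    return factorial(n) // prod(factorial(k) for k in ks)

def initial_prob(ks, T):
    """P(sigma_I, sigma) for sigma = <k_1, ..., k_T>, cf. Eq. (17)."""
    n = sum(ks)
    return multinomial(n, ks) / T ** n

def count_vectors(n, parts):
    """All tuples of non-negative integers of the given length summing to n."""
    if parts == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in count_vectors(n - first, parts - 1):
            yield (first,) + rest
```

Summing `initial_prob` over all count vectors for fixed N and T gives 1, confirming that the initial transitions form a probability distribution.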

By this theorem, we can use population models to analyse the global properties of a network of pulse-coupled oscillators following the concrete model as defined in Sect. 4 without loss of precision. In particular, this allows us to increase the size of the network to check such properties, while still giving us the opportunity to analyse the internal behaviour of nodes, if we restrict the network size.

7 Empirical analysis

In this section we first compare the model checking times for an analysis of the concrete and the population model, followed by an analysis of the impact of the reduction of the population model as defined in Sect. 5.5. Subsequently, we present the results of an empirical analysis of the population model. This evaluation is based on previous work where parametric influence [24] and power consumption [25] were investigated. For all analyses, the perturbation function is a discretisation of the Mirollo and Strogatz model of pulse-coupled oscillator synchronisation [38]. In particular, we set the perturbation function to be \({\varDelta }({\varPhi }, \alpha , \epsilon ) = \left[ {\varPhi } \cdot \alpha \cdot \epsilon \right] \), where \([ \cdot ]\) denotes rounding to the nearest integer; the perturbation induced by the firing of another oscillator increases linearly with the phase of the perturbed oscillator. We use the probabilistic model checker PRISM [34] to formally verify properties of our models.
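For reference, a minimal sketch of this discretised perturbation function together with the induced phase update (the rule \({\varPhi }+ {\varDelta }+ 1\) with reset to 1 beyond T, and the suppression of perturbation during the refractory period, follow the model described in the preceding sections; the function names are ours):

```python
def perturbation(phi, alpha, eps):
    """Delta(phi, alpha, eps) = [phi * alpha * eps], rounded to nearest integer."""
    return int(phi * alpha * eps + 0.5)

def next_phase(phi, alpha, eps, T, R):
    """One phase update: refractory oscillators (phi <= R) ignore the
    perturbation; a phase beyond T means the oscillator fires and resets to 1."""
    delta = 0 if phi <= R else perturbation(phi, alpha, eps)
    nxt = phi + delta + 1
    return 1 if nxt > T else nxt
```

For example, with \(T = 10\), \(R = 1\) and \(\epsilon = 0.1\), an oscillator at phase 9 perceiving two firings is pushed past T and resets to 1.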

7.1 Model specification

Both the concrete models and population models are encoded using the guarded command language of PRISM, a state-based language, based on the Reactive Modules formalism of Alur and Henzinger [4]. Each model consists of a set of modules, which in turn consist of a set of local variables over finitely bounded integers and Booleans and a set of commands that define its behaviour. A command consists of a predicate over the set of local variables for all modules and a set of possible transitions that will occur with some given probability should the predicate hold. A transition is an assignment of values to each of the local variables of the module. The local state space of a module is given by the set of all valuations for its local variables, and the global state space is the product of all local state spaces. For further details we refer the reader to [40].

Concrete model The specification of the concrete model within PRISM is a straightforward implementation. Each individual oscillator, as well as the environment, is defined as a module. All of these modules synchronise only on a single message sync, to ensure that all oscillators have been considered before their phases are updated.

The module for oscillator i contains local variables \( mode _i\) and \( phase _i\) for its mode and phase. Each module specifies transitions for a single oscillator according to Sect. 4.2. All of these transitions are similar in structure in each module, and hence we can employ the template mechanism of PRISM. To that end, we define the behaviour of a single oscillator within the network, and can then replace the names of the local variables suitably. That is, while we still need to explicitly define the dependencies between the oscillators, we do not need to manually write all module specifications. With the exception of the number of oscillators, the parameters of the network (the lengths of the oscillation cycle and the refractory period, the coupling constant and the broadcast failure probability) are also parameters within the input language of PRISM. That is, we can instantiate them and automatically run the model-checking engine on sets of parameter combinations.

Population model A single module is used to specify an instance of the population model. The global state is encoded using T finitely bound integer variables ranging over N discrete values, where each variable records the number of oscillators sharing some phase value in \(1, \ldots , T\). In contrast to the specification of the concrete model, specifying an instance of the population model is not straightforward. For the concrete model the length of the oscillation cycle T could be easily specified as a parameter within the input language of PRISM, and moving between two models with different values for T, but sharing the same value for all other parameters, is as simple as changing the value of the parameter in the specification. For the population model this was not possible, as T variables are needed to count the oscillators with different phases, and hence changing T leads to a change in the number of variables. The changes that arise from the introduction or removal of these new variables propagate throughout the specification, which for large models may consist of many thousands of lines of code. Further complications also arose when encoding the transitions from the initial unconfigured state \(\sigma _I\) to every possible configured state for the oscillators, since a single model may have many thousands of such transitions for larger values of N and T.

To facilitate the analysis of families of parameter-wise different oscillator population models we developed a Python script that allows the user to define ranges for N, T, R, \(\epsilon \) and \(\mu \), for some fixed definitions for the perturbation function \({\varDelta }\). Then, given a list of properties, for each combination of parameters the script generates a specification, checks all the given properties against that specification using PRISM, and writes user specified output (e.g. result, model checking time, etc.) to a file that can be used by statistical analysis tools. The reduction introduced in Sect. 5.5 could not be implemented with the existing high-level specification language of PRISM. It was only possible to specify the individual rewards labelling some transitions, namely those from the initial unconfigured state to each of its possible successors, by introducing additional variables to the model. Therefore, at the time of writing, and to the best of our knowledge, it is not possible to implement a script for the generation of PRISM code for all the reduced models. The results shown in Table 3 were obtained using a prototypical version of the model checker ePMC, formerly known as IscasMC [27], which had been modified to accept a low level representation of a model as input (as a set of states and transitions), where each transition could be individually labelled with a reward.
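To give a flavour of what such a generator has to produce for a given T, the following sketch emits the T counter variable declarations of a population model instance. This is a hypothetical fragment for illustration, not the authors' script; the emitted lines use the bounded-integer declaration syntax of PRISM:

```python
def counter_declarations(T, N):
    """Emit the T PRISM variable declarations k1..kT, each counting the
    oscillators that share one phase value (hypothetical sketch)."""
    return "\n".join(f"k{i} : [0..{N}] init 0;" for i in range(1, T + 1))
```

Because T determines the number of variables, every change to T forces regeneration of the whole specification, which is exactly the difficulty described above.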

7.2 Comparison between concrete and population

In this section, we compare the concrete and population models. We restrict our comparison to the analysis of the probability to achieve synchronisation, excluding the time or power needed to achieve this (cf. Sect. 7.4). Model checking this property against both models yields identical results, as was expected in light of the results of Sect. 6. However, as expected from the definition of the concrete model in Sect. 4, the growth of the state space with the number of oscillators is much more pronounced for the concrete model. Hence, the performance of the model checking procedure differs strongly between the models.

Table 2 shows the model construction and checking times for some exemplary parameter combinations of the models, as reported by PRISM. In the table, the model construction time denotes the time PRISM needs to construct a DTMC representation from the specification. In the concrete model, the bulk of the time is spent in the model checking phase, while the construction is much faster. For the analysis of the population model, however, the situation is reversed: the model construction phase is at least an order of magnitude longer than the model checking phase. As expected, the model checker needs less time for the analysis of the population model, even when the times for model construction and checking are considered together. However, we can also see that for both the concrete and the population model, refractory periods around \(R = \frac{T}{2} \) are harder for the model checker to analyse. This is because for very low and very high refractory periods the synchronisation probability is either 1 or 0, and such qualitative results are trivially and efficiently obtained in a precomputation step by the model checker.

Table 2 Model construction and model checking times for the concrete and population model with \(T=10\), \(\mu = 0.2\) and \(\epsilon = 0.1\) (in seconds)

For properties relating to global behaviour a population model is beneficial. However, it requires all nodes to be behaviourally identical. While we defined our concrete model in a similar fashion, we could in principle relax this restriction. That is, we could allow for different perturbation functions for each node, or for partitions of nodes.

7.3 Reduction analysis

In this section we present the effect of the reduction of population models as defined in Sect. 5.5. Table 3 shows the number of reachable states and transitions of the DTMC D, and corresponding reduction \(D^\prime \), for different population sizes (N) and oscillation cycle lengths (T), using the Mirollo and Strogatz model of synchronisation presented at the start of this section. The number of reachable states is stable under changes to the parameters R, \(\epsilon \), and \(\mu \), since every possible firing state is always reachable from the initial state. For the results shown here the parameters were arbitrarily set to \(R = 1\), \(\epsilon = 0.1\). The underlying graph of the DTMC, and hence the number of transitions, is stable under changes to the parameter \(\mu \), and is not of interest here. Larger values of N and T were not investigated, due to the large model sizes when generating the unreduced model.

Table 3 Reduction in state space and transitions where \(R= 1\) and \(\epsilon = 0.1\) for different population sizes and maximal phase values

Table 4 shows the number of transitions of the DTMC, and corresponding reduction, for various population model instances, and again uses the Mirollo and Strogatz model of synchronisation. Increasing the length of the refractory period (R) results in an increase in the reduction of transitions in the model. A longer refractory period leads to more firing states where the firing of a group of oscillators is ignored. This results in successor states having oscillators with lower values for phase, and hence a longer sequence of deterministic transitions (later removed in the reduction) leading to the next firing state. Conversely, increasing the strength of the coupling between oscillators (\(\epsilon \)) results in a decrease in the reduction of transitions in the model. For the Mirollo and Strogatz model of synchronisation used here, increasing the coupling strength results in a linear increase in the perturbation to phase induced by the firing of an oscillator. This results in successor states of firing states having oscillators with higher values for phase, and hence a shorter sequence of deterministic transitions leading to the next firing state.

Table 4 Reduction in transitions for \(N=5\) and \(T=10\) with different values of R and \(\epsilon \)

7.4 Population model evaluation

In this section we discuss the influence of different parameters of the population model defined in Sect. 5 on both the likelihood that a network of oscillators will eventually synchronise, and the requisite time and power consumption to achieve this. For a real deployment of synchronising nodes, for example a Wireless Sensor Network (WSN), communication is costly with respect to energy consumption. Therefore, minimising power consumption is a critical consideration for their design [2, 44]. Once deployed, a WSN is generally expected to function independently for long periods of time. In particular, regular battery replacement can be costly and impractical for remote sensing applications. Hence, it is important to reduce the power consumption of the individual nodes by choosing low-power hardware and/or energy efficient protocols. However, to make informed choices, it is also necessary to have good estimations of the power consumption for individual nodes. While the general power consumption of the hardware can be extracted from data sheets, estimating the overall power consumption of different protocols is more demanding. Communication between nodes in the network is either active when sending a message, i.e., when a node fires, or passive, when receiving messages from other nodes. Hence, during periods where a sensor does neither, the antenna can be shut down to save energy. In our models, this interval of inactivity corresponds to the refractory period. That is, the longer the refractory period is, the less energy will be consumed.

First, we consider a binary metric where we are only interested in states where all oscillators share precisely the same phase. Second, we derive a synchronisation metric from the complex order parameter of Kuramoto [32], which captures the degree of synchrony of a fully connected network of oscillators as a real value in the interval [0, 1].

Binary synchronisation metric Here we use the binary notion of synchronisation introduced in Sect. 5.3, where a global state \(\sigma = \langle k_1, \ldots , k_T \rangle \) is synchronised if, and only if, there is some \({\varPhi }\in \{1, \ldots , T\}\) such that \(k_{\varPhi }= N\). We created concrete input models for PRISM for different parameters of the model, for example the number of oscillators and different coupling strengths. Each of these models was subsequently checked with respect to different properties. Other case studies could also be considered for alternative models of synchronisation where the dynamics of oscillators, and their interactions, can be described by some perturbation function.
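This binary predicate translates directly into a short Python sketch (the counter representation \(\langle k_1, \ldots , k_T \rangle \) is the one defined in Sect. 5; the function name is ours):

```python
# A global state of the population model is the tuple <k_1, ..., k_T>,
# where k_Phi counts the oscillators currently at phase Phi.  The state
# is synchronised iff all N oscillators share a single phase, i.e. some
# counter equals N.
def synchronised(state, N):
    return any(k == N for k in state)

# For N = 6, T = 6: the states <0, 2, 0, 2, 0, 2> and <0, 0, 0, 0, 1, 5>
# discussed later in this section are both unsynchronised under this metric.
```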

We are interested in the probability of eventual synchronisation and in the expected time needed to achieve synchronisation. The probability of eventual synchronisation is given by the PCTL property

$$\begin{aligned} \mathrm {P}_{=?}\left[ \mathrm {F}\bigvee _{i=1}^T (k_i=N)\right] . \end{aligned}$$

To determine the expected time taken for a population model to synchronise, we accumulate a reward along each path until some synchronised global state is reached. To that end we define a reward structure \({\mathcal {R}}_{\mathrm {\scriptstyle time}}= (R_s, R_t)\), where \(R_s(\sigma ) = \frac{1}{T}\) for every unsynchronised state \(\sigma \in {\varGamma }\setminus \{\sigma _I\}\), \(R_s(\sigma ^\prime ) = 0\) for all other \(\sigma ^\prime \in {\varGamma }\), and \(R_t(\sigma _1, \sigma _2) = 0\) for all \(\sigma _1, \sigma _2 \in {\varGamma }\). Intuitively, we expect a reward of 1 along a path where synchronisation occurs after T transitions (one complete oscillation cycle). This is achieved by assigning a reward of \(\frac{1}{T}\) to each unsynchronised state, since a transition of the model from one state to the next corresponds to a step of \(\frac{1}{T}\) oscillation cycles. In this way we obtain a measure of synchronisation time in oscillation cycles.

The expectation of time to achieve synchronisation is then given by the PCTL property

$$\begin{aligned} \mathrm {R}_{=?}\left[ \mathrm {F}\bigvee _{i=1}^T (k_i=N)\right] , \end{aligned}$$

with respect to the reward structure \({\mathcal {R}}_{\mathrm {\scriptstyle time}}\). We note that accumulating this reward yields a result of \( Infinity \) for any state from which the probability of reaching a synchronised state is less than 1.
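To illustrate what this expected-reward property computes, the following sketch solves the fixed-point equations \(E(s) = r(s) + \sum _{s^\prime } P(s, s^\prime ) E(s^\prime )\) on an invented three-state absorbing chain (the chain, its probabilities, and \(T = 4\) are assumptions for illustration only; PRISM solves the analogous equations on the full population model):

```python
# Expected reward accumulated until an absorbing "synchronised" state is
# reached, computed by value iteration.  Each transition corresponds to a
# step of 1/T oscillation cycles, so the result is measured in cycles.
T = 4                                  # cycle length (assumed)
P = {0: {0: 0.5, 1: 0.4, 2: 0.1},      # states 0 and 1: unsynchronised
     1: {1: 0.6, 2: 0.4},
     2: {2: 1.0}}                      # state 2: synchronised, absorbing
r = {0: 1 / T, 1: 1 / T, 2: 0.0}       # reward 1/T per unsynchronised state

E = {s: 0.0 for s in P}
for _ in range(200):                   # iterate to convergence
    E = {s: r[s] + sum(p * E[t] for t, p in P[s].items()) for s in P}

# E[0] is the expected number of oscillation cycles to synchronise when
# starting from state 0.
```

If some state could not reach the absorbing state with probability 1, this iteration would diverge, matching the \( Infinity \) result noted above.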

We generated models for different numbers of oscillators \(3 \leqslant N \leqslant 8\), cycle lengths \(4 \leqslant T \leqslant 10\), coupling constants \(\epsilon \in \{0, 0.1, \dots , 1.0\}\), refractory periods \(0 \leqslant R \leqslant T\), and message loss probabilities \(\mu \in \{0, 0.1, \dots , 1.0\}\), and analysed the models with respect to the two properties of interest.

Fig. 5
figure 5

Synchronisation probabilities for different refractory periods (a), and synchronisation times for different rates of broadcast failure (b)

Figure 5a plots the probability of synchronisation for different rates of broadcast failure against the refractory period for \(N = 8\), \(T = 10\), and \(\epsilon = 0.1\). We can observe a trade-off between a high refractory period and high synchronisation probability. As long as the refractory period is less than half the oscillation cycle, synchronisation will be achieved in almost all cases. Higher values for R result in a rapid drop in synchronisation probability. The exception is the edge case \(\mu = 0\), which may seem surprising: if \(\mu = 0\) the model is deterministic, so from each initial state the network either synchronises with certainty or not at all. The results for \(\mu = 1\) are omitted here as, unsurprisingly, if all firings result in broadcast failures the synchronisation probability is almost zero. In fact, the only runs that synchronise in this case are runs where the first configured state is already synchronised.

Figure 5b shows us that a higher refractory period results in shorter synchronisation times when the probability for broadcast failure is low. In general, a longer refractory period up to half the cycle length improves the rate of convergence to synchrony, which is consistent with the findings of [17]. Furthermore, for high values of \(\mu \) the differences in synchronisation times for different refractory period lengths are negligible. Hence, a refractory period of slightly less than half the cycle, with a low coupling constant \(\epsilon \), is optimal for this model of synchronisation. As \(\epsilon \) is increased the results remain similar, but with a decrease in synchronisation times.

Order parameter synchronisation metric In the previous section a binary metric of synchrony for a population model was employed, where a state was synchronised if, and only if, all oscillators in that state shared the same phase. However, it is clear that some global states appear to be closer to achieving a truly synchronised state than others. Consider the global states \(\sigma _1 = \langle 0, 2, 0, 2, 0, 2 \rangle \) and \(\sigma _2 = \langle 0, 0, 0, 0, 1, 5 \rangle \) of some population model for a network of \(N = 6\) nodes with an oscillation cycle over \(T = 6\) discrete values. Using the binary notion of synchrony all that is known is that both states are not synchronised, yet it is clear that for nearly all models of synchronisation, encoded as some perturbation function, \(\sigma _2\) appears to be closer to converging to a state where all oscillators share the same phase.

The binary notion of synchrony can be extended by introducing a phase coherence metric for the level of synchrony of a global state. The metric is derived from the order parameter introduced by Kuramoto [32] as a measure of synchrony for a population of coupled oscillators. If the phases of the oscillators are considered as positions on the unit circle in the complex plane, they can be represented as complex numbers with magnitude 1.

Definition 21

The function \(\phi ^{\mathbb {C}}: [1 \ldots T] \rightarrow {\mathbb {C}}\) maps a phase value to its corresponding position on the unit circle in the complex plane, and is defined as \(\phi ^{\mathbb {C}}({\varPhi }) = e^{i \theta _{\varPhi }}\), where \(\theta _{\varPhi }= \frac{2 \pi }{T}({\varPhi }- 1)\).
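Definition 21 translates directly into a few lines of Python (a sketch; the function name is ours):

```python
import cmath

# Maps a phase value Phi in {1, ..., T} to its position on the unit
# circle in the complex plane: e^{i * theta_Phi} with
# theta_Phi = (2 * pi / T) * (Phi - 1).
def phase_position(Phi, T):
    theta = 2 * cmath.pi * (Phi - 1) / T
    return cmath.exp(1j * theta)

# Phase 1 maps to the point 1 + 0i; for T = 10, phase 6 maps to -1 + 0i.
```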

Fig. 6
figure 6

Argand diagram of the phase positions for the global state \(\langle 0, 0, 0, 0, 0, 2, 1, 0, 0, 5 \rangle \)

A measure of synchrony \(\eta \in [0, 1]\) can then be obtained by calculating the magnitude of the complex number corresponding to the mean of the phase positions. A global state has a maximal value of \(\eta = 1\) when all oscillators are synchronised and share the same phase \({\varPhi }\), mapped to the position defined by \(\phi ^{\mathbb {C}}({\varPhi })\). It then follows that the mean position is also \(\phi ^{\mathbb {C}}({\varPhi })\) and \(|\phi ^{\mathbb {C}}({\varPhi })| = 1\). A global state has a minimal value of \(\eta = 0\) when all of the positions mapped to the phases of the oscillators are uniformly distributed around the unit circle. This also occurs when their positions achieve mutual counterpoise, for example when \(\frac{N}{2}\) oscillators share some phase value \({\varPhi }\) and the remaining \(\frac{N}{2}\) oscillators have a phase value whose position on the complex plane is the negation of \(\phi ^{\mathbb {C}}({\varPhi })\).

Definition 22

The phase coherence function \(\mathsf {PCF}: {\varGamma }\rightarrow [0, 1]\) maps a global state to a real value in the interval [0, 1], and is given by

$$\begin{aligned} \mathsf {PCF}(\langle k_1, \ldots , k_T \rangle ) = \left| \frac{1}{N} \sum _{{\varPhi }= 1}^{T} k_{\varPhi }\phi ^{\mathbb {C}}({\varPhi }) \right| , \end{aligned}$$

where \(|\cdot |\) denotes the complex modulus.

Example 4

Figure 6 shows a plot on the complex plane of the positions of the phases for some global state \(\sigma = \langle 0, 0, 0, 0, 0, 2, 1, 0, 0, 5 \rangle \) of a population model where \(N = 8\), \(T = 10\). The phase positions are given by \(\phi ^{\mathbb {C}}(6) = e^{i \pi }\) for 2 oscillators with phase 6, \(\phi ^{\mathbb {C}}(7) = e^{\frac{6 i \pi }{5}}\) for 1 oscillator with phase 7, and \(\phi ^{\mathbb {C}}(10) = e^{\frac{9 i \pi }{5}}\) for 5 oscillators with phase 10. The phase coherence can then be determined as

$$\begin{aligned} \mathsf {PCF}(\sigma ) = \left|\frac{1}{8} \left( 2 e^{i \pi } + e^{\frac{6i \pi }{5}} + 5 e^{\frac{9 i \pi }{5}}\right) \right|= 0.4671. \end{aligned}$$

The mean phase position is indicated on the diagram by \({\overline{{\varPhi }}}\).
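The phase coherence function translates directly into Python. The sketch below (function name ours) reproduces the value from Example 4 as well as the extremal cases discussed above, fully synchronised states and states in mutual counterpoise:

```python
import cmath

# Definition 22: a global state <k_1, ..., k_T> is mapped to the modulus
# of the mean of its oscillators' phase positions on the unit circle.
def pcf(state):
    N = sum(state)          # number of oscillators
    T = len(state)          # number of discrete phase values
    mean = sum(k * cmath.exp(2j * cmath.pi * (Phi - 1) / T)
               for Phi, k in enumerate(state, 1)) / N
    return abs(mean)

# Example 4 (N = 8, T = 10): pcf([0,0,0,0,0,2,1,0,0,5]) is approx. 0.4671.
```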

The two new properties of interest are firstly, the time taken for the network to reach a state where some desirable degree of synchronisation with respect to the new metric has been achieved, and secondly, the power consumed by the network to reach that state. In addition to the expected time/power consumption we will also investigate the maximal time/power consumption.

We now define a reward structure \({\mathcal {R}}_{\mathrm {\scriptstyle pow}}= (R_s, R_t)\) that annotates a model with rewards corresponding to power consumption. Firstly, let \(I_{id}\), \(I_{rx}\), and \(I_{tx}\) be the current draw in amperes for the idle, receive, and transmit modes of a synchronising node in a network, V be the voltage, C be the length of the oscillation cycle in seconds, and \(M_t\) be the time taken to transmit a synchronisation message in seconds. We now define

$$\begin{aligned} W_{id}= \frac{I_{id}V C}{3600 T}, \qquad W_{rx}= \frac{I_{rx}V C}{3600 T}, \qquad W_{tx}= \frac{I_{tx}V M_t}{3600}, \end{aligned}$$

where \(W_{id}\) is the power consumption in Watt-hours of one node for one discrete step within its refractory period (the oscillator is in the idle mode), \(W_{rx}\) is the power consumption in Watt-hours of one node for one discrete step outside of its refractory period (in receive mode), and \(W_{tx}\) is the power consumption in Watt-hours to transmit one synchronisation message. The power consumption of the network consists of the power necessary to transmit the synchronisation messages, and that of the oscillators in the idle and receive modes. For synchronisation messages, we consider each firing state \(\sigma \), and assign a reward of \(R_t(\sigma , \sigma ^\prime ) = k_1 W_{tx}\) to every transition from \(\sigma \) to a successor state \(\sigma ^{\prime } = \langle k_1, \ldots , k_T \rangle \). This corresponds to the total power consumption for the transmission of \(k_1\) synchronisation messages. For each state \(\sigma = \langle k_1, \ldots , k_T \rangle \) where the oscillators are configured, the total power consumption for oscillators in the idle and receive modes is

$$\begin{aligned} \rho _{\mathcal {S}}(\sigma )=\sum _{{\varPhi }= 1}^{R} k_{{\varPhi }} W_{id}+ \sum _{{\varPhi }= R + 1}^{T} k_{{\varPhi }} W_{rx}. \end{aligned}$$
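These quantities can be sketched numerically. In the sketch below, the current draws are the MICAz figures used later in this section, while the cycle length C and message transmission time \(M_t\) are illustrative assumptions:

```python
# Per-step energy quantities in Watt-hours.  The current draws are the
# MICAz datasheet figures used in this section; C and M_t are assumed.
I_id, I_rx, I_tx = 20e-6, 19.7e-3, 17.4e-3   # current draw in amperes
V = 3.0                                       # supply voltage in volts
T = 10                                        # discrete steps per cycle
C = 1.0                                       # assumed cycle length, seconds
M_t = 0.001                                   # assumed message time, seconds

W_id = I_id * V * C / (3600 * T)   # one idle step (refractory period)
W_rx = I_rx * V * C / (3600 * T)   # one receive step (listening)
W_tx = I_tx * V * M_t / 3600       # one transmitted synchronisation message

def rho(state, R):
    """Idle/receive consumption of a configured global state <k_1, ..., k_T>:
    phases 1..R lie inside the refractory period (idle), the rest listen."""
    return sum(k * (W_id if Phi <= R else W_rx)
               for Phi, k in enumerate(state, 1))
```

Since the receive mode draws roughly three orders of magnitude more current than the idle mode, lengthening the refractory period directly reduces the state reward \(\rho _{\mathcal {S}}\), which is the trade-off explored in the experiments below.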

For our experiments we set \(\epsilon = 0.1\) and \(\mu = 0.2\). We could have conducted analyses for different values for these parameters. For a real system, the probability \(\mu \) of broadcast failure occurrence is highly dependent on the deployment environment. For deployments in benign environments we would expect a relatively low rate of failure, for instance a WSN within city limits under controlled conditions, whilst a comparably high rate of failure would be expected in harsh environments such as a network of off-shore sensors below sea level. The coupling constant \(\epsilon \) is a parameter of the system itself. Our results suggest that higher values for \(\epsilon \) are always beneficial, however this is because we only model fully connected networks. High values for \(\epsilon \) may be detrimental when considering different topologies, since firing nodes may perturb synchronised subcomponents of a network. However we defer such an analysis to future work.

As an example we analyse the power consumption for values taken from the datasheet of the MICAz mote [37]. For the transmit, receive and idle modes, we assume \(I_{tx}= 17.4\ mA\), \(I_{rx}= 19.7\ mA\), and \( I_{id}= 20\ \mu A\), respectively. Furthermore, we assume that the oscillators use a voltage of \(3.0\ V\). We define \(\mathsf {coherent}_\lambda \) to be a predicate that holds for any state \(\sigma \) in \({\varGamma }\setminus \{\sigma _I\}\) with \(\mathsf {PCF}(\sigma ) \geqslant \lambda \). The properties of interest are then given by the PCTL property

$$\begin{aligned} \mathrm {R}_{=?}[\mathrm {F}\ \mathsf {coherent}_\lambda ] \end{aligned}$$
(31)
Fig. 7
figure 7

Power (a) and time (b) per node to achieve synchronisation

Figure 7a, b show both the average and maximal power consumption per node (in mWh) and time (in cycles) needed to synchronise, in relation to the phase coherence of the network with respect to different lengths of the refractory period, where \(\epsilon = 0.1\) and \(\mu = 0.2\). That is, they show how much power is consumed (resp. how much time is needed) for a system in an arbitrary state to reach a state where some degree of phase coherence has been achieved.

The average and maximal values are obtained using the avg and max filters of the PRISM model checker. These filters give the average (maximum, resp.) expected reward across all paths starting in states that satisfy some given predicate. The desired values are therefore obtained by checking property (31) against all states of the model where oscillators are configured, i.e., all \(\sigma \in {\varGamma }\setminus \{\sigma _I\}\). For both cases the total values obtained were divided by the number of nodes in the network.

Fig. 8
figure 8

Power consumption in relation to broadcast failure probability

The much larger values obtained for \(R=1\) and phase coherence \(\geqslant 0.9\) are not shown here, to avoid distortion of the figures. The energy consumption for these values is roughly 2.4 mWh, while the time needed is around 19 cycles. Observe that we only show values for the refractory period R with \(R < \frac{T}{2}\). For larger values of R not all runs synchronise [24], resulting in an infinitely large reward being accumulated for both the maximal and average cases.

As expected when starting from an arbitrary state, the time and power consumption increase monotonically with the degree of synchrony to be achieved. On average, networks with longer refractory periods require less power for synchronisation, and take less time to achieve it. The only exception is that the average time to achieve synchrony with a refractory period of four is higher than for two and three; however, if a lower phase coherence is sufficient, the general trend holds. In contrast, the maximal power consumption of networks with \(R=4\) is consistently higher than that of networks with \(R=3\). In addition, the maximal time needed to achieve synchrony for networks with \(R=4\) is higher than for lower refractory periods, except when the phase coherence is greater than or equal to 0.9. We find that networks with a refractory period of three need the smallest amount of time to synchronise, regardless of whether we consider the maximal or average values. Furthermore, the average power consumption for full synchronisation (phase coherence one) differs only slightly between \(R=3\) and \(R=4\) (less than 0.3 mWh). Hence, for the given example, \(R=3\) gives the best results. These relationships are stable even for different broadcast failure probabilities \(\mu \), while the concrete values increase only slightly, as illustrated in Fig. 8, which shows the power consumption for different values of \(\mu \) when \(\epsilon = 0.1\).

Fig. 9
figure 9

Expected (a) and maximal (b) power consumption vs. time taken to achieve synchrony

The general relationship between power consumption and time needed to synchronise is shown in Fig. 9a, b. Within these figures, we do not distinguish between different coupling constants and broadcast failure probabilities. We omit the two values for \(R=1\), \(\epsilon = 0.1\) and \(\mu \in \{0.1, 0.2\}\) in Fig. 9b to avoid distortion of the graph, since the low coupling strength and low probability of broadcast failure lead to longer synchronisation times and hence higher power consumption. While this might seem surprising, it has been shown that uncertainty in discrete systems often aids convergence [22].

The relationship between power consumption and time to synchronise is linear, and the slope of the relation decreases for higher refractory periods. While the linearity is almost perfect for the average values, the maximal values have larger variation. The figures again suggest that \(R=3\) is a sensible and reliable choice, since it provides the best stability of power consumption and time to synchronise. In particular, if the broadcast failure probability changes, the variations are less severe for \(R=3\) than for other refractory periods.

8 Conclusion

In this paper we have introduced a formal concrete model for a network of nodes synchronising their clocks over a set of discrete values. Furthermore, we developed a population model that can alleviate state-space explosion when reasoning about significantly larger networks. We encoded both models as discrete-time Markov chains, and evaluated them for several parameter combinations. Finally, we formally connected the models by showing that a concrete model of a network and a population model of the same network are probabilistically weakly bisimilar.

Formalising the individual nodes of a network allows for the analysis of their internal properties. Even though we did not give explicit definitions, a concrete network could be instantiated to incorporate different topologies by explicitly encoding possible perturbations in the nodes’ transitions. However, the internal structure also complicates the verification of global network properties. Modelling the whole network as the product of the models for the individual nodes quickly, and unsurprisingly, results in a model that is too large to analyse with existing tools and techniques. While the use of appropriate collective abstractions, such as population models, allows for the analysis of larger networks, they often impose restrictions on the topologies of the network that can be considered. We could, of course, simply take the product of individual population models to represent network structures more specialised than the fully connected graphs considered here, but again we face the consequences of this approach when trying to analyse the resulting model, in particular the explosion of the state space. Furthermore, this also means that every node of a population model may influence all nodes of a connected population model. Finally, our abstraction relation would need to take the mapping of single nodes into different components into account. When using population models we lose the possibility to distinguish between nodes having the same internal state. However, this does not restrict our analysis when considering networks of homogeneous nodes where the properties of interest relate to global behaviours of the network itself.

Our current definition of pulse-coupled oscillators only allows for non-negative results of the phase response function. However, there are also oscillator definitions with phase response functions with possibly negative values [48]. That is, instead of shifting the state of an oscillator towards the end of the cycle, the perturbation may reduce the value of the oscillator’s state. It would be interesting to study the impact of negative-valued phase response functions in the setting of discrete clock values.