1 Introduction

System verification requires comparing a system’s behavior against a specification. When the system is built from several components, we can distinguish between local and global specifications. A local specification applies to a single component, whereas a global specification should hold for the entire system. Since these two kinds of specifications are used to reason at different levels of abstraction, both kinds are often needed.

Ideally, one could pass freely from local to global specifications and vice versa. Most specification formalisms natively support composition, and there are well-studied operators for composing specifications, e.g., logical conjunction, set intersection, and the synchronous product of automata. Unfortunately, the same does not hold for specification decomposition: obtaining local specifications from a global one is, in general, much more difficult.

Over the past decades, many research communities have independently investigated decomposition methods, each focusing on the specification formalisms and assumptions appropriate for their application context. In particular, important results were obtained in the fields of control theory and formal verification.

In control theory, natural projection [40] is exploited to simplify systems built from multiple components, modeled as automata. Natural projection is often applied component-wise to solve the controller synthesis problem, i.e., for synthesizing local controllers from a global specification of an asynchronous discrete-event system [11]. In this way, by interacting only with a single component of a system, local controllers guarantee that the global specification is never violated. By composing local controllers in parallel with other sub-systems, it is possible to implement distributed control systems [41, 42].

Table 1 Summary of existing results on natural projection and partial model checking for finite-state Labeled Transition Systems

The formal verification community proposed partial model checking [1] as a technique to mitigate the state explosion problem arising when verifying large systems composed from many parallel processes. Partial model checking tackles this problem by decomposing a specification, given as a formula of the \(\mu \)-calculus [27], using a quotienting operator, and thereby supporting the analysis of the individual processes independently. Quotienting carries out a partial evaluation of a specification while preserving the model checking problem. Thus for instance, a system built from two modules satisfies a specification if and only if one of the modules satisfies the specification after quotienting against the other [1]. The use of quotienting may reduce the problem size, resulting in smaller models and hence faster verification.

Table 1 summarizes some relevant results about the two approaches for finite-state Labeled Transition Systems; for more details, we refer the reader to Sect. 6. Since natural projection and partial model checking apply to different formalisms, they cannot be directly compared without defining a common framework. For example, a relevant question is to compare how specifications grow under the two approaches. Although it is known that both may lead to exponential growth (see [26, 39] and [3]), these results apply in one case to finite-state automata (FSAs) and in the other case to \(\mu \)-calculus formulae.

Although decomposition work has been carried out in different communities, there have also been proposals for the cross-fertilization of ideas and methods [17]. For instance, methods for synthesizing controllers using partial model checking are given in [7, 31]. The authors of [20] and [22] propose similar techniques, using fragments of the \(\mu \)-calculus and CTL\(^*\), respectively.

One of our starting points was suggested by Ehlers et al. [17], who advocate establishing formal connections between these two approaches. In their words:

Such a formal bridge should be a source of inspiration for new lines of investigation that will leverage the power of the synthesis techniques that have been developed in these two areas. [...] It would be worthwhile to develop case studies that would allow a detailed comparison of these two frameworks in terms of plant and specification modeling, computational complexity of synthesis, and implementation of derived supervisor/controller.

We address the first remark about a formal bridge by showing that, under reasonable assumptions, natural projection reduces to partial model checking and, when cast in a common setting, they are equivalent. To this end, we start by defining a common theoretical framework for both. In particular, we slightly extend both the notion of natural projection and the semantics of the \(\mu \)-calculus in terms of the satisfying traces. These extensions allow us to apply natural projection to the language denoted by a specification. In addition, we extend the main property of the quotienting operator by showing that it corresponds to the natural projection of the language denoted by the specification, and vice versa (Theorem 3.2).

We also provide additional results that contribute to the detailed comparison, referred to in the second remark. In particular, we propose a new algorithm for partial model checking that operates directly on Labeled Transition Systems (LTS), rather than on the \(\mu \)-calculus. We prove that our algorithm is correct with respect to the traditional quotienting rules and we show that it runs in polynomial time, like the algorithms based on natural projection.

A preliminary version of the above results was presented in [13]; here we systematize them and provide full proofs. In this paper we additionally lift these results to symbolic Labeled Transition Systems (s-LTSs), a slight generalization of symbolic FSAs [15], which themselves substantially generalize traditional FSAs. Roughly speaking, the transitions of an s-LTS carry predicates rather than letters, as LTSs do, and can thus handle rich, non-finite alphabets. In particular, the alphabet of an s-LTS is the carrier of an effective boolean algebra, thereby maintaining the operational flavor of transition systems. In the next section, we give an example of a concurrent program running on a GPU that shows the added expressive power of specifications rendered as s-LTSs.

Our lifting of results proceeds in several steps. First, we define the notion of symbolic traces, composed of transitions labeled by predicates, and we show their relationship to the more standard traces labeled by the elements of a given finite alphabet. More significantly, we define the symbolic synchronous composition of s-LTSs, which is crucial for composing these richer system specifications. We then introduce novel symbolic versions of partial model checking and of natural projection. Also, for the symbolic case, we prove a theorem (Theorem 5.2) that extends the statement of Theorem 3.2 to s-LTSs, i.e., that establishes the correspondence between partial model checking and natural projection for s-LTSs. Finally, we define a new algorithm for symbolic partial model checking directly on s-LTSs, and we prove it correct with respect to the symbolic quotienting operator. As expected, our algorithm's time complexity is exponential, due to the need to check the satisfiability of the predicates labeling the symbolic transitions.

We have implemented our algorithm for partial model checking on Labeled Transition Systems in the tool available online [14]. Along with the tool, we developed several case studies illustrating its application to the synthesis of both submodules and local controllers. The implementation of the algorithm for s-LTS is still under development.

Structure of the paper We start by presenting a motivating example in Sect. 2. Section 3 presents our unified theoretical framework for natural projection and partial model checking, as well as its formal properties. In Sect. 4 we present the quotienting algorithm, discuss its properties, and apply it to our running example. We extend our framework to symbolic transition systems in Sect. 5. Section 5.4 presents our novel symbolic quotienting algorithm. In Sect. 6 we briefly survey the related literature, and in Sect. 7 we draw conclusions. The “Technical Appendix” contains all the formal proofs, together with the correctness and complexity analysis of our algorithms. Finally, all the additional material about (i) the implementation of the algorithms, (ii) tool usage and (iii) replication of the experiments is available at https://github.com/gabriele-costa/pests.

2 A Running Example: A GPU Kernel

In this section we introduce a simple yet realistic example that we use as a running example throughout the paper. The example illustrates a system made of two concurrent components, together with a global specification consisting of two properties, intuitively presented below. We will show how the global specification is decomposed by partially evaluating it against one of the components. Then, we model check the resulting local specification against the other component, thereby verifying the original global specification. The first of the two properties is expressed through an LTS and discussed in Sect. 4. For the second we take advantage of the richer expressive power of s-LTSs to reason about both data and control. In Sect. 5 we show how this enables a fine-grained analysis of the system behavior.

We consider a concurrent program (called a kernel) running on a Graphical Processing Unit (GPU). The program implements a producer-consumer schema relying on a circular queue. The program is written in OpenCL, a C-like language for programming GPUs. A sequential application P embeds an OpenCL kernel and uses it to accelerate some computations. In practice, P compiles the kernel at run time, loads it into the GPU memory, and launches its execution, which is carried out by a group of threads running concurrently on the different GPU cores. During the execution, each thread is bound to an identifier, called the local id, and threads share a portion of the GPU memory, called local memory. A group of threads can synchronize through a barrier. Intuitively, a barrier is an operator that blocks the execution of each thread at a particular point. When all the threads reach the same barrier, their execution is resumed.

Consider the OpenCL kernel of Fig. 1, which implements a simple producer-consumer schema. Briefly, one instance of the kernel function manager is executed on each core of a GPU. Here, for simplicity, we assume that only two cores exist. A manager kernel iteratively invokes one of two functions, produce and consume, depending on the thread identifier (either 0 or 1) returned by get_local_id(0). Hence, the manager kernel forces each thread to assume one of the two roles, either producer or consumer. The two functions use the local memory to share a vector, called buffer, which implements a circular queue. The queue has eight slots: a new item (i.e., a four-byte integer) is inserted (by the producer) at the position indicated by L[1] and removed (by the consumer) from the position indicated by L[0]. In practice, the first two bytes of L contain the head and tail pointers of the circular queue. Thus, they are incremented after each enqueue/dequeue operation and reset to 0 when they exceed the buffer limit. The two threads iterate until both the producer and the consumer have processed exactly *N items.
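To make this bookkeeping concrete, the following is a minimal Python sketch of the enqueue/dequeue index updates just described; the names SLOTS, buffer and L are ours and do not transcribe the exact code of Fig. 1.

    SLOTS = 8  # number of queue slots, as described above

    def enqueue(buffer, L, item):
        """Producer side: write at the tail index stored in L[1], then advance it."""
        buffer[L[1]] = item
        L[1] = (L[1] + 1) % SLOTS   # reset to 0 once the buffer limit is exceeded

    def dequeue(buffer, L):
        """Consumer side: read at the head index stored in L[0], then advance it."""
        item = buffer[L[0]]
        L[0] = (L[0] + 1) % SLOTS
        return item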

Fig. 1 A fragment of OpenCL

The code of Fig. 1 suffers from several typical flaws. The first flaw concerns the buffer’s consistency. Provided that the buffer’s size is at least 8, the two threads cannot cause a buffer overflow. Nevertheless, there is no guarantee that enqueue (line 15) and dequeue (line 5) always occur in the right order. In fact, since the two threads run in parallel with no priority constraints, two unsafe configurations may be reached: (i) the consumer attempts to extract an element from the empty buffer and (ii) the producer attempts to insert an element into a full buffer.

The second potential flaw is a data race. Data races occur when two threads simultaneously access the same shared memory location and at least one of them modifies the data. When both threads access the same memory in write mode, it is called a write-write data race. Otherwise, we have a write-read data race. The two threads of Fig. 1 handle three pointers to the shared memory space, i.e., L, buffer, and N (line 22). These variables are identified by the local modifier. No data races can occur on N as it is never modified. A write-read data race on buffer happens when the producer and the consumer access the same location. Notice that this happens under conditions similar to those discussed for the buffer consistency, e.g., enqueue and dequeue are not executed in the right order. The case of L is more subtle. Both produce() and consume() modify the four-byte variable L (of type int). However, the two functions operate on different bytes, i.e., L[0] and L[1]. The single-byte granularity is achieved through a cast to type char * (lines 4 and 14). Hence, no data race actually affects L.

Verifying the correctness of GPU kernels in general, and of producer-consumer schemas in particular, is an active research field. Static analysis techniques such as [9] and [36] aim at validating a kernel against some specific property, such as the absence of data races. The tools based on these techniques support developers by identifying potentially dangerous code. Still, the developer must manually confirm these alerts, since static analysis commonly considers an over-approximation of the program's actual behavior. For instance, GPUVerify [9], a prominent static verification tool, reports a possible write-read data race on L when applied to the kernel of Fig. 1 (see the “Technical Appendix”). As we will see in Sect. 5, we avoid this false positive through our symbolic algorithm.

Systems are usually composed of several modules, in our example the consumer and the producer. Verifying that the system as a whole complies with a specification requires checking that it satisfies a global specification. If the check fails, often there is no indication of which module is not compliant, and thus one must rethink the entire implementation. Instead, through decomposition, one can specialize the specification to the individual modules, thereby possibly easing the verification of the whole system. In addition, given a global specification and a system missing some components, one can synthesize the specifications for the missing parts. For instance, as we will show in Example 3, the program in Fig. 1 suffers from a buffer inconsistency flaw. Given a model of the producer, in Sect. 4.2 we decompose a buffer consistency specification into a partial one that the consumer must obey to avoid this misbehavior.

3 A General Framework

In this section we cast both natural projection and partial model checking in the common framework of Labeled Transition Systems.

3.1 Language Semantics Versus State Semantics

Natural projection is commonly defined over (sets of) words [40]. Words are finite sequences of actions, i.e., symbols labeling the transitions between the states of a finite-state automaton (FSA). The language of an FSA is the set of all words that label a sequence of transitions from an initial state to some distinguished state, like a final or marking state. We let \(\mathcal {L}\) denote the function that maps each FSA to the corresponding language semantics. Given a system Sys and a specification Spec, both FSAs, then Sys is said to satisfy Spec whenever \(\mathcal {L}(Sys) \subseteq \mathcal {L}(Spec)\).

Rather than an FSA, here we use a labeled transition system (LTS) to specify a system Sys. An LTS is similar to an FSA, but with a weaker notion of acceptance, where all states are final. We specify our running example below as an LTS.

Example 1

(Running example) Consider again the OpenCL program from Sect. 2 where the buffer positions are fixed to 8. Figure 2 depicts a transition system that encodes the specification P for the buffer’s consistency, where the symbols e and d represent the (generic) enqueue and dequeue operations, respectively. Intuitively, the threads cannot perform e actions when the buffer is full (state \(p_8\)) and d actions when the buffer is empty (state \(p_0\)). Barrier synchronizations do not affect the specification’s state. We indicate these actions with self-loops labeled with b. Only the three operations mentioned above are relevant for the specification P. Thus, we do not introduce further action labels. \(\square \)

For partial model checking, the specification Spec is defined by a formula of the \(\mu \)-calculus. The standard interpretation of the formulas is given by a state semantics, i.e., a function that, given an LTS (for a system) Sys and a formula \(\varPhi \), returns the set of states of Sys that satisfy \(\varPhi \). A set of evaluation rules formalizes whether a state satisfies a formula or not. Given an LTS Sys and a \(\mu \)-calculus formula \(\varPhi \), we say that Sys satisfies \(\varPhi \) whenever its initial state does.

Fig. 2 The specification P of the consistency of a buffer with 8 positions, namely P(8)

The language semantics of temporal logics is strictly less expressive than the state-based one [21]. A similar fact holds for FSAs and regular expressions [6]. Below we use a semantics from which both the state-based and the language semantics can be obtained.

3.2 Operational Model and Natural Projection

We now slightly generalize the existing approaches based on partial model checking and on supervisory control theory used for locally verifying global properties of discrete event systems. We then constructively prove that the two approaches are equally expressive so that techniques from one can be transferred to the other. To this end, we consider models expressed as (finite) labeled transition systems, which describe the behavior of discrete systems. In particular, we restrict ourselves here to deterministic transition systems.

Definition 3.1

A (deterministic) labeled transition system (LTS) is a tuple \(A = (S_A, \varSigma _A, \rightarrow _A, \imath _A)\), where \(S_A\) is a finite set of states (with \(\imath _A\) the initial state), \(\varSigma _A\) is a finite set of action labels, and \(\rightarrow _A : S_A \times \varSigma _A \rightarrow S_A\) is the transition function. We write \(t = s \xrightarrow {a} s'\) to denote a transition, whenever \(\rightarrow _A(s,a) = s'\), and we call s the source state, a the action label, and \(s'\) the destination state.

A trace \(\sigma \in {\mathcal {T}}\) of an LTS A is either a single state s or a finite sequence of transitions \(t_1 \cdot t_2 \cdot \dots \) such that for each \(t_i\), its destination is the source of \(t_{i+1}\) (if any). When unnecessary, we omit the source of \(t_{i+1}\), and write a trace simply as the sequence \(\sigma = s_0 \varvec{a}_{\varvec{1}} s_1 \varvec{a}_{\varvec{2}} s_2 \ldots \varvec{a}_{\varvec{n}} s_n\), alternating elements of \(S_A\) and \(\varSigma _A\) (written in boldface for readability). Finally, we denote by \(\llbracket {A,s}\rrbracket _{}^{}\) the set of traces of A starting from state s and we write \(\llbracket {A}\rrbracket _{}^{}\) for \(\llbracket {A,\imath _A}\rrbracket _{}^{}\), i.e., for those traces starting from the initial state \(\imath _A\). \(\square \)
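For concreteness, the following Python sketch renders Definition 3.1 as a small data structure with bounded trace enumeration. The encodings of A and B at the bottom contain only the transitions made explicit in Examples 2–4; they are illustrative assumptions, not a complete transcription of Fig. 3.

    class LTS:
        """A deterministic labeled transition system (Definition 3.1, sketched)."""

        def __init__(self, states, alphabet, delta, init):
            self.states = set(states)        # S_A
            self.alphabet = set(alphabet)    # Sigma_A
            self.delta = dict(delta)         # transition function: (state, action) -> state
            self.init = init                 # initial state

        def traces(self, s=None, max_steps=3):
            """Traces from s (default: the initial state), up to a bounded number of
            transitions; a trace alternates states and actions, e.g. ['q0','b','q1']."""
            s = self.init if s is None else s
            result, frontier = [[s]], [[s]]
            for _ in range(max_steps):
                frontier = [tr + [a, self.delta[(tr[-1], a)]]
                            for tr in frontier
                            for a in self.alphabet if (tr[-1], a) in self.delta]
                result.extend(frontier)
            return result

    # Illustrative (partial) encodings of the consumer A and producer B of Example 2.
    A = LTS({"q0", "q1"}, {"b", "d"}, {("q0", "b"): "q1", ("q1", "d"): "q0"}, "q0")
    B = LTS({"r0", "r1"}, {"b", "e"}, {("r0", "b"): "r1", ("r1", "e"): "r0"}, "r0")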

Example 2

Consider again our running example. Figure 3 depicts the LTSs A and B that model the behavior of the consumer and producer, respectively. On the left-hand side we show the control flow graph (CFG) of the consumer thread where we use a light grey font for the irrelevant instructions. Intuitively, the CFG consists of a loop iterating the execution of the central block. For this reason, the LTS A alternates actions b (for barrier) and d (for dequeue). The CFG of the producer is similar: the only difference is that it increments the tail pointer, rather than the head pointer. Hence, B is symmetric: it performs e (for enqueue) in place of d. The traces starting from the initial states of A and B are, respectively,

\(\square \)

Fig. 3 From left to right: CFG of the consumer, and LTSs for the consumer (A) and producer (B)

Typically, a system, or plant in control theory, consists of multiple interacting components running in parallel. Intuitively, when two LTSs are put in parallel, each proceeds asynchronously, except on those actions they share, upon which they synchronize. We render this behavior by means of the synchronous product [4]. In particular, we rephrase the definition given in [40].

Definition 3.2

Given two LTSs A and B such that \(\varSigma _A \cap \varSigma _B = \varGamma \), the synchronous product of A and B is \(A \parallel B = (S_A \times S_B, \varSigma _A \cup \varSigma _B, \rightarrow _{A \parallel B}, \langle {\imath _A},{\imath _B}\rangle )\), where \(\rightarrow _{A \parallel B}\) is as follows:

\(\square \)
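A possible reading of Definition 3.2 in code, assuming the standard interpretation recalled above (shared actions in \(\varGamma \) synchronize, all other actions interleave); the dictionary-based encoding is our own, not the paper's notation.

    def sync_product(delta_A, delta_B, sigma_A, sigma_B, init_A, init_B):
        """Synchronous product of two deterministic LTSs given as transition
        dictionaries (state, action) -> state; returns the reachable part."""
        gamma = sigma_A & sigma_B
        delta, seen, frontier = {}, {(init_A, init_B)}, [(init_A, init_B)]
        while frontier:
            s, t = frontier.pop()
            for a in sigma_A | sigma_B:
                if a in gamma:                                  # synchronous move
                    if (s, a) not in delta_A or (t, a) not in delta_B:
                        continue
                    nxt = (delta_A[(s, a)], delta_B[(t, a)])
                elif a in sigma_A and (s, a) in delta_A:        # A moves alone
                    nxt = (delta_A[(s, a)], t)
                elif a in sigma_B and (t, a) in delta_B:        # B moves alone
                    nxt = (s, delta_B[(t, a)])
                else:
                    continue
                delta[((s, t), a)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return delta, (init_A, init_B)

On the partial encodings of A and B sketched earlier (with \(\varGamma = \{b\}\)), this reconstructs the transitions of \(A \parallel B\) used in Example 3, e.g. \(\langle q_0,r_0\rangle \xrightarrow {b} \langle q_1,r_1\rangle \).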

Example 3

Consider again the LTSs A and B from Fig. 3. Their synchronous product \(A \parallel B\) (with \(\varGamma = \{b\}\)) is depicted in Fig. 4. We use bold edges to denote synchronous transitions. Intuitively, \(A \parallel B\) does not satisfy P(n), for any \(n > 0\). In fact \(\langle {q_0},{r_0}\rangle \xrightarrow {b} \langle {q_1},{r_1}\rangle \xrightarrow {d} \langle {q_0},{r_1}\rangle \) but \(bd \not \in \mathcal {L}(P(n))\). \(\square \)

Fig. 4 Synchronous product \(A \parallel B\), where bold transitions denote synchronous moves

Next, we generalize the notion of natural projection on languages. Intuitively, natural projection can be seen as the inverse operation with respect to the synchronous product of two LTSs. Indeed, through natural projection one recovers the LTS of one of the components of the parallel composition.

Given a computation of \(A \parallel B\), natural projection extracts the relevant trace of one of the two LTSs, including the synchronized transitions (see the second case below). Note that, unlike other definitions, e.g., in [40], our traces are sequences of transitions including both states and actions. We also define the inverse projection in the expected way.

Definition 3.3

Given LTSs A and B with \(\varGamma = \varSigma _A \cap \varSigma _B\), the natural projection on A of a trace \(\sigma \) of \(A \parallel B\), in symbols \(P_{A}({\sigma })\), is defined as follows:

Natural projection on the second component B is analogously defined. We extend the natural projection to sets of traces in the usual way: \(P_{A}({\mathcal {T}}) = \{P_{A}({\sigma }) \mid \sigma \in \mathcal {T}\}\).

The inverse projection of a trace \(\sigma \) over an LTS \(A \parallel B\), in symbols \(P^{-1}_{A}({\sigma })\), is defined as \(P^{-1}_{A}({\sigma }) = \{ \sigma ' \mid P_{A}({\sigma '}) = \sigma \}\). Its extension to sets is \(P^{-1}_{A}({\mathcal {T}}) = \bigcup \nolimits _{\sigma \in \mathcal {T}} P^{-1}_{A}({\sigma })\). \(\square \)
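A hedged sketch of the intuition described above (the trace encoding is ours; the formal cases are those of Definition 3.3): transitions whose action belongs to the chosen component's alphabet are kept, with the product states projected onto that component, and all other transitions are dropped.

    def project(trace, sigma, component):
        """Natural projection of a trace of A || B onto one component (0 for A, 1 for B).
        A trace alternates product states and actions, as in Definition 3.1."""
        result = [trace[0][component]]
        for i in range(1, len(trace), 2):
            action, dest = trace[i], trace[i + 1]
            if action in sigma:                   # kept, including synchronizations
                result.extend([action, dest[component]])
        return result

    # Example 4: both traces project onto q0 b q1 d q0 for the consumer A.
    sigma_1 = [("q0", "r0"), "b", ("q1", "r1"), "d", ("q0", "r1"), "e", ("q0", "r0")]
    sigma_2 = [("q0", "r0"), "b", ("q1", "r1"), "e", ("q1", "r0"), "d", ("q0", "r0")]
    assert project(sigma_1, {"b", "d"}, 0) == project(sigma_2, {"b", "d"}, 0) \
           == ["q0", "b", "q1", "d", "q0"]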

Example 4

Consider the following two traces \(\sigma _1 = \langle {q_0},{r_0}\rangle \varvec{b} \langle {q_1},{r_1}\rangle \varvec{d} \langle {q_0},{r_1}\rangle \varvec{e} \langle {q_0},{r_0}\rangle \) and \(\sigma _2 = \langle {q_0},{r_0}\rangle \varvec{b} \langle {q_1},{r_1}\rangle \varvec{e} \langle {q_1},{r_0}\rangle \varvec{d} \langle {q_0},{r_0}\rangle \). We have that the projections \(P_{A}({\sigma _1}) = P_{A}({\sigma _2}) = q_0 \varvec{b} q_1 \varvec{d} q_0 \in \llbracket {A}\rrbracket _{}^{}\) and \(\sigma _1, \sigma _2 \in P^{-1}_{B}({q_0 \varvec{b} q_1 \varvec{d} q_0})\). \(\square \)

Two classical properties [40] concerning the interplay between the synchronous product and the natural projection hold. Their proofs are trivial.

Fact 3.1

\(P_{A}({\llbracket {A \parallel B}\rrbracket _{}^{}}) \subseteq \llbracket {A}\rrbracket _{}^{} \quad \text {and} \quad \llbracket {A \parallel B}\rrbracket _{}^{} = P^{-1}_{B}({\llbracket {A}\rrbracket _{}^{}}) \cap P^{-1}_{A}({\llbracket {B}\rrbracket _{}^{}}). \)

3.3 Equational \(\mu \)-Calculus and Partial Model Checking

Below, we recall the variant of the \(\mu \)-calculus commonly used in partial model checking, based on modal equations [1]. A specification is given as a sequence of modal equations, and one is typically interested in the value of the top variable, i.e., the simultaneous solution of all the equations. Equations have variables on the left-hand side and assertions on the right-hand side. Assertions are built from the boolean constants \( ff \) and \( tt \), variables x, boolean operators \(\wedge \) and \(\vee \), and modalities for necessity \([{\cdot }]\) and possibility \(\langle { \cdot }\rangle \). Equations also carry fixed-point operators (minimum \(\mu \) and maximum \(\nu \)) over variables x, and can be organized in equation systems.

Definition 3.4

(Syntax of the \(\mu \)-calculus) Given a set of variables \(x \in X\) and an alphabet of actions \(a \in \varSigma \), assertions \(\phi , \phi ' \in \mathcal {A}\) are given by the syntax:

$$\begin{aligned} \phi \,\text {::=}\, ff \,|\, tt \,|\,x \,|\,\phi \wedge \phi ' \,|\,\phi \vee \phi ' \,|\,[{a}]{\phi } \,|\,\langle {a}\rangle {\phi }. \end{aligned}$$

An equation is of the form \(x =_{\pi } \phi \), where \(\pi \in \{\mu , \nu \}\), \(\mu \) denotes a minimum fixed point equation, and \(\nu \) a maximum one. An equation system \(\varPhi \) is a possibly empty sequence (\(\epsilon \)) of equations, where each variable x occurs in the left-hand side of at most a single equation. Thus \(\varPhi \) is given by

$$\begin{aligned} \varPhi \,\text {::=}\,x =_{\pi } \phi ; \varPhi \,\,|\,\, \epsilon . \end{aligned}$$

A top assertion \(\varPhi \downarrow x\) amounts to the simultaneous solution of an equation system \(\varPhi \) onto the top variable x. \(\square \)

We define the semantics of modal equations in terms of the traces of an LTS by extending the usual state semantics of [1] as follows. First, given an assertion \(\phi \), its state semantics \(\Vert {\phi }\Vert _{\rho }^{}\) is given by the set of states of an LTS that satisfy \(\phi \) in the context \(\rho \), where the function \(\rho \) assigns meaning to variables. The boolean connectives are interpreted as intersection and union. The possibility modality \(\Vert {\langle {a}\rangle {\phi }}\Vert _{\rho }^{}\) (respectively, the necessity modality \(\Vert {[{a}]{\phi }}\Vert _{\rho }^{}\)) denotes the states for which some (respectively, all) of their outgoing transitions labeled by a lead to states that satisfy \(\phi \). For more details on \(\mu \)-calculus see [10, 27].

Definition 3.5

(Semantics of the \(\mu \)-calculus [1]) Let A be an LTS, and \(\rho : X \rightarrow 2^{S_A}\) be an environment that maps variables to sets of A’s states. Given an assertion \(\phi \), the state semantics of \(\phi \) is the mapping \(\Vert {\cdot }\Vert _{}^{} : {\mathcal {A}} \rightarrow (X \rightarrow 2^{S_A}) \rightarrow 2^{S_A}\) inductively defined as follows.

$$\begin{aligned} \Vert { ff }\Vert _{\rho }^{} = \emptyset \qquad \Vert { tt }\Vert _{\rho }^{} = S_A \qquad \Vert {x}\Vert _{\rho }^{} = \rho (x) \qquad \Vert {\phi \wedge \phi '}\Vert _{\rho }^{} = \Vert {\phi }\Vert _{\rho }^{} \cap \Vert {\phi '}\Vert _{\rho }^{} \qquad \Vert {\phi \vee \phi '}\Vert _{\rho }^{} = \Vert {\phi }\Vert _{\rho }^{} \cup \Vert {\phi '}\Vert _{\rho }^{} \end{aligned}$$

$$\begin{aligned} \Vert {[{a}]{\phi }}\Vert _{\rho }^{} = \{ s \in S_A \mid \forall s' .\, s \xrightarrow {a} s' \text { implies } s' \in \Vert {\phi }\Vert _{\rho }^{} \} \qquad \Vert {\langle {a}\rangle {\phi }}\Vert _{\rho }^{} = \{ s \in S_A \mid \exists s' .\, s \xrightarrow {a} s' \text { and } s' \in \Vert {\phi }\Vert _{\rho }^{} \} \end{aligned}$$

We extend the state semantics from assertions to equation systems. First we introduce some auxiliary notation. The empty mapping is represented by \([\,]\), \([x \mapsto U]\) is the environment where U is assigned to x, and \(\rho \circ \rho '\) is the mapping obtained by composing \(\rho \) and \(\rho '\). Given a function f(U) on the powerset of \(S_A\), let \(\pi U. f(U)\) be its fixed point. We now define the semantics of equation systems by:

$$\begin{aligned} \Vert {\epsilon }\Vert _{\rho }^{} = [\,] \qquad \Vert {x =_{\pi } \phi ; \varPhi }\Vert _{\rho }^{} = [x \mapsto U^*] \circ \Vert {\varPhi }\Vert _{\rho \circ [x \mapsto U^*]}^{}, \quad \text {where } U^* = \pi U.\, \Vert {\phi }\Vert _{\rho \circ [x \mapsto U] \circ \Vert {\varPhi }\Vert _{\rho \circ [x \mapsto U]}^{}}^{} \end{aligned}$$

Finally, for top assertions, let \(\Vert {\varPhi \downarrow x}\Vert _{}^{}\) be a shorthand for \(\Vert {\varPhi }\Vert _{[\,]}^{}(x)\). \(\square \)

Note that whenever we apply function composition \(\circ \), its arguments have disjoint domains. Next, we present the trace semantics: a trace starting from a state s satisfies \(\phi \) if s does.

Definition 3.6

Given an LTS A, an environment \(\rho \), and a state \(s \in S_A\), the trace semantics of an assertion \(\phi \) is a function \(\langle \!\langle {\cdot }\rangle \!\rangle _{}^{} : {\mathcal {A}} \rightarrow S_A \rightarrow ( X \rightarrow 2^{S_A}) \rightarrow 2^{{\mathcal {T}}}\), which we also extend to equation systems, defined as follows.

$$\begin{aligned} \langle \!\langle {\phi }\rangle \!\rangle _{\rho }^{s} = {\left\{ \begin{array}{ll} \llbracket {A,s}\rrbracket _{}^{} \text { if } s \in \Vert {\phi }\Vert _{\rho }^{} \\ \emptyset \text { otherwise} \end{array}\right. } \quad \langle \!\langle {\varPhi }\rangle \!\rangle _{\rho }^{} = \lambda x. \bigcup \nolimits _{s \in \Vert {\varPhi }\Vert _{\rho }^{}(x)} \llbracket {A,s}\rrbracket _{}^{}. \end{aligned}$$

We write \(\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}\) in place of \(\langle \!\langle {\varPhi }\rangle \!\rangle _{[\,]}^{}(x)\). \(\square \)

Example 5

Consider \(\varPhi \downarrow x\) where \( \varPhi = \left\{ x =_{\mu } [{e}]{y} \wedge \langle {d}\rangle { tt }; y =_{\nu } \langle {e}\rangle {x} \vee \langle {b}\rangle {x} \right\} . \)

This system consists of two equations. Intuitively, the first equation says that after every e transition a state satisfying the second equation for y is reached (\([{e}]{y}\)) and that, from the current state, there must exist at least one d transition (\(\langle {d}\rangle { tt }\)). The second equation states that there must exist either an e transition or a b transition. In both cases, the reached state must satisfy the x equation.

We compute \(\Vert {\varPhi \downarrow x}\Vert _{}^{}\) with respect to \(A \parallel B\). \(\Vert {\varPhi \downarrow x}\Vert _{}^{} = U^* = \mu U . F(U)\), where \(F(U) = \Vert {[{e}]{y} \wedge \langle {d}\rangle { tt }}\Vert _{[x \mapsto U, y \mapsto G(U)]}^{}\) and \(G(U) = \nu U'.\Vert {\langle {e}\rangle {x} \vee \langle {b}\rangle {x}}\Vert _{[x \mapsto U, y \mapsto U']}^{} = \Vert {\langle {e}\rangle {x} \vee \langle {b}\rangle {x}}\Vert _{[x \mapsto U]}^{}\) (since y does not occur in the assertion). Following the Knaster-Tarski theorem, we compute \(U^* = \bigcup ^n F^n(\emptyset )\):

1.

    \(G(\emptyset ) = \Vert {\langle {e}\rangle {x} \vee \langle {b}\rangle {x}}\Vert _{[x \mapsto \emptyset ]}^{} = \emptyset \) and \(U^1 = F(\emptyset ) = \Vert {[{e}]{y} \wedge \langle {d}\rangle { tt }}\Vert _{[x \mapsto \emptyset , y \mapsto \emptyset ]}^{} = \{\langle {q_1},{r_0}\rangle \}\) (i.e., the only state that admits d but not e).

2.

    \(G(\{\langle {q_1},{r_0}\rangle \}) = \Vert {\langle {e}\rangle {x} \vee \langle {b}\rangle {x}}\Vert _{[x \mapsto \{\langle {q_1},{r_0}\rangle \}]}^{} = \{\langle {q_1},{r_1}\rangle \}\) (since \(\langle {q_1},{r_1}\rangle \xrightarrow {e} \langle {q_1},{r_0}\rangle \)) and \(U^2 = F(\{\langle {q_1},{r_0}\rangle \}) = \Vert {[{e}]{y} \wedge \langle {d}\rangle { tt }}\Vert _{[x \mapsto \{\langle {q_1},{r_0}\rangle \}, y \mapsto \{\langle {q_1},{r_1}\rangle \}]}^{} = \{\langle {q_1},{r_0}\rangle \}\).

Since \(U^2 = U^1\), we have obtained the fixed point \(U^*\). Finally, we can compute \(\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}\), which amounts to \(\llbracket {A \parallel B,\langle {q_1},{r_0}\rangle }\rrbracket _{}^{}\). \(\square \)
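The fixed-point computation of Example 5 can be mechanized with the usual Kleene iteration. Below is a small generic sketch with helper names of our own, assuming the product's transition function is available as a dictionary step mapping (state, action) to the successor state.

    def lfp(F, bottom=frozenset()):
        """Least fixed point of a monotone F over finite sets: union of F^n(empty)."""
        U = frozenset(bottom)
        while True:
            V = frozenset(F(U))
            if V == U:
                return U
            U = V

    def diamond(step, states, a, target):
        """<a>: states with an a-successor inside `target`."""
        return {s for s in states if (s, a) in step and step[(s, a)] in target}

    def box(step, states, a, target):
        """[a]: states whose a-successor (if any) lies inside `target`."""
        return {s for s in states if (s, a) not in step or step[(s, a)] in target}

With states and step taken from \(A \parallel B\), the G(U) of Example 5 becomes diamond(step, states, "e", U) | diamond(step, states, "b", U), the function F(U) becomes box(step, states, "e", G(U)) & diamond(step, states, "d", states), and \(U^*\) is lfp(F).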

We now define when an LTS satisfies an equation system. Recall that \(\llbracket {A}\rrbracket _{}^{}\) stands for \(\llbracket {A,\imath _A}\rrbracket _{}^{}\).

Definition 3.7

An LTS A satisfies a top assertion \(\varPhi \downarrow x\), in symbols \(A \models _s \varPhi \downarrow x\), if and only if \(\imath _A \in \Vert {\varPhi \downarrow x}\Vert _{}^{}\). Moreover, let \(A \models _\sigma \varPhi \downarrow x\) if and only if \(\llbracket {A}\rrbracket _{}^{} \subseteq \langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}\). \(\square \)

The following fact relates the notion of satisfiability defined in terms of the state semantics (\(\models _s\)) with the one based on the trace semantics (\(\models _\sigma \)); its proof is immediate by Definition 3.6.

Fact 3.2

\(A \models _s \varPhi \downarrow x\) if and only if \(A \models _\sigma \varPhi \downarrow x\).

As previously mentioned, partial model checking is based on the quotienting operation \({}//_{\!}{}\). Roughly, the idea is to specialize the specification of a composed system on a particular component. Below, we define the quotienting operation [1] on the LTS \(A \parallel B\). Quotienting reduces \(A \parallel B \models _s \varPhi \downarrow x\) to \(B \models _s {\varPhi \downarrow x}//_{\!\varSigma _B}{A}\). Note that each equation of the system \(\varPhi \) gives rise to a system of equations, one for each state \(s_i\) of A, all of the same kind, minimum or maximum (thus forming a \(\pi \)-block [3]). This is done by introducing a fresh variable \(x_{s_i}\) for each state \(s_i\). Intuitively, the equation \(x_{s_i} =_{\pi } {\phi }//_{\!\varSigma _B}{s_i}\) represents the requirements on B when A is in state \(s_i\). Since the occurrence of the variables on the right-hand side depends on A’s transitions, \({\varPhi \downarrow x}//_{\!\varSigma _B}{A}\) embeds the behavior of A.

Definition 3.8

Given a top assertion \(\varPhi \downarrow x\), we define the quotienting of the assertion on an LTS A with respect to an alphabet \(\varSigma _B\) as follows.

\(\square \)

Example 6

Consider the top assertion \(\varPhi \downarrow x\) of Example 5 and the LTSs A and B of Example 2. Quotienting \(\varPhi \downarrow x\) against A, we obtain \({\varPhi }//_{\!\varSigma _B}{A} \downarrow x_{q_0}\), where

$$\begin{aligned} {\varPhi }//_{\!\varSigma _B}{A} =\left\{ \begin{array}{l} x_{q_0} =_{\mu } [{e}]{y_{q_0}} \wedge ff \\ x_{q_1} =_{\mu } [{e}]{y_{q_1}} \wedge tt \\ y_{q_0} =_{\nu } \langle {e}\rangle {x_{q_0}} \vee ff \\ y_{q_1} =_{\nu } \langle {e}\rangle {x_{q_1}} \vee \langle {b}\rangle {x_{q_0}} \\ \end{array}\right. =\left\{ \begin{array}{l} x_{q_0} =_{\mu } ff \\ x_{q_1} =_{\mu } [{e}]{y_{q_1}} \\ y_{q_0} =_{\nu } \langle {e}\rangle {x_{q_0}} \\ y_{q_1} =_{\nu } \langle {e}\rangle {x_{q_1}} \vee \langle {b}\rangle {x_{q_0}} \\ \end{array}\right. =\left\{ x_{q_0} =_{\mu } ff \right\} . \end{aligned}$$

The leftmost equations are obtained by applying the rules of Definition 3.8. Then we simplify the right-hand sides of the first three equations, i.e., those of \(x_{q_0}\), \(x_{q_1}\) and \(y_{q_0}\). In particular, we apply the standard boolean transformations \(\psi \wedge ff \equiv ff \), \(\psi \wedge tt \equiv \psi \), and \(\psi \vee ff \equiv \psi \). Finally, we reduce the number of equations by removing those unreachable from the top variable \(x_{q_0}\). For a detailed description of our simplification strategies, see [3]. Therefore \(\langle \!\langle {{\varPhi \downarrow x}//_{\!\varSigma _B}{A}}\rangle \!\rangle _{}^{} = \emptyset \). This was expected since, as shown in Example 5, \(\langle q_0, r_0 \rangle \not \in \Vert {\varPhi \downarrow x}\Vert _{}^{}\). \(\square \)

3.4 Unifying the Logical and the Operational Approaches

Here we prove the equivalence between natural projection and partial model checking (Theorem 3.2), establishing the correspondence between quotienting and natural projection.

Theorem 3.1

For all A, B, x, and \(\varPhi \) on \(A \parallel B\), \(\langle \!\langle {{\varPhi \downarrow x}//_{\!\varSigma _B}{A} }\rangle \!\rangle _{}^{} = P_{B}({\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}})\).

The following theorem states that the synchronous product of two LTSs satisfies a global equation system if and only if its components satisfy their quotients, i.e., their local assertions.

Theorem 3.2

For all A, B, x and \(\varPhi \) on \(A \parallel B\),

$$\begin{aligned} A \parallel B \models _\varsigma \varPhi \downarrow x \quad (\varsigma \in \{ s, \sigma \}) \end{aligned}$$

if and only if any of the following equivalent statements holds:

1. \(A \models _\varsigma {\varPhi \downarrow x}//_{\!\varSigma _A}{B}\)

2. \(B \models _\varsigma {\varPhi \downarrow x}//_{\!\varSigma _B}{A}\)

3. \(A \models _\sigma P_{A}({\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}})\)

4. \(B \models _\sigma P_{B}({\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}})\).

4 Quotienting Finite-State Systems

In this section we present an algorithm for quotienting a finite-state system defined as an LTS. Afterwards, we prove its correctness with respect to the standard quotienting operator and we study its complexity. Finally, we apply it to our running example to address three problems: verification, submodule construction, and controller synthesis.

4.1 Quotienting Algorithm

Our algorithm consists of two procedures that are applied sequentially. The first, called quotient (Table 2), builds a non-deterministic transition system starting from two LTSs, i.e., a specification P and an agent A. Moreover, it takes as an argument the alphabet of actions \(\varSigma _B\) of the new transition system B. Non-deterministic transition systems have a distinguished label \(\lambda \), and serve as an intermediate representation. The states of the resulting transition system include all the pairs of states of P and A, except for those that denote a violation of P (line 1). The transition relation (line 3) is defined using the quotienting rules from Sect. 3. Also, note that the relation \(\rightarrow \) is restricted to the states of S (denoted \(\rightarrow _S\)).

The second procedure, called unify (in Table 3) translates a non-deterministic transition system back to an LTS. By using closures over \(\lambda \), unify groups transition system states. This process is similar to the standard subset construction [24], except that we put an \(a \in \varSigma _B {\setminus } \varGamma \) transition between two groups Q and M only if (i) M is the intersection of the \(\lambda \)-closures of the states reachable from Q with an a transition and (ii) all the states of Q admit at least an a transition leading to a state of M (\(\wedge \)-move). The procedure unify works as follows. Starting from the \(\lambda \)-closure of B’s initial state (line 1), it repeats a partition generation cycle (lines 4–13). Each cycle removes an element Q from the set S of the partitions to be processed. Then, for all the actions in \(\varSigma _B {\setminus } \{ \lambda \}\), a partition M is computed by \(\wedge \)-move (line 7). If the partition is nonempty, a new transition is added from Q to M (line 9). Also, if M is a freshly generated partition, i.e., \(M \not \in R\), it is added to both S and R (line 10). The procedure terminates when no new partitions are generated.

Table 2 The quotienting algorithm
Table 3 The unification algorithm
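To fix intuitions, here is a rough Python sketch of the unify step just described. The helper names lambda_closure and wedge_move are ours, the label "lambda" stands for the distinguished \(\lambda \), and condition (ii) of the \(\wedge \)-move is only approximated; Table 3 remains the reference.

    def lambda_closure(states, delta):
        """States reachable from `states` via lambda transitions.
        delta maps (state, label) to a set of successors (non-deterministic)."""
        closure, todo = set(states), list(states)
        while todo:
            s = todo.pop()
            for s2 in delta.get((s, "lambda"), ()):
                if s2 not in closure:
                    closure.add(s2)
                    todo.append(s2)
        return frozenset(closure)

    def wedge_move(Q, a, delta):
        """AND-move: intersect the lambda-closures of the a-successors of every
        state of Q; empty when some state of Q has no a-successor at all."""
        closures = []
        for q in Q:
            succ = delta.get((q, a), set())
            if not succ:
                return frozenset()
            closures.append(lambda_closure(succ, delta))
        M = closures[0]
        for c in closures[1:]:
            M &= c
        return M

    def unify(init, actions, delta):
        """Group states via lambda-closures and rebuild a deterministic LTS."""
        start = lambda_closure({init}, delta)
        R, S, trans = {start}, [start], {}
        while S:                                # partition generation cycle
            Q = S.pop()
            for a in actions:                   # actions of Sigma_B, lambda excluded
                M = wedge_move(Q, a, delta)
                if M:
                    trans[(Q, a)] = M
                    if M not in R:
                        R.add(M)
                        S.append(M)
        return trans, start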

Our quotienting algorithm is correct with respect to the quotienting operator and runs in PTIME. More precisely, assuming that \(\varGamma , \varSigma _A {\setminus } \varGamma \), and \(\varSigma _B {\setminus } \varGamma \) have m elements, and that P and A have n states, the complexity is \(O(n^6m^2)\) (see Appendix A.4 for more details). We avoid an exponential blow-up in our algorithm (in contrast to Table 1) since we only consider deterministic transition systems. Note that a determinization step for non-deterministic transition systems is exponential in the worst case.

4.2 Application to Our Running Example

Recall from Example 3 that \(A \parallel B\) does not satisfy the buffer consistency property P. Informally the reason is that the barrier does not prevent the consumer A from accessing the buffer before the producer B. However, the barrier does ensure that iterations of the producer and the consumer are always paired. This implies that only the first position of the buffer is actually used.

Fig. 5 Graphical representation of the consumer \(A'\)

We apply our quotienting algorithm to find an \(A'\) such that \(A' \parallel B \models P\). That is, we solve an instance of the submodule construction problem for B and P. The resulting LTS is given in Fig. 5. Intuitively, \(A'\) behaves as follows. Initially, it synchronizes (action b) twice to ensure that B enqueues at least one item. Then, it either (i) synchronizes again and moves to the next state or (ii) dequeues an item (action d) and goes back one state. The reason is that each state \(w_i\) denotes a configuration under which the buffer contains i or \(i-1\) items. As a result, there cannot be a state \(w_9\) and also the state \(w_0\) can be reached only once at the start. Finally, note that a similar construction also applies to the controller synthesis and verification problems. For the former it suffices to constrain the alphabet of \(A'\) to only contain synchronization actions, while for the latter we check that the submodule \(A'\) accepts the empty string.

5 Quotienting Symbolic Finite-State Systems

In this section, we extend our results to symbolic Labeled Transition Systems (s-LTSs). This rather expressive formalism is a variant of symbolic Finite State Automata [16] where all states are final. The novelty with respect to a standard LTS (or an FSA) is that the alphabet is the carrier of an effective boolean algebra and that transitions are enabled by predicates over the possibly infinitely many elements of the algebra. This model allows a convenient representation of large systems whose behavior also depends on the data handled, and not only on the control flow, as is the case with a standard LTS.

For example, consider again the OpenCL kernel of Fig. 1 and the kinds of flaws mentioned in Sect. 2. Buffer consistency has been addressed using standard LTSs, because consistency only depends on the actions (enqueue and dequeue) performed. However, when representing data races we cannot abstract away from the affected memory location, the action performed (read/write), and the data involved. We model them using s-LTSs. In Fig. 6, we show on the left the control-flow graph of our consumer. Since we are interested in the actions on L, we highlight them. In the upper right part there is the s-LTS for the consumer. Accordingly, we show only the portion with read/write actions, which are parametric with respect to the memory address L and its offset. In the bottom part, we display the s-LTS of the producer.

In this example, one could encode our producer/consumer as a standard LTS, because the operations and data are finite. The price to pay is an exponential growth in the number of resulting labels and, consequently, of the transitions. Clearly, such encodings cannot be done when data are taken from an infinite domain, like the natural numbers or strings over a given alphabet. In these cases, there still exists a standard LTS that accepts a language that is, however, only isomorphic to that of the given s-LTS (see [16]).

We start by recalling some known notions about s-LTSs, adapting them to our case as needed and illustrating them on our running example. Then, we present our contributions: a symbolic version of (i) the synchronous product operator; (ii) partial model checking and natural projection; and (iii) a quotienting algorithm.

5.1 Symbolic Labeled Transition Systems

We start by recalling the definition of an effective boolean algebra and algebraic operators over them that are the building blocks for symbolic LTSs.

Definition 5.1

[15] An effective boolean algebra (EBA) is a tuple \(\mathcal {A} = \langle \mathfrak {D}, \varPsi , \{\!|{\cdot }|\!\}_{}^{} \rangle \) where:

  • \(\mathfrak {D}\) is a non-empty, recursively enumerable set (called the alphabet or universe of \(\mathcal {A}\));

  • \(\varPsi \) is a recursively enumerable set of predicates closed under the connectives \(\wedge \), \(\vee \), and \(\lnot \) such that \(\bot , \top \in \varPsi \); and

  • \( \{\!|{\cdot }|\!\}_{}^{} : \varPsi \rightarrow 2^{\mathfrak {D}}\) is the denotation function such that \(\{\!|{\bot }|\!\}_{}^{} = \emptyset \), \(\{\!|{\top }|\!\}_{}^{} = \mathfrak {D}\), \(\{\!|{\varphi \wedge \psi }|\!\}_{}^{} = \{\!|{\varphi }|\!\}_{}^{} \cap \{\!|{\psi }|\!\}_{}^{}\), \(\{\!|{\varphi \vee \psi }|\!\}_{}^{} = \{\!|{\varphi }|\!\}_{}^{} \cup \{\!|{\psi }|\!\}_{}^{}\), and \(\{\!|{\lnot \varphi }|\!\}_{}^{} = \mathfrak {D} {\setminus } \{\!|{\varphi }|\!\}_{}^{}\) (for any \(\varphi , \psi \in \varPsi \)). \(\square \)

Given a predicate \(\varphi \) of an EBA \(\mathcal {A},\) we say that \(\varphi \) is satisfiable, in symbols \({\mathbf {sat}}_{\mathcal {A}}({\varphi })\), when \(\{\!|{\varphi }|\!\}_{}^{} \ne \emptyset \).

EBAs can be composed using several operators (see [15, 38] for details). We recall those that are relevant for the definitions given below. Let \(\mathcal {A}_1 = \langle \mathfrak {D}_1, \varPsi _1, \{\!|{\cdot }|\!\}_{1}^{} \rangle \) and \(\mathcal {A}_2 = \langle \mathfrak {D}_2, \varPsi _2, \{\!|{\cdot }|\!\}_{2}^{} \rangle \) be EBAs.

(union):

\(\mathcal {A}_1 \oplus \mathcal {A}_2\) is the EBA \(\langle \mathfrak {D}_\oplus , \varPsi _\oplus , \{\!|{\cdot }|\!\}_{\oplus }^{} \rangle \) such that

  • \(\mathfrak {D}_\oplus = (\mathfrak {D}_1 \times \{1\}) \cup (\mathfrak {D}_2 \times \{2\})\);

  • \(\varPsi _\oplus = \varPsi _1 \times \varPsi _2\); and

  • \(\{\!|{\langle \varphi _1, \varphi _2 \rangle }|\!\}_{\oplus }^{} = (\{\!|{\varphi _1}|\!\}_{1}^{} \times \{1\}) \cup (\{\!|{\varphi _2}|\!\}_{2}^{} \times \{2\})\).

(product):

\(\mathcal {A}_1 \otimes \mathcal {A}_2\) is the EBA \(\langle \mathfrak {D}_\otimes , \varPsi _\otimes , \{\!|{\cdot }|\!\}_{\otimes }^{} \rangle \) such that

  • \(\mathfrak {D}_\otimes = \mathfrak {D}_1 \times \mathfrak {D}_2\);

  • \(\varPsi _\otimes = \varPsi _1 \times \varPsi _2\); and

  • \(\{\!|{\langle \varphi _1, \varphi _2 \rangle }|\!\}_{\otimes }^{} = \{\!|{\varphi _1}|\!\}_{1}^{} \times \{\!|{\varphi _2}|\!\}_{2}^{}\).

(restriction):

\(\mathcal {A}_1 \upharpoonright V\) (with \(V \in 2^{\mathfrak {D}_1}\)) is the EBA \(\langle \mathfrak {D}, \varPsi , \{\!|{\cdot }|\!\}_{}^{} \rangle \) such that

  • \(\mathfrak {D} = \mathfrak {D}_1 \cap V\);

  • \(\varPsi = \varPsi _1\); and

  • \(\{\!|{\varphi }|\!\}_{}^{} = \{\!|{\varphi }|\!\}_{1}^{} \cap V\).

For brevity, we may write \(\mathcal {A} \upharpoonright \varphi \) for \(\mathcal {A} \upharpoonright \{\!|{\varphi }|\!\}_{}^{}\).
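As a point of reference, the following is a deliberately simplified sketch of an EBA and of the three operators above, in which predicates are represented extensionally by their denotations over a finite universe; an actual implementation for s-LTSs would keep predicates symbolic and discharge satisfiability checks to a solver.

    class FiniteEBA:
        """A finite-universe effective boolean algebra (Definition 5.1, simplified):
        a predicate is just the set of letters it denotes."""
        def __init__(self, universe):
            self.universe = frozenset(universe)
        def bot(self): return frozenset()
        def top(self): return self.universe
        def conj(self, p, q): return p & q
        def disj(self, p, q): return p | q
        def neg(self, p): return self.universe - p
        def sat(self, p): return bool(p)        # sat(phi) iff {|phi|} is non-empty

    def eba_union(a1, a2):
        """A1 (+) A2: tag each letter with the algebra it comes from."""
        return FiniteEBA({(d, 1) for d in a1.universe} | {(d, 2) for d in a2.universe})

    def eba_product(a1, a2):
        """A1 (x) A2: pairs of letters, one per algebra."""
        return FiniteEBA({(d1, d2) for d1 in a1.universe for d2 in a2.universe})

    def eba_restriction(a1, V):
        """A1 restricted to V: keep only the letters in V."""
        return FiniteEBA(a1.universe & frozenset(V))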

Example 7

The EBA \(\mathcal {B}\) encoding the write/read actions of our running example is defined as follows.

  • \(\mathfrak {D} = \{r,w\} \times Id \times \mathbb {N}\), where \( Id \) stands for the set of variable identifiers of a program.

  • \(\varPsi \) includes equality and inequality (on both \(\{r,w\}\) and \( Id \)) and ordering relationships between natural numbers.

We use the variables \(\alpha \) and \(\beta \) to range over \(\{r,w\}\) and X and Y for generic elements of \( Id \), the bytes of which are identified by their position (variable n). Also, we write \(\alpha (X,n) : \varphi \) to denote the predicates of \(\varPsi \) and we use straightforward abbreviations such as w(L, 0) for \(\alpha (X,n) : \alpha = w \wedge X = L \wedge n = 0\). \(\square \)

We now give the definition of an s-LTS, introduce its symbolic traces, and show the mapping from symbolic to concrete traces. The definition of s-LTS is based on that of s-FA [16].

Definition 5.2

(s-LTS) A symbolic LTS (s-LTS) is a tuple \(M = (Q, \mathcal {A}, \varDelta , \imath )\), where Q is a finite set of states (with \(\imath \) the initial state), \(\mathcal {A} = \langle \mathfrak {D}, \varPsi , \{\!|{\cdot }|\!\}_{}^{} \rangle \) is an EBA, and \(\varDelta \subseteq Q \times \varPsi \times Q\) is the transition relation such that \((s,\varphi ,s') \in \varDelta \) only if \({\mathbf {sat}}_{\mathcal {A}}({\varphi })\).

An s-LTS is deterministic when, for any two distinct transitions \((q, \varphi , q')\) and \((q, \varphi ', q'')\) with the same source state, \(\varphi \wedge \varphi '\) is unsatisfiable. Given an s-LTS, there always exists an equivalent, deterministic one. Thus, in the following we only consider deterministic s-LTSs.

Analogously to Definition 3.1, the traces of an s-LTS belong to \(\mathcal {T}\) and have the form \(\sigma = s_0 d_1 s_1 d_2 \ldots d_n s_n\), where for each \(i \in [1, n]\) there exists \((s_{i-1},\varphi ,s_i) \in \varDelta \) such that \(d_i \in \{\!|{\varphi }|\!\}_{}^{}\). In contrast, a symbolic trace of the s-LTS M is a sequence \(\eta = s_0 \varphi _1 s_1 \varphi _2 \ldots \varphi _n s_n\), where for each \(i \in [1, n]\) there exists \((s_{i-1},\varphi _i,s_i) \in \varDelta \). We use \(\llbracket {M, s}\rrbracket _{}^{}\) to denote the set of traces of M such that \(s_0 = s\) and tr(Ms) to denote the set of symbolic traces such that \(s_0 = s\) (also we omit s when \(s = \imath \)).

Finally, a symbolic trace \(\eta = s_0 \varphi _1 s_1 \varphi _2 \ldots \varphi _n s_n\) can be instantiated to the set of concrete traces \(s2c(\eta ) = \{ s_0 d_1 s_1 d_2 \ldots d_n s_n \mid \forall \, i \in [1, n] . \, d_i \in \{\!|{\varphi _i}|\!\}_{}^{} \}\). \(\square \)
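The following sketch illustrates symbolic traces and the instantiation function s2c over a tiny finite universe drawn from Example 7; predicates are rendered as Python callables, and the specific letters and names are illustrative assumptions.

    from itertools import product as cartesian

    ALPHABET = {("r", "L", 0), ("r", "L", 1), ("w", "L", 0), ("w", "L", 1)}

    def denotes(phi):
        """{| phi |}: the letters of the (finite) universe satisfying phi."""
        return {d for d in ALPHABET if phi(d)}

    def sat(phi):
        """sat(phi) holds iff the denotation of phi is non-empty."""
        return bool(denotes(phi))

    # The abbreviation w(L,0) of Example 7, rendered as a predicate on letters.
    w_L0 = lambda d: d[0] == "w" and d[1] == "L" and d[2] == 0

    def s2c(symbolic_trace):
        """Instantiate a symbolic trace s0 phi1 s1 ... phin sn into the set of
        concrete traces obtained by picking one letter per predicate."""
        states, preds = symbolic_trace[0::2], symbolic_trace[1::2]
        concrete = []
        for letters in cartesian(*(denotes(p) for p in preds)):
            trace = [states[0]]
            for letter, state in zip(letters, states[1:]):
                trace.extend([letter, state])
            concrete.append(trace)
        return concrete

    print(s2c(["c0", w_L0, "c1"]))   # [['c0', ('w', 'L', 0), 'c1']]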

We next describe the symbolic model of our running example.

Example 8

Consider again the data race flaw for the OpenCL code discussed in Sect. 2. We use the EBA \(\mathcal {B}\) of Example 7 to model the kernel's accesses to the shared memory. The predicate \(\alpha (X,n) : \varphi \) specifies the kernel's access actions \(\alpha \) (read or write) on the nth byte of variable X. Here \(\varphi \) is a constraint on the values that \(\alpha \), X, and n can assume. Figure 6 shows, on the left, the CFG of the consumer and, in the upper right part, an s-LTS modeling it. Below it, we also show the s-LTS for the producer. Recall that the variable head points to L[0], while tail (see Fig. 1) refers to L[1]. \(\square \)

5.2 Parallel Composition of s-LTSs

Fig. 6 From left to right: CFG of the consumer, and s-LTSs for the consumer (top) and for the producer (bottom)

Before proposing a new notion of parallel composition for s-LTSs, it is convenient to introduce an auxiliary operation on EBAs.

Definition 5.3

Given two EBAs, \(\mathcal {A}_1 = \langle \mathfrak {D}_1, \varPsi _1, \{\!|{\cdot }|\!\}_{1}^{} \rangle \) and \(\mathcal {A}_2 = \langle \mathfrak {D}_2, \varPsi _2, \{\!|{\cdot }|\!\}_{2}^{} \rangle \) and two predicates \(\psi _1 \in \varPsi _1\) and \(\psi _2 \in \varPsi _2\) (called synchronization predicates), we define the parallel product of \(\mathcal {A}_1\) and \(\mathcal {A}_2\) over \(\psi _1\) and \(\psi _2\) (in symbols \(\mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2\)) as

$$\begin{aligned} \mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2 = \mathcal {A}_1 \upharpoonright (\lnot \psi _1) \oplus \mathcal {A}_2 \upharpoonright (\lnot \psi _2) \oplus (\mathcal {A}_1 \upharpoonright \psi _1 \otimes \mathcal {A}_2 \upharpoonright \psi _2). \end{aligned}$$

A predicate of \(\mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2\) has the form \(\psi = ( ( \psi _{\mathcal {A}_1}, \psi _{\mathcal {A}_2} ), ( \psi '_{\mathcal {A}_1}, \psi '_{\mathcal {A}_2} ) )\), for some \(\psi _{\mathcal {A}_1}, \psi '_{\mathcal {A}_1} \in \varPsi _1\) and \(\psi _{\mathcal {A}_2}, \psi '_{\mathcal {A}_2} \in \varPsi _2\). We write \(\psi _{|_1}, \psi _{|_2}, \psi _{|_3}, \psi _{|_4}\) to denote \(\psi _{\mathcal {A}_1}, \psi _{\mathcal {A}_2}, \psi '_{\mathcal {A}_1}, \psi '_{\mathcal {A}_2}\), respectively. Similarly, the elements in the alphabet of \(\mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2\) have the form \(( ( ( ( d_1, 1), ( d_2, 2 ) ) ,1 ), ( ( d'_1,d'_2), 2 ) )\), which we abbreviate to \(( (d_1,1), (d_2,2), (( d'_1, d'_2 ), 3) )\) or even, when clear from the context, to \((d_1,d_2,d'_1,d'_2)\). \(\square \)

The definition of the parallel product of two s-LTSs follows. While this operation on LTSs requires a common sub-alphabet \(\varGamma \), its symbolic counterpart synchronizes two s-LTSs on those actions that satisfy two distinguished synchronization predicates. Intuitively, these predicates define the conditions under which a synchronous transition occurs. Note that we need two predicates because the involved s-LTSs can be defined over two different EBAs.

Definition 5.4

(Parallel composition) Given two s-LTSs \(M_1 = (Q_1, \mathcal {A}_1, \varDelta _1, \imath _1)\) and \(M_2 = (Q_2, \mathcal {A}_2, \varDelta _2, \imath _2)\) and two synchronization predicates \(\psi _1 \in \varPsi _1\) and \(\psi _2 \in \varPsi _2\), the parallel composition of \(M_1\) and \(M_2\) over \(\psi _1\) and \(\psi _2\) (in symbols \(M_1 \parallel _{\psi _1,\psi _2} M_2\)) is

$$\begin{aligned} M_1 \parallel _{\psi _1,\psi _2} M_2 = (Q_1 \times Q_2, \mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2, \varDelta ^*, \langle \imath _1,\imath _2\rangle ), \end{aligned}$$

where

$$\begin{aligned} \varDelta ^*= \bigcup \nolimits _{\begin{array}{c} (p_1, \varphi _1, p_1') \in \varDelta _1 \\ (p_2, \varphi _2, p_2') \in \varDelta _2 \end{array}} \left\{ \begin{array}{l r} \{ (\langle p_1,p_2\rangle , \langle \bot _1, \bot _2, \langle \varphi _1 \wedge \psi _1, \varphi _2 \wedge \psi _2\rangle \rangle , \langle p_1',p_2'\rangle ) \} \\ \{ (\langle p_1,p_2\rangle , \langle \varphi _1 \wedge \lnot \psi _1, \bot _2, \langle \bot _1, \bot _2 \rangle \rangle , \langle p_1',p_2\rangle ) \} \\ \{ (\langle p_1,p_2\rangle , \langle \bot _1, \varphi _2 \wedge \lnot \psi _2,\langle \bot _1, \bot _2 \rangle \rangle , \langle p_1,p_2'\rangle ) \} \end{array}\right. \end{aligned}$$

and \(\bot _1\) (\(\bot _2\)) is the false predicate of \(\mathcal {A}_1\) (\(\mathcal {A}_2\), respectively). \(\square \)
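The construction of \(\varDelta ^*\) can be transcribed almost literally. In the sketch below, predicates are Python callables, conjunction and negation are combinators, and a product label is a triple mirroring the \(\langle \cdot ,\cdot ,\langle \cdot ,\cdot \rangle \rangle \) structure above; this encoding is an assumption of ours.

    BOT = lambda d: False                   # the false predicate of either algebra

    def conj(p, q): return lambda d: p(d) and q(d)
    def neg(p):     return lambda d: not p(d)

    def parallel_delta(delta1, delta2, psi1, psi2):
        """Delta* of Definition 5.4; each delta_i is a set of (src, predicate, dst)."""
        star = []
        for (p1, phi1, q1) in delta1:
            for (p2, phi2, q2) in delta2:
                # synchronous move: both predicates strengthened by the sync predicates
                star.append(((p1, p2), (BOT, BOT, (conj(phi1, psi1), conj(phi2, psi2))), (q1, q2)))
                # M1 moves alone, on letters violating its synchronization predicate
                star.append(((p1, p2), (conj(phi1, neg(psi1)), BOT, (BOT, BOT)), (q1, p2)))
                # M2 moves alone, on letters violating its synchronization predicate
                star.append(((p1, p2), (BOT, conj(phi2, neg(psi2)), (BOT, BOT)), (p1, q2)))
        return star

In practice, one would also prune the entries whose label is unsatisfiable, as required by Definition 5.2; these are exactly the asynchronous entries that vanish in Example 9.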

We now apply this definition to our running example.

Example 9

The parallel composition of the two s-LTS of Fig. 6 over the synchronization predicates \(\psi _1 = \alpha (X,n): \alpha = w \wedge X = L\) and \(\psi _2 = \alpha (X,n): X = L\) is depicted in Fig. 7. For readability, we omit the transition labels and we instead discuss them here. By the definition of product, a transition’s predicate can only belong to three groups: \(\langle \varphi _1 \wedge \lnot \psi _1, \bot , \bot \rangle \), \(\langle \bot , \varphi _2 \wedge \lnot \psi _2, \bot \rangle \), or \(\langle \bot , \bot , \langle \varphi _1 \wedge \psi _1, \varphi _2 \wedge \psi _2 \rangle \rangle \), where \(\varphi _1\) and \(\varphi _2\) are predicates of the consumer and producer, respectively. Note that the predicates of the second type are not satisfiable since \(\lnot \psi _2\) requires that \(X \ne L\) while all the \(\varphi _2\) constrain \(X = L\). Thus, the second group of transitions is empty. A similar observation applies to the predicates of the first group. Indeed, since \(X = L\) the only assignment that satisfies \(\lnot \psi _1\) is for \(\alpha = r\). Therefore, all these transitions are labeled with \(\langle r(L,0), \bot , \bot \rangle \). We use a thin arrow to denote them. As in Example 3 we use bold arrows to denote synchronous transitions. However, here we need to distinguish them according to their predicates. Analogous to the argument for the first group of transitions, here we have that the first component of a synchronization predicate must be w(L, 0). Thus, there are only two types of synchronous transitions depending on the second component of the synchronization predicate (either w(L, 1) or r(L, 1)). We use dashed lines for the transitions labeled with predicate \(\langle \bot , \bot , \langle w(L,0), r(L,1) \rangle \rangle \) and solid lines for \(\langle \bot , \bot , \langle w(L,0), w(L,1) \rangle \rangle \).

Fig. 7 The parallel composition of the producer and the consumer of Fig. 6

The following small technical example illustrates a policy that ensures memory access segmentation and, thus, avoids data races.

Example 10

Consider the s-LTS W depicted in Fig. 8 that represents a policy specification to prevent data races. Briefly, W accepts any asynchronous operation carried out by each thread individually (left loop). Instead, synchronous operations are only permitted in one case (right loop), i.e., when different bytes are accessed by the two threads.

Fig. 8 The s-LTS W specifying the data race policy

As a final remark, note that the product of Example 9 complies with this policy. Intuitively, the reason is that, for each transition predicate of the product, there exists at least one satisfiable predicate among the policy's transitions.

5.3 Symbolic Natural Projection and Symbolic Quotienting

We now extend the results of Sect. 4 to the symbolic case. First we lift the natural projection to the traces of an s-LTS M. Afterwards, we define the quotient of M with respect to a pair of synchronization predicates, and give an algorithm for computing it. Finally, we state the relationships between the symbolic versions of natural projection and quotienting. In the following, we overload some names and symbols.

Definition 5.5

(Natural projection) Given two s-LTSs \(M_1 = (Q_1, \mathcal {A}_1, \varDelta _1, \imath _1 )\) and \(M_2 = (Q_2, \mathcal {A}_2, \varDelta _2, \imath _2 )\) and two synchronization predicates \(\psi _1 \in \varPsi _1\) and \(\psi _2 \in \varPsi _2\), the natural projection on \(M_1\) of a trace \(\sigma \) of \(M_1 \parallel _{\psi _1,\psi _2} M_2\), in symbols \(P_{M_1}({\sigma })\), is defined as follows:

The natural projection on the second component \(M_2\) is analogously defined.

Also, we extend the natural projection to sets of traces in the usual way. \(\square \)

Definition 5.6

(Symbolic natural projection) Given two s-LTSs \(M_1 = (Q_1, \mathcal {A}_1, \varDelta _1, \imath _1 )\) and \(M_2 = (Q_2, \mathcal {A}_2, \varDelta _2, \imath _2 )\) and two synchronization predicates \(\psi _1 \in \varPsi _1\) and \(\psi _2 \in \varPsi _2\), the symbolic natural projection on \(M_1\) of a symbolic trace \(\eta \) of \(M_1 \parallel _{\psi _1,\psi _2} M_2\), in symbols \(\varPi _{M_1}({\eta })\), is defined as follows:

The symbolic natural projection on the second component \(M_2\) is defined analogously, and we extend this definition to sets of traces in the usual way.

The inverse projection of a trace \(\sigma \) over an s-LTS \(M_1 \parallel _{\psi _1,\psi _2} M_2\), in symbols \(\varPi ^{-1}_{M_1}({\sigma })\), is defined as \(\varPi ^{-1}_{M_1}({\sigma }) = \{ \sigma ' \mid \varPi _{M_1}({\sigma '}) = \sigma \}\), and is lifted to sets as usual. \(\square \)

The following lemma shows that the natural projection of concrete traces coincides with the “concretization”, via the function s2c, of the symbolic traces obtained through the symbolic natural projection.

Lemma 5.1

For all s-LTSs \(M_1 = (Q_1, \mathcal {A}_1, \varDelta _1, \imath _1 )\) and \(M_2 = (Q_2, \mathcal {A}_2, \varDelta _2, \imath _2 )\) and synchronization predicates \(\psi _1 \in \varPsi _1\) and \(\psi _2 \in \varPsi _2\), the following holds:

$$\begin{aligned} P_{M_i}(\llbracket {M_1 \parallel _{\psi _1,\psi _2} M_2}\rrbracket _{}^{}) = s2c(\varPi _{M_i}(tr(M_1 \parallel _{\psi _1,\psi _2} M_2) ) ) \quad (\text {with } i \in \{1,2\}) \end{aligned}$$

We now lift the definition of quotienting a system of \(\mu \)-equations \(\varPhi \) to s-LTSs. The symbolic quotienting operator is \({\varPhi }//_{\!\psi _1,\psi _2}{M}\), where \(\psi _1\) and \(\psi _2\) are the synchronization predicates for M and for the s-LTS to be synthesized, respectively. The schema is the same as in Definition 3.8, except for the cases that handle modalities. Since we are dealing with a product of EBAs, the alphabet symbols are as in Definition 5.3. Moreover, the transitions of M are now labeled by a predicate \(\psi \). Hence, an action \(d_1\) in the scope of a modality is a synchronization only if it satisfies \(\psi _1\); if it instead satisfies \(\lnot \psi _1\), it denotes an asynchronous transition. This results in checking the satisfiability of \((\psi \wedge \psi _1)(d_1)\) and \((\psi \wedge \lnot \psi _1)(d_1)\), respectively.
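As an aside, the case split on a modality action \(d_1\) described above amounts to two elementary checks. The following minimal Python sketch renders them under the assumption (made only here) that predicates are callables over concrete actions encoded as (op, var, val) triples.

```python
def modality_case(psi, psi1, d1):
    """Classify how action d1 of a modality interacts with an M-transition labeled psi."""
    if psi(d1) and psi1(d1):        # (psi /\ psi1)(d1): d1 can only arise as a synchronization
        return 'synchronous'
    if psi(d1) and not psi1(d1):    # (psi /\ not psi1)(d1): d1 is an asynchronous move of M
        return 'asynchronous'
    return 'disabled'               # this transition of M does not contribute to the modality

# With the synchronization predicate psi1 of Example 9 and a transition predicate X = L:
psi  = lambda a: a[1] == 'L'
psi1 = lambda a: a[0] == 'w' and a[1] == 'L'
print(modality_case(psi, psi1, ('w', 'L', 0)))   # synchronous
print(modality_case(psi, psi1, ('r', 'L', 0)))   # asynchronous
```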

Definition 5.7

Given a top assertion \(\varPhi \downarrow x\) over the EBA \(\mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2\), we define its quotienting on an s-LTS \(M = \langle Q, \mathcal {A}_1, \varDelta , \imath \rangle \), in symbols \({\varPhi \downarrow x}//_{\!\psi _1,\psi _2}{M}\), as follows.

\(\square \)

We next establish the correspondence between symbolic quotienting and symbolic natural projection. To this end, we must redefine the \(\mu \)-calculus state semantics of Definition 3.5 (and therefore the trace semantics of Definition 3.6), which apply to LTSs rather than to s-LTSs. The new definition is straightforward since, given an s-LTS \(M = (Q, \mathcal {A}, \varDelta , \imath )\), it only requires introducing the following notation.

$$\begin{aligned} s \xrightarrow {a}_M s' \Longleftrightarrow \exists (s, \varphi , s') \in \varDelta \text { s.t. } \varphi (a) \end{aligned}$$
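For illustration only, this notation can be read operationally as follows, assuming the transition relation of an s-LTS is stored as triples (s, φ, s′) with φ a callable predicate (an assumption of this sketch).

```python
def steps(delta, s, a):
    """Successors s2 with s --a--> s2, i.e., some (s, phi, s2) in delta such that phi(a) holds."""
    return {s2 for (src, phi, s2) in delta if src == s and phi(a)}

# Tiny usage example: a single transition guarded by X = L.
delta = [('q0', lambda act: act[1] == 'L', 'q1')]
print(steps(delta, 'q0', ('w', 'L', 0)))   # {'q1'}
print(steps(delta, 'q0', ('w', 'H', 0)))   # set()
```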

Theorem 5.1

For all \(M_1 = ( Q_1,\mathcal {A}_1, \varDelta _1, \imath _1 ), M_2 = (Q_2, \mathcal {A}_2, \varDelta _2, \imath _2 ), x\), and \(\varPhi \) on the EBA \(\mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2\), we have that

$$\begin{aligned} \langle \!\langle {{\varPhi \downarrow x}//_{\!\psi _1,\psi _2}{M_1}}\rangle \!\rangle _{}^{} = P_{M_2}({\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}}). \end{aligned}$$

As was the case for standard LTSs, the synchronous product of two s-LTSs satisfies a global equation system if and only if its components satisfy their quotients, i.e., their local assertions. Note that Lemma 5.1 lifts this result to symbolic natural projection as well.

Theorem 5.2

For all \( M_1 = (Q_1, \mathcal {A}_1, \varDelta _1, \imath _1 ), M_2 = (Q_2, \mathcal {A}_2, \varDelta _2, \imath _2 ), x\), and \(\varPhi \) on the EBA \(\mathcal {A}_1 \circledast _{\psi _1,\psi _2} \mathcal {A}_2\), we have that

$$\begin{aligned} M_1 \parallel _{\psi _1,\psi _2} M_2 \models _\varsigma \varPhi \downarrow x \quad (\varsigma \in \{ s, \sigma \}) \end{aligned}$$

if and only if any of the following equivalent statements holds:

  1. \(M_1 \models _\varsigma {\varPhi \downarrow x}//_{\!{\psi _1,\psi _2}}{M_2}\)

  2. \(M_2 \models _\varsigma {\varPhi \downarrow x}//_{\!{\psi _1,\psi _2}}{M_1}\)

  3. \(M_1 \models _\sigma P_{M_1}({\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}})\)

  4. \(M_2 \models _\sigma P_{M_2}({\langle \!\langle {\varPhi \downarrow x}\rangle \!\rangle _{}^{}})\).

5.4 Quotienting Algorithm

Before introducing the symbolic quotienting algorithm, we recall the definition of Minterms. Intuitively, Minterms are the building blocks for translating an s-LTS into an LTS that accepts an isomorphic language. Based on the predicates appearing on transitions, Minterms partition the EBA domain into a finite number of satisfiability regions; it is then immediate to define an isomorphism between these regions and a finite alphabet. Note, however, that the resulting LTS has exponentially many transitions with respect to the original s-LTS. The details of our translation are given inside the correctness proof in the “Technical Appendix”.

Definition 5.8

[16] Let \(M = \langle Q, \mathcal {A}, \imath , \varDelta \rangle \) be an s-LTS, and let F denote the set of predicates labeling the transitions of M. The set of Minterms of M is

$$\begin{aligned} { Minterms }({M}) = \bigcup \nolimits _{I \subseteq F} \{\varphi _I = \bigwedge _{\varphi \in I} \varphi \wedge \bigwedge _{\bar{\varphi } \in F {\setminus } I} \lnot \bar{\varphi }\,|\, {\mathbf {sat}}_{\mathcal {A}}({\varphi _I}) \}. \end{aligned}$$

\(\square \)
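To illustrate Definition 5.8, here is a brute-force Python sketch of the Minterm construction. Predicates are represented as callables, and satisfiability is decided by enumerating a finite sample of the action domain; both choices are assumptions of this sketch (in general one would query a solver for the underlying Boolean algebra).

```python
from itertools import chain, combinations, product

ACTIONS = list(product(['r', 'w'], ['L', 'H'], [0, 1]))   # finite sample of the action domain

def sat(pred):
    return any(pred(a) for a in ACTIONS)

def minterms(F):
    """All satisfiable conjunctions taking each predicate in F either positively or negated."""
    F = list(F)
    result = []
    subsets = chain.from_iterable(combinations(range(len(F)), k) for k in range(len(F) + 1))
    for I in subsets:
        chosen = set(I)
        def phi_I(a, chosen=chosen):
            return all(F[i](a) if i in chosen else not F[i](a) for i in range(len(F)))
        if sat(phi_I):
            result.append(phi_I)
    return result

# Two overlapping guards yield at most 2^2 = 4 Minterms; the satisfiable ones partition ACTIONS.
F = [lambda a: a[1] == 'L',      # X = L
     lambda a: a[0] == 'w']      # alpha = w
for m in minterms(F):
    print([a for a in ACTIONS if m(a)])
```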

Since our symbolic quotienting algorithm manipulates an s-LTS P encoding a specification over a parallel product \(M \parallel _{\psi _1,\psi _2} N\), the predicates on the transitions of P are four-tuples (see Definition 5.4). Therefore the same holds for \({ Minterms }({P})\).

Table 4 The symbolic quotienting algorithm for s-LTS

The symbolic quotienting algorithm is given in Table 4. It has the same structure as the algorithm of Table 2; thus, we focus here on explaining the relationship between the two.

As in the LTS case, our algorithm consists of two main procedures and an auxiliary one. The first, called quotient (Table 4), builds a non-deterministic s-LTS whose states are pairs, given a specification P, an agent M, and a pair of synchronization predicates \(\psi _1\) and \(\psi _2\). The labels record whether a transition derives from a transition of M (\(\psi _M \wedge \psi _{P|_1} \wedge \lnot \psi _1\)), from a transition of P (\(\psi _{P|_2} \wedge \lnot \psi _2\)), or denotes a synchronization with P (\( \psi _{P|_4} \wedge \psi _2\)), the last provided that \({\mathbf {sat}}_{\mathcal {A}}({\psi _M \wedge \psi _{P|_3} \wedge \psi _1})\) holds. The second procedure is unify, which differs from its analogue in Table 3 because Minterms are used in place of plain action labels. The same holds for the auxiliary \(\wedge \)-move, where the states in the intersection must be reachable through a transition (labeled with \(\varphi '\)) that is compatible with the Minterm predicate \(\varphi \), in symbols \({\mathbf {sat}}_{\mathcal {B}}({\varphi \wedge \varphi '})\).
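The following Python fragment sketches only the generation of the three groups of labels just described, for a single pair of transitions (one of M and one of P, the latter with a four-tuple label). The pairing of source and target states shown here, the representation of predicates as callables, and satisfiability by enumeration over a sample domain are assumptions of this sketch; the actual bookkeeping is that of Table 4.

```python
from itertools import product

ACTIONS = list(product(['r', 'w'], ['L', 'H'], [0, 1]))
sat  = lambda p: any(p(a) for a in ACTIONS)
conj = lambda p, q: (lambda a: p(a) and q(a))
neg  = lambda p: (lambda a: not p(a))

def quotient_transitions(tM, tP, psi1, psi2):
    """Candidate quotient transitions from tM = (p, psiM, p2) and tP = (q, 4-tuple, q2)."""
    (p, psiM, p2) = tM
    (q, (pP1, pP2, pP3, pP4), q2) = tP
    d_lambda, d_B, d_star = [], [], []
    # M moves alone (cf. the transitions collected in Delta-bar-lambda in Example 11).
    lab = conj(conj(psiM, pP1), neg(psi1))
    if sat(lab):
        d_lambda.append(((q, p), lab, (q2, p2)))
    # The missing component moves alone; M's state does not change.
    lab = conj(pP2, neg(psi2))
    if sat(lab):
        d_B.append(((q, p), lab, (q2, p)))
    # Synchronization with P, enabled only if psiM, pP3 and psi1 are jointly satisfiable.
    if sat(conj(conj(psiM, pP3), psi1)):
        d_star.append(((q, p), conj(pP4, psi2), (q2, p2)))
    return d_lambda, d_B, d_star
```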

The symbolic quotienting algorithm is also correct with respect to the quotienting operator above (see the “Technical Appendix”). As expected, it runs in EXPTIME, because of the satisfiability requirements and because the number of Minterms grows exponentially with the number of transitions of the s-LTS. Of course, one could first transform an s-LTS into an LTS using Minterms and then apply the quotienting algorithm of Sect. 4. The overall process still requires EXPTIME. However, the partial specification obtained in this way is an LTS, and thus lacks the expressive power of the corresponding s-LTS obtained through symbolic quotienting.
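As a side note, the Minterm-based transformation of an s-LTS into an LTS mentioned above can be sketched as follows. This is one natural rendering, not the authors' translation from the Technical Appendix; predicates as callables and satisfiability by enumeration are again assumptions of the sketch.

```python
from itertools import product

ACTIONS = list(product(['r', 'w'], ['L', 'H'], [0, 1]))
sat  = lambda p: any(p(a) for a in ACTIONS)
conj = lambda p, q: (lambda a: p(a) and q(a))

def slts_to_lts(states, delta, init):
    """Translate an s-LTS (states, delta, init) into an LTS over the alphabet of Minterm indices."""
    preds = [phi for (_, phi, _) in delta]
    # Minterms as sign assignments over the transition predicates (exponentially many in general).
    minterms = []
    for signs in product([True, False], repeat=len(preds)):
        def m(a, signs=signs):
            return all(p(a) if s else not p(a) for p, s in zip(preds, signs))
        if sat(m):
            minterms.append(m)
    # One LTS transition (s, k, s2) for every s-LTS transition whose predicate overlaps Minterm k.
    lts_delta = {(s, k, s2)
                 for (s, phi, s2) in delta
                 for k, m in enumerate(minterms)
                 if sat(conj(phi, m))}
    return states, list(range(len(minterms))), lts_delta, init
```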

Example 11

We apply the algorithm of Table 4 to compute the quotient \({W}//_{\!\psi _1, \psi _2}{M_1}\), where W is the specification of Example 10 depicted in Fig. 8, \(M_1\) is the s-LTS of the consumer of Example 8, \(\psi _1 = \alpha (X,n) : X = L \wedge \alpha = w\) and \(\psi _2 = \beta (Y, m) : Y = L\).

First notice that (for some q and \(q'\)) each transition in \(\bar{\varDelta }_{\lambda }\) has the form \((q, \psi _M \wedge \psi _{P|_1} \wedge \lnot \psi _1, q')\). However, \(\psi _M \wedge \lnot \psi _1\) is satisfiable only if the sub-formula \(\alpha (X,n) : X = L \wedge X \ne L\) is satisfiable, which is trivially false. For this reason \(\bar{\varDelta }_{\lambda } = \emptyset \).

Since \(\bar{\varDelta }_{\lambda } = \emptyset \), the set of transitions of the resulting s-LTS is given by \(\bar{\varDelta }_{B} \cup \bar{\varDelta }^{*}\). Figure 9 shows the result, where we use different edge thicknesses to distinguish the transitions of \(\bar{\varDelta }_{B}\) from those of \(\bar{\varDelta }^{*}\).

Fig. 9 The s-LTS corresponding to \({W}//_{\!\psi _1, \psi _2}{M_1}\). Bold edges denote transitions in \(\bar{\varDelta }^*\)

6 Related Work

Natural projection is mostly used by the community working on control theory and discrete-event systems. In the 1980s, the seminal works by Wonham et al. (e.g., [41, 42]) exploited natural projection-based algorithms for synthesizing both local and global controllers. Other authors continued this line of research and proposed extensions and refinements of these methods; see, e.g., [18, 19, 30, 39].

Partial model checking has been successfully applied to the synthesis of controllers. Given an automaton representing a plant and a \(\mu \)-calculus formula, Basu and Kumar [7] compute the quotient of the specification with respect to the plant. The satisfiability of the resulting formula is checked using a tableau that also returns a valid model yielding the controller. Their tableau works similarly to our quotienting algorithm, but applies to a more specific setting, as they are interested in generating controllers. In contrast, Martinelli and Matteucci [32] use partial model checking to generate a control process for a partially unspecified system in order to guarantee compliance with respect to a \(\mu \)-calculus formula. The generated controller takes the form of an edit automaton [8]. A quotienting-based approach was also proposed for real-time [29] and hybrid [12] systems. These paradigms aim to accurately model the behavior of, e.g., cyber-physical systems.

Some researchers have proposed techniques based on the verification of temporal logics for addressing the controller synthesis problem. Arnold et al. [5] were among the first to control a deterministic plant with a \(\mu \)-calculus specification. Also Ziller and Schneider [43] and Riedweg and Pinchinat [34] reduce the problem of synthesizing a controller to checking the satisfiability of a formula in (a variant of) the \(\mu \)-calculus. Similar approaches were presented by Jiang and Kumar [25] and Gromyko et al. [22]. Like [43] and [34], the authors of [25] reduce the problem of synthesizing a controller to that of checking a CTL\(^\star \) formula’s satisfiability. In contrast, [22] proposes a method based on symbolic model checking to synthesize controllers; their approach applies to a fragment of CTL.

7 Conclusion

Our work provides results that build a bridge between supervisory control theory and formal verification. In particular, we have formally established the relationship between partial model checking and natural projection by reducing natural projection to partial model checking and proving their equivalence under common assumptions. Besides using plain Labeled Transition Systems for expressing system specifications, we also considered symbolic Labeled Transition Systems, whose transitions carry predicates over elements of possibly infinite Boolean algebras instead of letters. Dealing with this richer model required us to introduce new notions, including a new symbolic synchronous product and new symbolic versions of partial model checking and natural projection.

Aside from establishing novel and particularly relevant connections, our work also opens new directions for investigation. Since (symbolic) natural projection is related to language theory in general, there could be other application fields where (symbolic) partial model checking can be used as an alternative. The original formulation of partial model checking applies to the \(\mu \)-calculus, while our quotienting algorithm works on (symbolic) Labeled Transition Systems. To the best of our knowledge, no quotienting algorithms exist for formalisms with a different expressive power, such as LTL or CTL, let alone for symbolic variants of them.

We are also developing PESTS, a working prototype that handles both LTSs and s-LTSs. The source code and the documentation of our tool are available at https://github.com/gabriele-costa/pests, along with the experiments mentioned below. The performance of PESTS was experimentally assessed in [13] and the results are on the website under the heading “TACAS Experiments”. The experiments consisted in solving instances of increasing size of CSP and SCP for LTSs modeling an Unmanned Aerial Vehicle delivery system. Furthermore, we applied PESTS to a more realistic case study concerning the verification of the LTSs modeling a flexible manufacturing system (see Footnote 3), available under the heading “Flexible manufacturing system”.