1 Introduction

Membrane systems, also known as P systems, were originally defined by Gheorghe Păun in 1998, see Păun (1998). They are a model of computing based on the abstract notion of a membrane, which can be seen as a container delimiting a region of objects that are acted upon by the rewriting rules associated with the membrane. Quite often, these objects are multisets (for basic results on multiset computing see, for example, Kudlek et al. 2001), yet P systems operating on more complex objects (e.g., strings, arrays) are often considered, too; for instance, see Freund (2005).

A comprehensive overview of different flavors of membrane systems and their expressive power is given in the handbook which appeared in 2010, see Păun et al. (2010). For a state of the art snapshot of the domain, we refer the reader to the P systems website (http://ppage.psystems.eu/) as well as to the Bulletin of the International Membrane Computing Society (http://membranecomputing.net/IMCSBulletin/index.php).

Nearly thirty years ago, the monograph on regulated rewriting by Dassow and Păun (1989) already gave a first comprehensive overview of many concepts of regulated rewriting, especially for the string case. Yet as it turned out later, many of the mechanisms considered there for guiding the application of productions/rules can also be applied to objects other than strings, e.g., to n-dimensional arrays (Freund 1994). As exhibited in Freund et al. (2011), for comparing the generating power of grammars working in the sequential derivation mode, many relations between various regulating mechanisms can be established in a very general setting without any reference to the underlying objects the rules are working on, using a general model for graph-controlled, programmed, random-context, and ordered grammars of arbitrary type based on the applicability of rules. Also in the field of P systems (Păun et al. 2010; http://ppage.psystems.eu/), where mainly multisets have been considered, such regulating mechanisms were used, e.g., see Cavaliere et al. (2007).

Dynamic evolution of the set of available rules has been considered from the very beginning of membrane computing. Already in 1999, generalized P systems were introduced in Freund (1999); in these systems the membranes, alongside the objects, contain operators which act on these objects, while the P system itself acts on the operators, thereby modifying the transformations which will be carried out on the objects in the subsequent steps. Among further ideas on dynamic rules, one may list rule creation (Arroyo et al. 2002), activators (Alhazov 2004), inhibiting/deinhibiting rules (Cavaliere et al. 2004), and symport/antiport of rules (Cavaliere and Genova 2004). Among the more recent developments in this direction are polymorphic P systems (Alhazov et al. 2015, 2011; Ivanov 2014), in which rules are defined by pairs of membranes, whose contents may be modified by moving objects in or out, as well as P systems with randomized right-hand sides of rules (Alhazov et al. 2017, 2018a), where the right-hand sides are chosen randomly and in different ways from the given set of rules.

We here follow an approach first elaborated in Alhazov et al. (2018b) and then continued in Alhazov et al. (2018e), where in the general framework of sequential systems the applicability of rules is controlled by the application of rules in the preceding derivation step(s). The application of a rule in one derivation step may either activate some rules to be applied in the next derivation step(s) or may block their application. We immediately observe that the application of a rule requires its activation in a preceding step. A computation may also take derivation steps without applying a rule as long as there are some rules activated for future derivation steps. In contrast to the general framework for control mechanisms as described in Freund et al. (2011), here we are not dealing with the applicability of rules itself but with the possible activation or blocking of rules by the effective application of rules in preceding steps.

The idea of using activation and blocking of rules in the area of membrane systems was first considered in Alhazov et al. (2018c) and then in Alhazov et al. (2018d). In this paper we extend the results for P systems with activation and blocking of rules already obtained in these papers for several variants of P systems. For example, we will establish computational completeness results for various kinds of one-membrane P systems (resembling multiset grammars) and several derivation modes, using activation and blocking of rules to be applied in succeeding derivation steps. Depending on the derivation mode and the halting condition, the complexity of the systems may vary with respect to the number of steps for which rules are activated ahead or with respect to whether the use of blocking rules is needed. Allowing a derivation step without applying a rule enables the P system to check for the appearance of a symbol in the current multiset.

We may even allow the application of rules to influence previous derivation steps. Using a conservative semantics that considers a derivation to be consistent only when such backwards activations or blockings of rules do not change the correctness of the derivation, we cannot “go beyond Turing”. On the other hand, this can be achieved by allowing such backwards information to change past configurations by triggering the applications of newly activated rules and by using a less conservative semantics looking at infinite computations on finite multisets as in “red-green P automata” (for instance, see Freund 2016).

Various possibilities of how one may “go beyond Turing” are discussed in van Leeuwen and Wiedermann (2012), for example, the definitions and results for red-green Turing machines can be found there. In Aman et al. (2014) the notion of red-green automata for register machines with input strings given on an input tape (often also called counter automata) is introduced and the concept of red-green P automata for several specific models of membrane systems is explained. Via red-green counter automata, the results for acceptance and recognizability of finite strings by red-green Turing machines are carried over to red-green P automata. The basic idea of red-green automata is to distinguish between two different sets of states (red and green states) and to consider infinite runs of the automaton on finite input objects (strings, multisets); allowed to change between red and green states more than once, red-green automata can recognize more than the recursively enumerable sets (of strings, multisets), i.e., in that way one can “go beyond Turing”. In the area of P systems, first attempts to do that can be found in Calude and Păun (2004) and Sosík and Valík (2006). Computations with infinite words by P automata were investigated in Freund et al. (2004).

In Freund et al. (2015, 2016), infinite runs of P automata are considered, taking into account the existence/ non-existence of a recursive feature of the current sequence of configurations. In that way, infinite sequences over \(\left\{ 0,1\right\}\), called “observer languages”, are obtained where 1 indicates that the specific feature is fulfilled by the current configuration and 0 indicates that this specific feature is not fulfilled. The recognizing runs of red-green automata then correspond with \(\omega\)-regular languages over \(\left\{ 0,1\right\}\) of a specific form ending with \(1^{\omega }\) as observer languages. The special observer language \(\left\{ 0,1\right\} ^{*}\left\{ 1\right\} ^{\omega }\) corresponds with the acceptance condition for P automata called “partial adult halting”. This special acceptance variant for P automata with infinite runs on finite multisets is motivated by an observation made for infinite sequences of infinite runs of a given P automaton: at some moment, a specific initial part in the sequence of configurations in the infinite sequence of runs does not change any more, for example, the initial configuration.

We now may also consider variants of P systems with activation and blocking of rules as well as infinite computations on a given finite multiset. A sequence of such infinite computations is called valid if each prefix of these computations becomes stable, i.e., neither the configuration itself nor the set of applicable rules changes any more. This less conservative semantics for activating and/or blocking the rules in preceding computations allows us to take the infinite sequence of stable configurations obtained in this way as the final computation on the given input. Provided such a computation—obtained as the limit of a valid sequence of computations—exists, we may just consider the result of the first computation step and thus the second configuration to see whether the input has been accepted. Again this can be seen as looking at a specific initial part of the computations and requiring that it does not change any more, but also requesting that the whole computation converges.

In the following section, we recall some notions from formal language theory; in the succeeding section the main definitions of the general framework for P systems working under different derivation modes (see Freund and Verlan 2007) are given. Then we define the new concept of activation and blocking of rules based on the applicability of rules within this general framework of static P systems, i.e., of P systems where the membrane structure does not change during any computation. In Sect. 5 we prove first results only using activation of rules for the next step. We also establish relations to the family of sets of multisets generated by tabled Lindenmayer systems and by P systems with promoters and inhibitors.

Computational completeness results using both activation and blocking of rules for at most the next two steps are established in Sect. 6, for different types of rules and various derivation modes, where we use the feature to allow at least one derivation step in between where no rule is applied. Then we extend our systems by allowing activation and blocking of rules in previous derivation steps in Sect. 7, and finally discuss how to “go beyond Turing” in Sect. 8. A summary of the results obtained in this paper and some future research topics extending the notions and results considered in this paper are given in Sect. 9.

2 Definitions

After some preliminaries from formal language theory, we define our model for hierarchical P systems in the general setting of this paper as well as the main derivation modes considered in the area of membrane systems, see Freund and Verlan (2007).

2.1 Preliminaries

The set of integers is denoted by \({\mathbb {Z}}\), the set of non-negative integers (natural numbers) by \({\mathbb {N}}_0\), and the set of positive integers by \({\mathbb {N}}\). An alphabet V is a finite non-empty set of abstract symbols. Given V, the free monoid generated by V under the operation of concatenation is denoted by \(V^{*}\); the elements of \(V^{*}\) are called strings, and the empty string is denoted by \(\lambda\); \(V^{*}{\setminus } \left\{ \lambda \right\}\) is denoted by \(V^{+}\). Let \(V=\left\{ a_{1},\dots ,a_{n}\right\}\) be an arbitrary alphabet and let \(x\in V^{*}\); the number of occurrences of a symbol \(a_{i}\) in x is denoted by \(\left| x\right| _{a_{i}}\); the Parikh vector associated with x with respect to \(a_{1},\dots ,a_{n}\) is \(\left( \left| x\right| _{a_{1}},\dots ,\left| x\right| _{a_{n}}\right)\). The Parikh image of a language L over \(\left\{ a_{1},\dots ,a_{n}\right\}\) is the set of all Parikh vectors of strings in L, and we denote it by \(Ps\left( L\right)\). For a family of languages FL, the family of Parikh images of languages in FL is denoted by PsFL. The families of regular and recursively enumerable string languages are denoted by REG and RE, respectively.

A (finite) multiset u over the (finite) alphabet V, \(V=\left\{ a_{1},\dots ,a_{n}\right\}\), is a mapping \(f:V\longrightarrow {\mathbb {N}}_0\) and can be represented by any string x the Parikh vector of which with respect to \(a_{1},\dots ,a_{n}\) is \(\left( f\left( a_{1}\right) ,\dots ,f\left( a_{n}\right) \right)\). The weight of u is defined as \(f\left( a_{1}\right) +\dots +f\left( a_{n}\right)\). The set of all finite multisets over an alphabet V is denoted by \(V^{\circ }\).
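As a small illustration (our own example, not from the paper; Python's Counter serves as the multiset data type), the string aab over \(V=\{a,b,c\}\) represents the multiset u with \(f(a)=2\), \(f(b)=1\), \(f(c)=0\), of weight 3:

```python
from collections import Counter

# The multiset represented by the string "aab" over the alphabet (a, b, c),
# together with its Parikh vector and its weight.
alphabet = ("a", "b", "c")
x = "aab"
u = Counter(x)                           # multiset: {'a': 2, 'b': 1}
parikh = tuple(u[s] for s in alphabet)   # Parikh vector (2, 1, 0)
weight = sum(u.values())                 # weight 3
print(parikh, weight)
```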

For more details of formal language theory the reader is referred to the monographs and handbooks in this area (Dassow and Păun 1989; Rozenberg and Salomaa 1997).

2.2 Register machines

As a computationally complete model able to generate (accept) all sets in PsRE we will use register machines:

A register machine is a construct \(M=\left( n,H,R_M,p_{0},h\right)\) where n, \(n\ge 1\), is the number of registers (each of them contains a non-negative integer), H is the set of instruction labels, \(p_{0}\) is the start label, h is the halting label (only used for the \({\mathtt {HALT}}\) instruction), and \(R_M\) is a set of (labeled) instructions being of one of the following forms:

  • \(p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right)\) increments the value in register r and in a non-deterministic way chooses to continue either with the instruction labeled by q or with the instruction labeled by s;

  • \(p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right)\) decrements the value in register r and continues the computation with the instruction labeled by q if the register was non-empty, otherwise it continues with the instruction labeled by s;

  • \(h:{\mathtt {HALT}}\) halts the machine.

M is called deterministic if in all ADD-instructions \(p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right)\), it holds that \(q=s\); in this case we write \(p:\left( {\mathtt {ADD}}\left( r\right) ,q\right)\). Deterministic register machines can accept all recursively enumerable sets of vectors of natural numbers with k components using precisely \(k+2\) registers, see Minsky (1967).
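For illustration, the following minimal sketch (our own encoding of instructions, not taken from the paper or from Minsky 1967) executes a deterministic register machine program of the form described above:

```python
def run_register_machine(program, p0, registers):
    """Run a deterministic register machine. `program` maps a label to
    ('ADD', r, q), ('SUB', r, q, s) or ('HALT',); `registers` maps register
    indices to non-negative integers."""
    regs = dict(registers)
    label = p0
    while True:
        instr = program[label]
        if instr[0] == 'HALT':
            return regs
        if instr[0] == 'ADD':
            _, r, q = instr
            regs[r] = regs.get(r, 0) + 1
            label = q
        else:  # SUB: decrement if possible and branch accordingly
            _, r, q, s = instr
            if regs.get(r, 0) > 0:
                regs[r] -= 1
                label = q
            else:
                label = s

# Example program: add the contents of register 2 to register 1 and halt.
program = {
    'p0': ('SUB', 2, 'p1', 'h'),
    'p1': ('ADD', 1, 'p0'),
    'h':  ('HALT',),
}
print(run_register_machine(program, 'p0', {1: 3, 2: 2}))   # {1: 5, 2: 0}
```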

2.3 ET0L-systems

Lindenmayer systems (with tables of rules) are a well-known parallel mechanism for generating strings. An ET0L-system is a construct

$$\begin{aligned} G=(V,T,R_1,\dots ,R_m,w) \ {\text { where }} \end{aligned}$$
  • V is the alphabet of objects;

  • \(T\subseteq V\) is the alphabet of terminal objects;

  • \(w\in V^{*}\) is the initial string;

  • \(R_i\), \(1\le i\le m\), is a finite set of non-cooperative rules, i.e., rules of the form \(X\rightarrow u\) with \(X\in V\) and \(u\in V^*\). We denote \(R=\bigcup _{1\le i\le m}R_i\).

A computation in G starts with the initial string w, and in each derivation step the rules from one of the rule sets \(R_i\), also called tables, are applied in the (maximally) parallel way; a successful computation ends with a terminal string over the terminal alphabet T.

An ET0L-system can also be seen as a multiset generating mechanism, i.e., we start with the initial object w being a multiset and then apply the rules in the tables as multiset rules. The terminal results then are multisets over the terminal alphabet T. To emphasize the multiset character of the system, we also call it an mET0L-system. The family of sets of multisets generated by mET0L-systems with at most n tables is denoted by \(PsET0L_n\). We omit the suffix n if the number of tables is not bounded.

We also consider ET0L-systems and mET0L-systems with promoters and inhibitors, i.e., the rules in G are of the form \((p,P,Q)\) where \(p\in R\) and P, Q are finite sets of finite multisets. The multisets in P are called promoters, those in Q inhibitors. A rule \((p,P,Q)\) is applicable to a multiset w if and only if every multiset in P and none of the multisets in Q is contained in w.

An ET0L-system/mET0L-system is said to be of type \((pro_{i,j},inh_{k,m})\) if the number of multisets in the sets of promoters and inhibitors is limited by i and k, respectively, and the weight of the multisets in the sets of promoters and inhibitors does not exceed j and m, respectively. The family of sets of multisets generated by mET0L-systems with n tables of rules of type \((pro_{i,j},inh_{k,m})\) is denoted by \(PsET0L_n(pro_{i,j},inh_{k,m})\).
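The applicability condition for rules with promoters and inhibitors can be sketched as follows (an illustrative encoding of ours; the applicability of the underlying rule p itself has to be checked separately):

```python
from collections import Counter

def contains(w, v):
    """True iff the multiset v is a submultiset of the multiset w."""
    return all(w[x] >= n for x, n in v.items())

def pi_applicable(P, Q, w):
    """A rule (p, P, Q) is applicable to w iff every promoter multiset in P
    and none of the inhibitor multisets in Q is contained in w."""
    return all(contains(w, v) for v in P) and not any(contains(w, v) for v in Q)

# Example: promoter set {a}, inhibitor set {bb}, current multiset Xaab.
w = Counter("Xaab")
print(pi_applicable([Counter("a")], [Counter("bb")], w))   # True
```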

3 A general model for hierarchical P systems

We first recall the main definitions of the general model for hierarchical P systems and the basic derivation modes as defined, for example, in Freund and Verlan (2007).

A (hierarchical) P system of type X working in the derivation mode \(\delta\) is a construct

$$\begin{aligned} \varPi =\left( V,T,\mu ,w_1,\dots ,w_m,R_1,\dots ,R_m,f, {\Longrightarrow }_{\varPi ,\delta }\right)\,\, {\mathrm {where}} \end{aligned}$$
  • V is the alphabet of objects;

  • \(T\subseteq V\) is the alphabet of terminal objects;

  • \(\mu\) is the hierarchical membrane structure (a rooted tree of membranes) with the membranes uniquely labeled by the numbers from 1 to m;

  • \(w_i\in V^{*}\), \(1\le i\le m\), is the initial multiset in membrane i;

  • \(R_i\), \(1\le i\le m\), is a finite set of rules of type X assigned to membrane i;

  • f is the label of the membrane from which the result of a computation is taken (in the generative case) or into which the input multiset is given in addition to \(w_f\) (in the accepting case);

  • \({\Longrightarrow }_{\varPi ,\delta }\) is the derivation relation under the derivation mode \(\delta\).

3.1 Types of rules

The notion “rules of type X”, for example, may stand for “evolution rules” or “communication rules”. We now give examples for types of rules to be used further in this paper.

3.1.1 Evolution rules

In general, an evolution rule is of the form \(u\rightarrow v\) with \(u,v\in V^*\). An evolution rule \(u\rightarrow v\) is called “non-cooperative” (abbreviated “ncoo”) if \(u\in V\), otherwise it is called “cooperative” (abbreviated “coo”). An evolution rule is applied within the region of the membrane it is assigned to. The evolution rule \(u\rightarrow v\) can be applied in a membrane region if and only if the multiset in this region contains u as a submultiset. The application of \(u\rightarrow v\) eliminates u and adds v. Targets \(out\) and \(in_j\) may also be assigned to the symbols in v: the target out means that the corresponding symbol is sent out to the surrounding membrane, whereas the target \(in_j\) means that the corresponding symbol is sent into the inner membrane labeled by j.

3.1.2 Communication rules

A communication rule is of the form u|v with \(u,v\in V^*\) and not both being empty. The communication rule u|v has the effect that u inside the membrane the rule is assigned to is exchanged with v in the outer region of this membrane (if u and v are available in the respective membrane regions). If both u and v are not empty, then u|v is called an antiport rule, otherwise a symport rule. The type of rules \(anti_{i,j}\) indicates that for every antiport rule u|v in the P system the conditions \(1\le |u| \le i\) and \(1\le |v| \le j\) hold. The type of rules \(sym_{i}\) indicates that for every symport rule \(u|\lambda\) or \(\lambda |v\) in the P system the condition \(1\le |u| \le i\) or \(1\le |v| \le i\), respectively, holds.

3.1.3 Insertion, deletion, and substitution rules

For an alphabet V, let \(a\rightarrow b\) be a rewriting rule with \(a,b\in V\cup \{\lambda \}\), and \(ab\ne \lambda\); we call such a rule a substitution rule if both a and b are different from \(\lambda\) and we also write S(ab); such a rule is called a deletion rule if \(a\ne \lambda\) and \(b=\lambda\), and it is also written as D(a); \(a\rightarrow b\) is called an insertion rule if \(a=\lambda\) and \(b\ne \lambda\), and we also write I(b). The sets of all insertion rules, deletion rules, and substitution rules over an alphabet V are denoted by \(Ins_{V}\), \(Del_{V}\), and \(Sub_{V}\), respectively. Whereas an insertion rule is always applicable, the applicability of a deletion and a substitution rule depends on the presence of the symbol a. In the case of a multiset this only means incrementing the number of symbols b, decrementing the number of symbols a, or decrementing the number of symbols a and at the same time incrementing the number of symbols b.
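On multisets, the effect of these rules can be sketched as follows (an illustration with our own encoding; \(\lambda\) is represented by None):

```python
from collections import Counter

def apply_point_rule(ms, a, b):
    """Apply a -> b with a, b in V ∪ {λ} (λ encoded as None) to the multiset
    `ms`: insertion I(b) if a is None, deletion D(a) if b is None, and
    substitution S(a, b) otherwise; returns None if the rule is not applicable."""
    out = Counter(ms)
    if a is not None:
        if out[a] == 0:
            return None        # D(a) and S(a, b) require an occurrence of a
        out[a] -= 1
    if b is not None:
        out[b] += 1
    return +out                # +Counter drops zero counts

print(apply_point_rule(Counter("aab"), "a", "c"))   # S(a, c): {'a': 1, 'b': 1, 'c': 1}
print(apply_point_rule(Counter("b"), "a", None))    # D(a): None (not applicable)
```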

These types of rules and the corresponding notations can be extended by allowing more than one symbol on the left-hand and/or the right-hand side, i.e., \(a,b\in V^*\), and \(ab\ne \lambda\). The corresponding sets of all extended insertion rules, deletion rules, and substitution rules over an alphabet V are denoted by \(Ins^*_{V}\), \(Del^*_{V}\), and \(Sub^*_{V}\), respectively.

3.2 Derivation modes

The set of all multisets of rules applicable in each membrane to a given configuration can be restricted by imposing specific conditions, thus yielding the following basic derivation modes (for example, see Freund and Verlan (2007) for formal definitions):

  • asynchronous mode (abbreviated asyn): at least one rule is applied;

  • sequential mode (sequ): exactly one rule is applied;

  • maximally parallel mode (max): a non-extendable multiset of rules is applied;

  • maximally parallel mode with maximal number of rules (\(max_{rules}\)): a non-extendable multiset of rules of maximally possible cardinality is applied;

  • maximally parallel mode with maximal number of objects (\(max_{objects}\)): a non-extendable multiset of rules affecting as many objects of the current configuration as possible is applied.

In Alhazov et al. (2016), these derivation modes are restricted in such a way that each rule can be applied at most once, thus yielding the set modes sasyn, smax, \(smax_{rules}\), and \(smax_{objects}\) (the sequential mode is already a set mode by definition).
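For non-cooperative rules, the conditions imposed by these derivation modes on a chosen multiset of rules can be sketched as follows (an illustrative check of ours, not a complete interpreter; \(max_{rules}\) and \(max_{objects}\) additionally maximize the number of rules and of affected objects, respectively, among the non-extendable multisets of rules):

```python
from collections import Counter

# A non-cooperative rule is encoded as (lhs_symbol, rhs_string); a candidate
# multiset of rules is a Counter over rule indices.
def applicable(w, chosen, rules):
    need = Counter()
    for idx, k in chosen.items():
        need[rules[idx][0]] += k
    return all(w[a] >= n for a, n in need.items())

def satisfies_mode(w, chosen, rules, mode):
    """Check the condition of the basic modes for a chosen multiset of rules."""
    if not applicable(w, chosen, rules):
        return False
    if mode == "sequ":
        return sum(chosen.values()) == 1
    if mode == "asyn":
        return sum(chosen.values()) >= 1
    if mode == "max":       # non-extendable by any further rule instance
        return not any(applicable(w, chosen + Counter({i: 1}), rules)
                       for i in range(len(rules)))
    if mode == "smax":      # set mode: each rule at most once, non-extendable
        return (all(k <= 1 for k in chosen.values()) and
                not any(chosen[i] == 0 and
                        applicable(w, chosen + Counter({i: 1}), rules)
                        for i in range(len(rules))))
    raise ValueError(mode)
```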

A configuration is a list of the contents of each cell; a sequence of configurations \(C_1,\dots ,C_k\) is called a computation in the derivation mode \(\delta\) if \(C_i{\Longrightarrow }_{\varPi ,\delta } C_{i+1}\) for \(1\le i<k\). The derivation relation \({\Longrightarrow }_{\varPi ,\delta }\) is defined by the set of rules in \(\varPi\) and the given derivation mode which determines the multiset of rules to be applied to the multisets contained in each membrane.

3.3 Computations

As we are dealing with membrane systems, we consider the classic halting condition, i.e., the computation ends when no rule can be applied any more.

The set of all terminal multisets obtained as results of halting computations in \(\varPi\) by using the derivation mode \(\delta\) is denoted by \(Ps(\varPi ,gen,\delta )\), with gen indicating that \(\varPi\) is considered as a generating device; if we are only interested in the number of symbols in the resulting multiset, the corresponding set of natural numbers is denoted by \(N(\varPi ,gen,\delta )\).

The families of sets of (k-dimensional) vectors of natural numbers and sets of natural numbers generated by P systems with at most n cells using rules of type X in the derivation mode \(\delta\) are denoted by \(Ps(OP_n(X),gen,\delta )\) and \(N(OP_n(X),gen,\delta )\), respectively. If n is not bounded, we simply omit the subscript in these notations.

We also consider P systems as accepting mechanisms: in membrane f, we add the input multiset \(w_0\) to \(w_f\) in the initial configuration \(C_1=(w_1,\dots ,w_m)\) thus obtaining \(C_1[w_0]=(w_1,\dots ,w_fw_0,\dots ,w_m)\); the input multiset \(w_0\) is accepted if there exists a halting computation in the derivation mode \(\delta\) starting from \(C_1[w_0]\).

The families of sets of (k-dimensional) vectors of natural numbers and sets of natural numbers accepted by P systems with at most n cells using rules of type X in the derivation mode \(\delta\) are denoted by \(Ps(OP_n(X),acc,\delta )\) and \(N(OP_n(X),acc,\delta )\), respectively. If n is not bounded, we simply omit the subscript in these notations.

3.4 Flattening

Many variants of P systems can be flattened to only one membrane: any object a in membrane i can be represented by an object \([a,i]\) in the skin membrane, the outermost membrane of the P system; the rules then can be adapted accordingly. For further details of the flattening procedure we refer to Freund et al. (2014).

For example, an evolution rule \(a\rightarrow (b,out)\) assigned to membrane 2 inside the skin membrane 1 then is replaced by the rule \([a,2]\rightarrow [b,1]\) in the single membrane 1 of the corresponding flattened P system.

For communication rules, a second region is necessary, but in this case we can restrict ourselves to the regions inside and outside the skin membrane. If we assume that in the region outside the skin membrane, the environment, all objects are available in an unbounded number, then the communication rule u|v assigned to the skin membrane has the same effect as the evolution rule \(u\rightarrow v\). On the other hand, symport rules of the form \(a|\lambda\) and \(\lambda |a\) correspond to the deletion rule D(a) and the insertion rule I(a), respectively.

Throughout the paper we therefore assume the simplest membrane structure consisting of only one membrane, which in effect reduces the P system to a multiset processing mechanism; observing that \(f=1\), in what follows we will use the reduced notation

$$\begin{aligned} \varPi =\left( V,T,w,R, {\Longrightarrow }_{\varPi ,\delta }\right) . \end{aligned}$$

3.5 P systems with promoters and inhibitors

As for mET0L-systems we can also consider the rules in a P system

$$\begin{aligned} \varPi =\left( V,T,w,R, {\Longrightarrow }_{\varPi ,\delta }\right) \end{aligned}$$

to be rules with promoters and inhibitors, i.e., the rules in \(\varPi\) are of the form \((p,P,Q)\) where \(p\in R\) and P, Q are finite sets of finite multisets; the multisets in P are the promoters, those in Q are the inhibitors. A rule \((p,P,Q)\) is applicable to a multiset w if and only if every multiset in P and none of the multisets in Q is contained in w.

A P system using rules of type X together with promoters and inhibitors is called of type \((X,pro_{i,j},inh_{k,m})\) if the number of multisets in the sets of promoters and inhibitors is limited by i and k, respectively, and the number of objects in the multisets of the sets of promoters and inhibitors does not exceed j and m, respectively. The family of sets of multisets generated by P systems of type \((X,pro_{i,j},inh_{k,m})\) with the derivation mode \(\delta\) is denoted by \(Ps(OP(X,pro_{i,j},inh_{k,m}),gen,\delta )\). We omit \(inh_{k,m}\) or \(pro_{i,j}\) if no inhibitors or no promoters are used.

4 P systems with activation and blocking of rules

We now define our new concept of regulating the application of rules at a specific moment by activation and blocking relations for P systems (with only one membrane).

A P system with activation and blocking of rules of type X (an AB-P system of typeX for short) working in the derivation mode \(\delta\) is a construct

$$\begin{aligned} \varPi _{AB}=\left( \varPi ,L,f_L,A,B,L_1,\Longrightarrow _{\varPi _{AB},\delta }\right) \end{aligned}$$

where \(\varPi =\left( V,T,w,R, {\Longrightarrow }_{\varPi ,\delta }\right)\) is a P system of type X, L is a finite set of labels, \(f_L\) assigns one or more labels to each rule from R, A and B are finite subsets of \(L\times L\times {\mathbb {N}}\), and \(L_1\subseteq L\) describes the set of rules which may be used in the first derivation step. The elements of A and B are of the form \(\left( p,q,t\right)\) with \(p,q\in L\) and \(t\in {\mathbb {N}}\); \(\left( p,q,t\right)\) indicates that t steps in the future the application of p activates (for A) or blocks (for B) the application of the rule q.

Now let \({\Longrightarrow }_{\varPi /P,\delta }\), for any set of rules \(P\subseteq R\), denote the derivation relation obtained from \({\Longrightarrow }_{\varPi ,\delta }\) by reducing the set of available rules from R to P. Then a sequence of multisets \(w_{i}\in V^{\circ },\)\(0\le i\le n\), with \(w_0=w\) is called a valid derivation of \(z=w_n\)—we also write \(w_{0}\Longrightarrow _{\varPi _{AB}, \delta } w_1\Longrightarrow _{\varPi _{AB},\delta } \dots w_n\)—if and only if, with \(P_k\) denoting the set of rules applied to \(w_k\) in the kth derivation step, for every i, \(0\le i< n\), the following conditions hold true:

  • either \(w_{i}\Longrightarrow _{\varPi /P_i,\delta }w_{i+1}\), where \(P_i\) is the set of all rules r (identified by their labels) for which there is a relation \(\left( r_j,r,t\right) \in A\) with \(i-j=t\) for some rule \(r_j\) applied in the jth derivation step, which means that the application of \(r_j\) in the jth derivation step has activated rule r, possibly to be applied in the ith derivation step, and for which there is no relation \(\left( r_j,r,s\right) \in B\) with \(i-j=s\) for a rule \(r_j\) applied in the jth derivation step, which would block rule r from being applied in the ith derivation step, or

  • no rule from \(P_i\) can be applied to \(w_{i}\), in particular because \(P_i\) is empty, i.e., no rule r is activated to be applied in the ith derivation step or every activated rule is also blocked; in this case we take \(w_{i+1}=w_{i}\), provided there is still some rule activated to be applied in a later derivation step.

With this interpretation we see that A can be called the set of activating rule relations and B the set of blocking rule relations. The role of \(L_1\) is to get a derivation started by defining the rules to be applied in the first derivation step.
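In other words, the set of rules available in the ith derivation step (for \(i\ge 1\); for \(i=0\) the set \(L_1\) is taken) can be computed from the rules applied so far; a minimal sketch with our own encoding:

```python
def available_rules(applied_history, A, B, i):
    """Compute P_i from `applied_history`, where applied_history[j] is the set
    of labels of the rules applied in the jth derivation step (j < i), and
    A, B are the activation and blocking relations (subsets of L x L x N)."""
    activated = {r for (p, r, t) in A
                 for j, used in enumerate(applied_history)
                 if i - j == t and p in used}
    blocked = {r for (p, r, t) in B
               for j, used in enumerate(applied_history)
               if i - j == t and p in used}
    return activated - blocked
```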

In the same way as for the original model of P systems we can define the derivations in the AB-P system \(\varPi _{AB}\), now using the derivation relation \(\Longrightarrow _{\varPi _{AB}, \delta }\) instead of \(\Longrightarrow _{\varPi , \delta }\). As we are dealing with membrane systems, the classic output condition—in the generating case—is to only consider halting computations; yet we also want to allow that for a bounded number of steps no rule is applied, as long as some rules are still activated for future steps. Hence, for AB-P systems we consider several variants of halting conditions and output strategies.

4.1 Halting conditions

In Alhazov et al. (2013), the notion of halting with delay d is used to describe the situation that we allow the system to stay inactive (i.e., without applying a rule) for d steps before a computation is said to halt. We will also use this refinement of halting in this paper to specify how many steps we allow the system to go ahead without applying a rule.

Another natural condition is to take as results only those multisets which only consist of terminal symbols.

Hence, for the P systems considered in this paper we may specify the following derivation and output strategies:

  • halt: the only condition is that the system halts in the sense explained above, i.e., no rule is activated for future steps, which means that no rule will be applicable any more; the result is the multiset obtained at the end of such a halting computation (which in fact means that specifying the terminal alphabet is obsolete);

  • \((halt,d)\): the result is the multiset obtained at the end of a halting computation, but the additional condition is that in no successful derivation (i.e., a derivation yielding a result) more than d steps without applying a rule may occur; the special case \(d=0\) means that we do not allow a derivation step where no rule is applied (again specifying the terminal alphabet is obsolete);

  • term: the multiset obtained during a computation consists of terminal symbols only (yet the system need not have reached a halting configuration);

  • \((term,d)\): the multiset obtained during a computation consists of terminal symbols only (yet the system need not have reached a halting configuration), but, in addition, we require that in any successful derivation at most d steps without applying a rule may occur;

  • \((halt,term)\): both conditions must be fulfilled, i.e., the multiset obtained as a result at the end of a halting computation consists of terminal symbols only;

  • \((halt,term,d)\): all three conditions must be fulfilled, i.e., the multiset obtained as a result at the end of a halting computation consists of terminal symbols only, and in any successful derivation at most d steps without applying a rule may occur.

We may also write \((halt,*)\), \((term,*)\), and \((halt,term,*)\) instead of halt, term, and \((halt,term)\), respectively.

4.2 Result of computations

The set of all multisets obtained as results of computations in \(\varPi _{AB}\) by using the derivation mode \(\delta\) with the output being obtained by the output strategy \(\beta \in {\mathbb {D}}\),

$$\begin{aligned} {\mathbb {D}}= \{(halt,\alpha ),(term,\alpha ),(halt,term,\alpha ) \mid \alpha \in {\mathbb {N}}_0\cup \{*\}\} \end{aligned}$$

is denoted by \(Ps(\varPi _{AB},gen,\delta ,\beta )\), with gen indicating that \(\varPi _{AB}\) is considered as a generating device; if we are only interested in the number of symbols in the resulting multiset, the corresponding set of natural numbers is denoted by \(N(\varPi _{AB},gen,\delta ,\beta )\).

The families of sets of (k-dimensional) vectors of natural numbers and sets of natural numbers generated by AB-P systems using rules of type X in the derivation mode \(\delta\) and using the output strategy \(\beta\) are denoted by \(Ps(OP(X,AB),gen,\delta ,\beta )\) and \(N(OP(X,AB),gen,\delta ,\beta )\), respectively.

Also P systems with activation and blocking of rules can be considered as accepting mechanisms: we add the input multiset \(w_0\) to \(w_1\) in the initial configuration \(C_1=(w_1)\) thus obtaining \(C_1[w_0]=(w_1w_0)\); the input multiset \(w_0\) is accepted if there exists a halting computation in the derivation mode \(\delta\) starting from \(C_1[w_0]\). We point out that in the accepting case we only consider halting computations, but may also restrict the number of derivation steps where no rule can be applied, i.e., in total we consider the following halting strategies:

$$\begin{aligned} {\mathbb {D^{\prime }}}= \{(halt,\alpha ) \mid \alpha \in {\mathbb {N}}_0\cup \{*\}\} \end{aligned}$$

The families of sets of (k-dimensional) vectors of natural numbers and sets of natural numbers accepted by AB-P systems using rules of type X in the derivation mode \(\delta\) and using the halting strategy \(\beta\) are denoted by \(Ps(OP(X,AB),acc,\delta ,\beta )\) and \(N(OP(X,AB),acc,\delta ,\beta )\), respectively.

If the set B of blocking rules is empty, then the AB-P system is said to be a P system with activation of rules (an A-P system for short) of type X; the corresponding sets of multisets generated/accepted as well as the respective families of languages of multisets are denoted in the same way as for AB-P systems by just omitting the B.

Moreover, an AB-P system is called an AkBm-P system if for all \(\left( p,q,t\right) \in A\) we have \(t\in \{1,\dots ,k\}\), which means that the rules applied in one derivation step activate only the rules which are to be applied in the next k steps, and for all \(\left( p,q,t\right) \in B\) we have \(t\in \{1,\dots ,m\}\), which means that the rules applied in one derivation step can block only the rules which are to be applied in the next m steps. In the case of \(k=1\) or \(m=1\) we only write \(\left( p,q\right)\) instead of \(\left( p,q,1\right)\).

Remark 1

As we need to activate rules for further derivation steps to be able to have unbounded derivations, using only blocking of rules makes no sense. Yet if we implicitly activate all rules in every derivation step, we need only specify the rules to be blocked and in that way obtain a kind of B-P system. Such systems are just a special variant of AB-P systems: instead of only specifying the rules to be blocked in succeeding steps, the application of any rule can also activate all rules for the next \(k+1\) steps, where k is the maximum of the values t occurring in the blocking relations \((p,q,t)\in B\). Hence, in this paper we will not consider this variant further. \(\square\)

5 Results below PsRE

In this section we first give an illustrative example showing that with sequential A1-P systems we can get more than semilinear sets. Then we exhibit some relations between A1-P systems working in maximal derivation modes, ET0L-systems, and P systems with promoters working in maximal derivation modes.

5.1 More than semilinear

It is folklore that sequential P systems with non-cooperative rules (i.e., rules with exactly one symbol on their left-hand side) can only generate semilinear sets, i.e., PsREG. Our first example shows that using sequential A1-P systems (i.e., P systems in which the rules in a derivation step activate only the rules that may be applied in the next step) with non-cooperative rules we can already generate non-semilinear sets.

Example 1

The non-semilinear set \(L_1=\left\{ a^nb^m\mid 2\le n, \ 2\le m\le 2^{n-1}\right\}\) can be generated by a sequential A1-P system with non-cooperative rules (recall that this type of rules is abbreviated ncoo):

$$\begin{aligned} \varPi= & {} \left( V=\left\{ a,b,A,B\right\} ,T=\left\{ a,b\right\} ,\right. \\&\left. w=Ab, R, {\Longrightarrow }_{\varPi ,sequ }\right) ,\\ R= & {} \left\{ A\rightarrow a,b\rightarrow BB,A\rightarrow AA, B\rightarrow b\right\} ,\\ \varPi _{AB}= & {} \left( \varPi ,L,f_L,A,B=\emptyset ,L_1, \Longrightarrow _{\varPi _{AB},sequ }\right) ,\\ L= & {} \left\{ p_a,p_b,p_A,p_B\right\} ,\\ L_1= & {} \left\{ p_A\right\} ,\\ f_L= & {} \left\{ \left( p_a,A\rightarrow a\right) , \left( p_b,B\rightarrow b\right) , \left( p_A,A\rightarrow AA\right) ,\right. \\&\left. \left( p_B,b\rightarrow BB\right) \right\} ,\\ A= & {} \left\{ \left( p_a,p_a\right) , \left( p_b,p_a\right) , \left( p_b,p_b\right) , \left( p_b,p_A\right) , \left( p_A,p_B\right) ,\right. \\&\left. \left( p_B,p_b\right) , \left( p_B,p_B\right) \right\} . \end{aligned}$$

The set A of activating rule relations is graphically illustrated in the following figure which shows that this construction is rather similar to using graph control:

(figure a: the control graph given by the set A of activating rule relations)

With every addition of one symbol A we may at most double the current number of symbols b using the rules labeled \(p_B\) and \(p_b\). At some moment, instead of activating \(p_A\) by \(p_b\), we may switch to \(p_a\), after which only \(p_a\) can be applied any more, yielding a terminal multiset provided all symbols B have been derived to the terminal symbol b before switching from \(p_b\) to \(p_a\). In sum, we conclude

$$\begin{aligned} L_1\in Ps(OP(ncoo,A1),gen,sequ ,\beta ) \end{aligned}$$

for \(\beta \in \{(term,0),(halt,term,0)\}\).
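For instance, the terminal multiset \(a^{3}b^{4}\in L_1\) can be obtained by the following derivation (one of several possible ones; multisets are written as strings and the label of the rule applied in each step is given as a subscript of the arrow):

$$\begin{aligned} Ab&\Longrightarrow _{p_A}AAb\Longrightarrow _{p_B}AABB \Longrightarrow _{p_b}AABb\Longrightarrow _{p_b}AAbb \Longrightarrow _{p_A}AAAbb\\ &\Longrightarrow _{p_B}AAABBb\Longrightarrow _{p_B}AAABBBB \Longrightarrow _{p_b}\cdots \Longrightarrow _{p_b}AAAbbbb\\ &\Longrightarrow _{p_a}AAabbbb\Longrightarrow _{p_a}Aaabbbb \Longrightarrow _{p_a}aaabbbb=a^{3}b^{4}. \end{aligned}$$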

For a similar result with only halting computations we have to guarantee that the multiset obtained at the end of a halting computation is terminal, too. This can be achieved in different ways: in any case, we add an additional label \(q_B\) and add the activations \(\left( p_a,q_B\right)\) and \(\left( q_B,q_B\right)\) to A, which results in the following control diagram:

(figure b: the control diagram extended by the new label \(q_B\))

For the rule assigned to the new label \(q_B\) we may either take \(B\rightarrow b\), guaranteeing that all nonterminal symbols not already derived to the terminal symbol b can still be rewritten, or \(B\rightarrow B\), trapping the derivation in an infinite loop if and only if not all nonterminal symbols have already been derived to the terminal symbol b by \(p_b\). In both cases we obtain

$$\begin{aligned} L_1\in Ps(OP(ncoo,A1),gen,sequ ,\beta ) \end{aligned}$$

for \(\beta \in \{(halt,0),(halt,term,0)\}\), too. \(\square\)

In the following proofs we will simplify the notation for AB-P systems by writing labeled rules as p : r instead of first listing all rules r in the underlying P system \(\varPi\), then listing the labels p in \(\varPi _{AB}\), and finally defining the function \(f_L\) by listing all pairs \(\left( p,r\right)\). In this shorter way, the whole AB-P system can be written as

$$\begin{aligned} \varPi _{AB}=\left( V,T,w,R,A,B,L_1, \Longrightarrow _{\varPi _{AB},\delta }\right) \end{aligned}$$

with R already containing the labeled rules.

Theorem 1

For any\(\beta \in \{(halt,0),(term,0),(halt,term,0)\}\),

$$\begin{aligned} PsREG\subsetneq Ps(OP(ncoo,A1),gen,sequ ,\beta ). \end{aligned}$$

Proof

Let \(G=(V,T,P,S)\) be a regular grammar where V is the total alphabet, T the terminal alphabet, \(N:=V{\setminus } T\) the set of nonterminal symbols, P a finite set of regular rules over N and T, and \(S\in N\) the start symbol. We assume the rules in P to be of the forms \(A\rightarrow bC\) or \(A\rightarrow \lambda\) with \(A,C\in N\) and \(b\in T\). For all \(X\in N\), let \(P_X\) denote the set of all rules from P with X on the left-hand side. For each rule \(r=A\rightarrow bC\), let rhs(r) denote the nonterminal symbol C on the right-hand side of the rule.

The set of terminal multisets generated by G can be generated by a sequential A1-P system with non-cooperative rules:

$$\begin{aligned} \varPi _{A}= & {} \left( V,T,w,R,A,B=\emptyset ,L_1, \Longrightarrow _{\varPi _{A},sequ }\right) ,\\ w= & {} S,\\ R= & {} \left\{ p_r:r\mid r\in P\right\} \cup \left\{ p_X:X\rightarrow X\mid X\in N\right\} ,\\ L_1= & {} \left\{ p_r\mid r\in P_S\right\} ,\\ A= & {} \left\{ \left( p_r,p_q\right) \mid r\in P,\ q\in P_{rhs(r)} \right\} \\&\cup \left\{ \left( p_r,p_X\right) \mid r\in P,\ X\in N \right\} \cup \left\{ \left( p_X,p_X\right) \mid X\in N \right\} . \end{aligned}$$

The application of a labeled rule \(p_r:r\) activates, via the activation relations \(\left( p_r,p_q\right) , \ q\in P_{rhs(r)}\), exactly those rules for the next step of the derivation which have the nonterminal symbol on the right-hand side of r as their left-hand side. If r is of the form \(A\rightarrow \lambda\), then \(P_{rhs(r)}\) is empty, the result is terminal, and the derivation must halt, as also the rules \(p_X:X\rightarrow X,\ X\in N\), activated by \(\left( p_r,p_X\right)\), cannot be applied. Hence, we infer

$$\begin{aligned} Ps(L(G))=Ps(\varPi _{A},gen,sequ,\beta ) \end{aligned}$$

for \(\beta \in \{(halt,0),(term,0),(halt,term,0)\}\).

On the other hand, the rules \(p_X:X\rightarrow X,\ X\in N\) and the related activation relations are only needed for the output strategy (halt, 0), but they guarantee that a derivation cannot halt as long as the actual multiset is not terminal. Moreover, these rules and the related activation relations can always be omitted if—without loss of generality—we assume that for every nonterminal symbol A there is at least one rule \(A\rightarrow bC\) or \(A\rightarrow \lambda\) in P.

The strictness of the inclusion follows from Example 1, which observation completes the proof. \(\square\)

Using the maximally parallel derivation mode, we can at least simulate ET0L-systems:

Theorem 2

For any\(\beta \in \{(halt,0),(term,0),(halt,term,0)\}\),

$$\begin{aligned} PsET0L\subseteq Ps(OP(ncoo,A1),gen,max,\beta ) . \end{aligned}$$

Proof

The main idea of constructing an equivalent A1-P system \(\varPi _{A}\) for a given ET0L-system \(G=(V,T,R_1,\dots ,R_m,w)\) follows the idea of the proof for P systems with states, see Alhazov et al. (2015): we use new symbols \(t_k\) representing the m tables \(R_k\), \(1\le k\le m\), of the ET0L-system to be simulated. Using a rule \(t_{i,j}:t_i\rightarrow t_j\) then indicates that after the application of table \(R_i\) the table \(R_j\) is to be used; hence, all rules in \(R_j\) as well as all rules \(t_{j,k}:t_j\rightarrow t_k\) for all k and the final rule \(t_{j,e}:t_j\rightarrow \lambda\) are activated by corresponding rule relations in A. The final rules \(t_{j,e}:t_j\rightarrow \lambda\) do not activate any rule, which means that after having applied this rule the computation in the A1-P system \(\varPi _{A}\) ends.

In order to start correctly, we use an initial symbol \(t_0\) and define

$$\begin{aligned} L_1=\left\{ t_{0,k}:t_0\rightarrow t_k\mid 1\le k\le m\right\} \end{aligned}$$

which allows us to activate the rules for simulating any table \(R_k\).

As in the proof of Theorem 1, the construction described so far only works for the output strategies \(\beta \in \{(term,0),(halt,term,0)\}\); for \(\beta =(halt,0)\) we again need the additional labeled rules \(p_X:X\rightarrow X,\ X\in V{\setminus } T\), which are to be activated by the final rules \(t_{j,e}:t_j\rightarrow \lambda\) in order to trap the system in an infinite loop if a final rule is applied when the current multiset is not yet terminal. In any case, the derivations in G using the tables \(R_i\) are simulated correctly by the A1-P system \(\varPi _{A}\) using the adequate rule activations, and the terminal results obtained by the ET0L-system G interpreted as multisets are the same as the multisets obtained by the A1-P system \(\varPi _{A}\). Hence, we conclude

$$\begin{aligned} Ps(L(G))=Ps(\varPi _{A},gen,max,\beta ) \end{aligned}$$

for \(\beta \in \{(halt,0),(term,0),(halt,term,0)\}\). \(\square\)
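The construction described in the proof above can be summarized programmatically as follows (a sketch using our own label encoding, covering the output strategies \((term,0)\) and \((halt,term,0)\), i.e., without the additional trap rules needed for \((halt,0)\)):

```python
def et0l_to_a1(tables):
    """Translate an ET0L-system with tables R_1,...,R_m (each a list of
    (lhs, rhs) pairs) into labeled rules, activation relations and L_1 of an
    A1-P system, following the proof of Theorem 2; the initial multiset is
    the axiom w together with one additional symbol t0."""
    m = len(tables)
    R, A = [], []
    table_labels = {j: [f"r_{j}_{i}" for i in range(len(tables[j - 1]))]
                    for j in range(1, m + 1)}
    for j in range(1, m + 1):
        for lab, (lhs, rhs) in zip(table_labels[j], tables[j - 1]):
            R.append((lab, lhs, rhs))
    for i in range(0, m + 1):
        for j in range(1, m + 1):
            # t_{i,j} : t_i -> t_j switches from table R_i to table R_j and
            # activates R_j as well as all switching/final rules for t_j
            R.append((f"t_{i}_{j}", f"t{i}", f"t{j}"))
            A += [(f"t_{i}_{j}", lab, 1) for lab in table_labels[j]]
            A += [(f"t_{i}_{j}", f"t_{j}_{k}", 1) for k in range(1, m + 1)]
            A += [(f"t_{i}_{j}", f"t_{j}_e", 1)]
        if i >= 1:
            R.append((f"t_{i}_e", f"t{i}", ""))   # t_i -> λ, activates nothing
    L1 = [f"t_0_{j}" for j in range(1, m + 1)]
    return R, A, L1
```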

Obviously, the same constructions as described in the proof above also work if we use the maximal number of rules or the maximal number of objects; hence, we immediately obtain the following results:

Corollary 1

For any\(\delta \in \{max,max_{rules},max_{objects}\}\),

$$\begin{aligned} PsET0L\subseteq Ps(OP(ncoo,A1),gen,\delta ,\beta ) \end{aligned}$$

for any\(\beta \in \{(halt,0),(term,0),(halt,term,0)\}\).

The additional control symbols \(t_i\) used in the proof of Theorem 2 can also be interpreted as promoters, an idea that is essential for proving the following result:

Theorem 3

For any\(\delta \in \{max,max_{rules},max_{objects}\}\),

$$\begin{aligned}&Ps(OP(ncoo,A1),gen,\delta ,(halt,term,0) ) \\&\quad \subseteq Ps(OP(ncoo,pro_{1,1}),gen,\delta ). \end{aligned}$$

Proof

Let \(\varPi _A=(V,T,w,R,A,L_1,\Longrightarrow _{\varPi _A,\delta })\) be an arbitrary A1-P system. We construct an equivalent P system

$$\begin{aligned} \varPi _{pro}=\left( V^{\prime },T,w_1,R^{\prime }, {\Longrightarrow }_{\varPi _{pro},\delta }\right) \end{aligned}$$

with atomic promoters, i.e., a rule in \(R^{\prime }\) is of the form \((a\rightarrow u,\{b\},\emptyset )\) with \(a,b\in V^{\prime }\) and \(u\in V^{\prime *}\), which we will write as \(a\rightarrow u|_b\).

$$\begin{aligned} V^{\prime }= & {} V\cup \{p_r\mid r\in R\},\\ w_1= & {} w\prod _{r\in L_1}p_r,\\ R^{\prime }= & {} \{a\rightarrow u\prod _{(r,q)\in A}p_q|_{p_r}\mid r:a\rightarrow u\in R\}\\&\cup \{p_r\rightarrow \lambda \mid r\in R\}. \end{aligned}$$
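In programmatic form, this construction can be sketched as follows (our own encoding; rules and multisets are represented as lists of symbols):

```python
def a1_to_promoters(rules, A, L1, w):
    """Sketch of the construction above: `rules` are the labeled rules
    (label, lhs, rhs) of the A1-P system with rhs a list of symbols, A is the
    set of activation pairs (label, label), L1 the labels usable in the first
    step, and w the initial multiset as a list of symbols. Returns the rules
    of the promoter P system as (lhs, rhs, promoter) and its initial multiset."""
    def prom(label):
        return ("p", label)                # the promoter symbol p_label
    new_rules = []
    for (label, lhs, rhs) in rules:
        produced = [prom(q) for (r, q) in A if r == label]
        new_rules.append((lhs, list(rhs) + produced, prom(label)))
    # the promoter symbols of one step are erased again in the next step
    new_rules += [(prom(label), [], None) for (label, _, _) in rules]
    w1 = list(w) + [prom(r) for r in L1]
    return new_rules, w1
```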

The additional work of \(\varPi _{pro}\) consists of keeping track of which rules are allowed to be executed in the next step by generating the corresponding promoters \(p_q\), which are eliminated again in the next derivation step. Both systems stop the derivation when no rule is applicable any more, and only terminal results are extracted. Hence, we conclude

$$\begin{aligned} Ps(\varPi _{A},gen,\delta ,(halt,term,0) )=Ps(\varPi _{pro},gen,\delta ) \end{aligned}$$

for any \(\delta \in \{max,max_{rules},max_{objects}\}\). \(\square\)

Putting together the results of Theorem 3 and Corollary 1 we obtain the following result, already shown in Sburlan (2006) for \(\delta =max\).

Corollary 2

For any\(\delta \in \{max,max_{rules},max_{objects}\}\),

$$\begin{aligned} PsET0L \subseteq Ps(OP(ncoo,pro_{1,1}),gen,\delta ). \end{aligned}$$

The construction given in the proof of Theorem 3 can be extended from the simulation of the activation of rules by atomic promoters to the simulation of the activation and blocking of rules by atomic promoters and inhibitors, provided we still require the system to allow neither delays nor a look-ahead of more than one step:

Theorem 4

For any \(\delta \in \{max,max_{rules},max_{objects}\},\)

$$\begin{aligned}&Ps(OP(ncoo,A1B1),gen,\delta ,(halt,term,0) ) \\&\quad \subseteq Ps(OP(ncoo,pro_{1,1},inh_{1,1}),gen,\delta ). \end{aligned}$$

Proof

Let \(\varPi _{AB}=(V,T,w,R,A,B,L_1,\Longrightarrow _{\varPi _{AB},\delta })\) be an arbitrary AB-P system without delays (which is reflected by the output strategy (halt, term, 0)) and with a look-ahead of at most one, i.e., activations and blockings of rules are just for the next step. We construct an equivalent P system

$$\begin{aligned} \varPi =\left( V^{\prime },T,w_1,R^{\prime }, {\Longrightarrow }_{\varPi ,\delta }\right) \end{aligned}$$

with atomic promoters and inhibitors, i.e., a rule in \(R^{\prime }\) is of the form \((a\rightarrow u,\{b\},\{c\})\) with \(a,b,c\in V^{\prime }\) and \(u\in V^{\prime *}\), which we will write as \(a\rightarrow u|_{b,\lnot c}\).

For every labeled rule \(p_r:r\) we use two control symbols, the promoter \(p_r\) and the inhibitor \(\bar{p_r}\), which are eliminated again in the next step. As a technicality we observe that the presence of both the promoter \(p_r\) and the inhibitor \(\bar{p_r}\) prevents the rule r from being applied.

$$\begin{aligned} V^{\prime }= & {} V\cup \{p_r,\bar{p_r}\mid r\in R\},\\ w_1= & {} w\prod _{r\in L_1}p_r,\\ R^{\prime }= & {} \left\{ a\rightarrow u\prod _{(r,q)\in A}p_q \prod _{(r,q)\in B}\bar{p_q} \ |_{\ p_r,\lnot \bar{p_r}} \mid r:a\rightarrow u\in R\right\} \\&\cup \{p_r\rightarrow \lambda , \bar{p_r}\rightarrow \lambda \mid r\in R\}. \end{aligned}$$

\(\varPi\) keeps track of which rules are allowed or prohibited to be executed in the next step by generating the corresponding promoters \(p_q\) and inhibitors \(\bar{p_q}\), which are eliminated again in the next derivation step. Both systems stop the derivation when no rule is applicable any more, and only terminal results are extracted. Hence, we conclude

$$\begin{aligned} Ps(\varPi _{AB},gen,\delta ,(halt,term,0) )=Ps(\varPi ,gen,\delta ) \end{aligned}$$

for any \(\delta \in \{max,max_{rules},max_{objects}\}\). \(\square\)

If we allow arbitrary look-aheads, the construction of the simulating P system with promoters and inhibitors becomes more involved than that given in the proof of Theorem 4, as we have to check that the AB-system does not have delays; therefore, in the following proof we will need two promoters of weight one or, equivalently, one promoter of weight two:

Theorem 5

For any\(\delta \in \{max,max_{rules},max_{objects}\}\),

$$\begin{aligned}&Ps(OP(ncoo,AB),gen,\delta ,(halt,term,0) ) \\&\quad \subseteq Ps(OP(ncoo,pro_{2,1},inh_{1,1}),gen,\delta ) \end{aligned}$$

and

$$\begin{aligned}&Ps(OP(ncoo,AB),gen,\delta ,(halt,term,0) ) \\&\quad \subseteq Ps(OP(ncoo,pro_{1,2},inh_{1,1}),gen,\delta ). \end{aligned}$$

Proof

Let \(\varPi _{AB}=(V,T,w,R,A,B,L_1,\Longrightarrow _{\varPi _{AB},\delta })\) be an arbitrary AB-P system without delays (which is reflected by the output strategy (halt, term, 0)). We construct an equivalent P system

$$\begin{aligned} \varPi =\left( V^{\prime },T,w_1,R^{\prime }, {\Longrightarrow }_{\varPi ,\delta }\right) \end{aligned}$$

with promoters and inhibitors of type \((ncoo,pro_{2,1},inh_{1,1})\), i.e., a rule in \(R^{\prime }\) is of the form \((a\rightarrow u,\{d,b\},\{c\})\) with \(a,b,c\in V^{\prime }\), \(d\in V^{\prime }\cup \{ \lambda \}\), and \(u\in V^{\prime *}\), which we will write as \(a\rightarrow u|_{d,b,\lnot c}\).

Let \(k=\max \{t\mid (p,q,t)\in A\cup B\}\) be the maximal look-ahead. Then for every labeled rule \(p_r:r\) we use control symbols \([p_r,t],[\bar{p_r},t]\), \(0\le t\le k-1\), which count down t to zero to let them become active as promoter (\([p_r,0]\)) or inhibitor (\([\bar{p_r},0]\)), respectively, before these promoters and inhibitors are eliminated again. As a technicality we observe that the presence of both promoter \([p_r,0]\) and inhibitor \([\bar{p_r},0]\) prevents the rule r from being applied. The additional symbol d is only generated if a rule from R is simulated; the necessity of its presence in order to apply a rule in \(\varPi\) guarantees that only derivations in \(\varPi _{AB}\) without delay are simulated in \(\varPi\).

$$\begin{aligned} V^{\prime }= & {} V\cup \{d\} \cup \{[p_r,t],[\bar{p_r},t]\mid r\in R,0\le t\le k-1\},\\ w_1= & {} wd\prod _{r\in L_1}[p_r,0],\\ R^{\prime }= & {} \left\{ a\rightarrow du\prod _{(r,q,t)\in A}[p_q,t-1]\right. \\&\left. \prod _{(r,q,t)\in B}[\bar{p_q},t-1] \ |_{\ d,[p_r,0],\lnot [\bar{p_r},0]} \mid r:a\rightarrow u\in R\right\} \\&\cup \{[p_r,t]\rightarrow [p_r,t-1] \ |_{\ d},\ [\bar{p_r},t]\\&\rightarrow [\bar{p_r},t-1] \ |_{\ d} \mid r\in R, \ k-1\ge t>0\}\\&\cup \{[p_r,0]\rightarrow \lambda \ |_{\ d},[\bar{p_r},0]\\&\rightarrow \lambda \ |_{\ d} \mid r\in R\} \cup \{d\rightarrow \lambda \}. \end{aligned}$$

Each pair of the two atomic promoters \(d,[p_r,0]\) can be replaced by the corresponding singleton promoter \(d[p_r,0]\) of weight two, which observation completes the proof. \(\square\)

6 Computational completeness results

In this section we show that several simple variants of P systems become computationally complete when using the control of activation and blocking of rules, provided we allow delays of at most one, i.e., after one derivation step without applying a rule, a rule has to be applied in the next step for the derivation to continue.

6.1 Sequential P systems with non-cooperative rules

Theorem 6

For any\(\beta \in \{(halt,1),(term,1),(halt,term,1)\}\),

$$\begin{aligned} PsRE=Ps(OP(ncoo,A2B1),gen,sequ ,\beta ). \end{aligned}$$

Moreover,

$$\begin{aligned} PsRE=Ps(OP(ncoo,A2B1),acc,sequ ,(halt,1) ). \end{aligned}$$

Proof

The proof idea is to show how to simulate register machines. For a given register machine \(M=\left( n,H,R_M,p_{0},h\right)\) we construct an equivalent AB-P system

$$\begin{aligned} \varPi _{AB}=\left( V,T,w,R,A,B,L_1, \Longrightarrow _{\varPi _{AB},sequ }\right) \end{aligned}$$

in the following way: For every label \(p\in H{\setminus } \left\{ h\right\}\) we use labels \(\left\{ l_p,\bar{l}_p,\tilde{l}_p\right\}\) for an ADD-instruction and labels \(\left\{ l_p,l^{\prime }_p,\hat{l}_p,\tilde{l}_p,\bar{l}_p\right\}\) for a SUB-instruction; for the final instruction \(h:{\mathtt {HALT}}\) we only use the rule \(l_h:h\rightarrow \lambda\). For any p, we also use the symbols \(p, p^{\prime }\), and the contents of each register r is represented by the number of symbols \(a_r\) in (the configurations of) \(\varPi _{AB}\). The starting rule is given by \(L_1=\left\{ l_{p_0}\right\}\), and we start with \(w=p_0\).

An ADD-instruction \(p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right)\) is simulated by the following labeled rules in R and rule relations in A:

  1. first step: \(l_p:p\rightarrow p^{\prime }a_r\) and \((l_p,\bar{l}_p), (l_p,\tilde{l}_p) \in A\);

  2. second step: \(\bar{l}_p:p^{\prime }\rightarrow q, \tilde{l}_p:p^{\prime }\rightarrow s\) and \((\bar{l}_p,l_q), (\tilde{l}_p,l_s) \in A\).

A SUB-instruction \(p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right)\) is simulated by the following labeled rules in R and rule relations in A and B:

  1. first step: \(l_p:p\rightarrow p^{\prime }\) and \((l_p,\hat{l}_p), (l_p,\tilde{l}_p,2) \in A\);

  2. second step: \(\hat{l}_p:a_r \rightarrow \lambda\) and \((\hat{l}_p,\bar{l}_p) \in A\), \((\hat{l}_p,\tilde{l}_p) \in B\);

  3. third step: \(\bar{l}_p:p^{\prime }\rightarrow q, \tilde{l}_p:p^{\prime }\rightarrow s\) and \((\bar{l}_p,l_q), (\tilde{l}_p,l_s) \in A\).

In sum, we obtain the AB-P system \(\varPi _{AB}\) as follows:

$$\begin{aligned} \varPi _{AB}= & {} \left( V,T,w,R,A,B,L_1, \Longrightarrow _{\varPi _{AB},sequ }\right) \\ V= & {} \left\{ p,p^{\prime } \mid p\in H{\setminus } \left\{ h\right\} \right\} \cup \left\{ h\right\} \cup \left\{ a_r \mid 1\le r\le n\right\} ,\\ w= & {} p_0,\\ L_1= & {} \left\{ l_{p_0}\right\} ,\\ R= & {} \left\{ l_p:p\rightarrow p^{\prime }a_r,\ \bar{l}_p:p^{\prime }\rightarrow q, \ \tilde{l}_p:p^{\prime }\rightarrow s \mid p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right) \in R_M \right\} \\&\cup \left\{ l_p:p\rightarrow p^{\prime }, \ \hat{l}_p:a_r \rightarrow \lambda , \ \bar{l}_p:p^{\prime }\rightarrow q, \ \tilde{l}_p:p^{\prime }\rightarrow s \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M \right\} \\&\cup \left\{ l_h:h\rightarrow \lambda \right\} ,\\ A= & {} \left\{ (l_p,\bar{l}_p),\ (l_p,\tilde{l}_p), \ (\bar{l}_p,l_q), (\tilde{l}_p,l_s) \mid p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right) \in R_M \right\} \\&\cup \left\{ (l_p,\hat{l}_p), \ (l_p,\tilde{l}_p,2), \ (\hat{l}_p,\bar{l}_p), \ (\bar{l}_p,l_q), \ (\tilde{l}_p,l_s) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M \right\} ,\\ B= & {} \left\{ (\hat{l}_p,\tilde{l}_p) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M \right\} . \end{aligned}$$
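The following sketch (with our own label and symbol encoding, not part of the original construction) assembles these rules and relations from a given register machine program:

```python
def register_machine_to_ab(instructions, p0):
    """Build the labeled rules R and the relations A, B of the AB-P system of
    Theorem 6 from a register machine program. `instructions` maps a label p
    to ('ADD', r, q, s), ('SUB', r, q, s) or ('HALT',); rules are returned as
    (label, lhs, rhs) with rhs a tuple of symbols, relations as (label, label, t)."""
    R, A, B = [], [], []
    for p, instr in instructions.items():
        if instr[0] == 'ADD':
            _, r, q, s = instr
            R += [(f"l_{p}", p, (f"{p}'", f"a_{r}")),
                  (f"lb_{p}", f"{p}'", (q,)),        # lb = l-bar, lt = l-tilde
                  (f"lt_{p}", f"{p}'", (s,))]
            A += [(f"l_{p}", f"lb_{p}", 1), (f"l_{p}", f"lt_{p}", 1),
                  (f"lb_{p}", f"l_{q}", 1), (f"lt_{p}", f"l_{s}", 1)]
        elif instr[0] == 'SUB':
            _, r, q, s = instr
            R += [(f"l_{p}", p, (f"{p}'",)),
                  (f"lh_{p}", f"a_{r}", ()),         # lh = l-hat, a_r -> λ
                  (f"lb_{p}", f"{p}'", (q,)),
                  (f"lt_{p}", f"{p}'", (s,))]
            A += [(f"l_{p}", f"lh_{p}", 1), (f"l_{p}", f"lt_{p}", 2),
                  (f"lh_{p}", f"lb_{p}", 1),
                  (f"lb_{p}", f"l_{q}", 1), (f"lt_{p}", f"l_{s}", 1)]
            B += [(f"lh_{p}", f"lt_{p}", 1)]
        else:                                        # HALT
            R += [(f"l_{p}", p, ())]                 # h -> λ
    return R, A, B, [f"l_{p0}"]
```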

If the rule \(\hat{l}_p:a_r \rightarrow \lambda\) can be applied in the second step, it activates \(\bar{l}_p\) and at the same time blocks \(\tilde{l}_p\), which has been activated in the first simulation step and thus will be applied if the register is empty, i.e., if \(\hat{l}_p\) cannot be applied. Only the second step of the simulation of a SUB-instruction can be a derivation step where no rule is applied, but afterwards, in any case, a rule will be applied again in \(\varPi _{AB}\), which guarantees that the delay is at most one.

Hence, in the accepting case, the halting strategy (halt, 1) is already sufficient. In the generating case, delay one is enough as well; the special features of the underlying register machine guarantee that the AB-P system \(\varPi _{AB}\) yields a terminal multiset whenever it halts, and, on the other hand, the multiset can only become terminal when the final rule \(l_h:h\rightarrow \lambda\) is applied, which terminates the derivation in \(\varPi _{AB}\). \(\square\)
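To make the bookkeeping of activations and blockings in this construction more concrete, the following Python sketch (not part of the formal proof; all identifiers are our own ad-hoc ASCII names, e.g., `hat_l_p` for \(\hat{l}_p\)) simulates the sequential derivation of the SUB-instruction gadget given above on a multiset represented as a `Counter`, once for a non-empty and once for an empty register r.

```python
from collections import Counter, defaultdict

# Rules of the SUB-gadget p:(SUB(r),q,s): a single left-hand-side symbol
# is rewritten into a multiset of symbols.
rules = {
    "l_p":     ("p",   Counter({"p'": 1})),
    "hat_l_p": ("a_r", Counter()),           # a_r -> lambda (decrement)
    "bar_l_p": ("p'",  Counter({"q": 1})),   # decrement was possible
    "til_l_p": ("p'",  Counter({"s": 1})),   # zero test
}
# activation and blocking relations (source label, target label, time offset)
A = [("l_p", "hat_l_p", 1), ("l_p", "til_l_p", 2), ("hat_l_p", "bar_l_p", 1)]
B = [("hat_l_p", "til_l_p", 1)]

def run(config, start_labels, max_delay=1, max_steps=10):
    """Sequential derivation with halting strategy (halt, max_delay)."""
    activated, blocked = defaultdict(set), defaultdict(set)
    activated[1] |= set(start_labels)
    step, idle = 1, 0
    while step <= max_steps:
        applicable = [lab for lab in activated[step] - blocked[step]
                      if config[rules[lab][0]] > 0]
        if applicable:
            idle = 0
            lab = applicable[0]           # here at most one rule ever fits
            lhs, rhs = rules[lab]
            config[lhs] -= 1
            config = config + rhs         # '+' drops zero multiplicities
            for src, tgt, t in A:
                if src == lab:
                    activated[step + t].add(tgt)
            for src, tgt, t in B:
                if src == lab:
                    blocked[step + t].add(tgt)
        else:
            idle += 1
            future = any(activated[t] for t in range(step + 1, max_steps + 1))
            if idle > max_delay or not future:
                break
        step += 1
    return config

print(run(Counter({"p": 1, "a_r": 1}), ["l_p"]))  # -> Counter({'q': 1})
print(run(Counter({"p": 1}), ["l_p"]))            # -> Counter({'s': 1})
```

In the second call the register is empty, so no rule is applied in the second step and the zero-test rule \(\tilde{l}_p\) fires in the third step, exactly as argued above.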

6.2 P systems working in set-maximally parallel derivation modes

Corollary 3

For any \(\delta \in \left\{ smax, smax_{rules}, smax_{objects}\right\}\), we have the following results:

$$\begin{aligned} PsRE=Ps(OP(ncoo,A2B1),gen,\delta ,\beta ) \end{aligned}$$

for any \(\beta \in \{(halt,1),(term,1),(halt,term,1)\}\). Moreover,

$$\begin{aligned} PsRE=Ps(OP(ncoo,A2B1),acc,\delta ,(halt,1) ). \end{aligned}$$

Proof

We can use exactly the same construction as in the proof of Theorem 6. The set mode guarantees that the rule \(\hat{l}_p:a_r \rightarrow \lambda\) in the second step of the simulation of a SUB-instruction can be applied only once. The other arguments remain exactly the same as in that proof. \(\square\)

6.3 (Purely) Catalytic P systems working in maximally parallel derivation modes

A typical variant of rules in P systems is given by the so-called catalytic rules of the form \(ca\rightarrow cv\), where c is a catalyst, i.e., a symbol which never evolves itself, but helps another symbol a to evolve into a multiset v. The type of P systems using only catalytic rules is called purely catalytic (abbreviated pcat); if both catalytic rules and non-cooperative rules are allowed, we speak of a catalytic P system (abbreviated cat). In the description of the families of sets of multisets generated/accepted by such (purely) catalytic P systems, the maximal number of catalysts to be used is indicated as a subscript, i.e., we write \(pcat_n\) and \(cat_n\).

The following result is then a consequence of the preceding proofs; we emphasize the important fact that even in the purely catalytic case only one catalyst is needed. The bracket notation [p]cat indicates that the type is either cat or pcat.

Corollary 4

For any \(\delta \in \left\{ max, max_{rules}, max_{objects}\right\}\), we have the following results:

$$\begin{aligned} PsRE=Ps(OP([p]cat_1,A2B1),gen,\delta ,\beta ) \end{aligned}$$

for any \(\beta \in \{(halt,1),(term,1),(halt,term,1)\}\). Moreover,

$$\begin{aligned} PsRE=Ps(OP([p]cat_1,A2B1),acc,\delta ,(halt,1) ). \end{aligned}$$

Proof

Looking carefully into the proof of Theorem 6, we see that the only rules where the set mode would be needed are those of the form \(\hat{l}_p:a_r \rightarrow \lambda\). Using just one catalyst c, we can use the rules \(\hat{l}_p:ca_r \rightarrow c\) instead. The remaining details of the proof of Theorem 6 can remain as they are for the catalytic case.

For the purely catalytic case, we observe the astonishing fact that, as we are following the construction of a sequential P system, the same catalyst c can also be used for all the other rules; for example, we take \(l_p:cp\rightarrow cp^{\prime }\) instead of \(l_p:p\rightarrow p^{\prime }\). These observations complete the proof. \(\square\)
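As a purely illustrative aside (not part of the proof; the encoding of multisets as strings of symbols and all names are ours), the following minimal sketch shows the transformation used here: every non-cooperative rule \(u\rightarrow v\) is prefixed with the single catalyst c, yielding the purely catalytic rule \(cu\rightarrow cv\), so that in a maximally parallel step still at most one rule can be applied.

```python
def catalytify(rules, catalyst="c"):
    # rules: dict label -> (lhs_symbol, rhs_string); returns purely
    # catalytic rules label -> (lhs_string, rhs_string)
    return {lab: (catalyst + lhs, catalyst + rhs)
            for lab, (lhs, rhs) in rules.items()}

print(catalytify({"l_p": ("p", "p'"), "hat_l_p": ("a_r", "")}))
# {'l_p': ('cp', "cp'"), 'hat_l_p': ('ca_r', 'c')}
```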

6.4 P systems with insertion and deletion rules

In this section we return to a sequential simulation of register machines using insertion and deletion rules; yet it turns out that the same construction also works for the set-maximally parallel derivation modes, as we have already seen in the case of non-cooperative rules. The type of rules with insertion and deletion of singleton symbols is denoted by ID.

Theorem 7

For any \(\delta \in \left\{ sequ,smax, smax_{rules}, smax_{objects}\right\}\), we have the following results:

$$\begin{aligned} PsRE=Ps(OP(ID,A2B1),gen,\delta ,\beta ) \end{aligned}$$

for any \(\beta \in \{(halt,1),(term,1),(halt,term,1)\}\). Moreover,

$$\begin{aligned} PsRE=Ps(OP(ID,A2B1),acc,\delta ,(halt,1) ). \end{aligned}$$

Proof

The proof idea again is to show how to simulate register machines. For a given register machine \(M=\left( n,H,R_M,p_{0},h\right)\) we construct an equivalent AB-P system

$$\begin{aligned} \varPi _{AB}=\left( V,T,w,R,A,B,L_1, \Longrightarrow _{\varPi _{AB},\delta }\right) \end{aligned}$$

with insertion and deletion rules; as symbols we only need \(a_r\), \(1\le r\le n\), for representing the contents of register r. We mention that the HALT-instruction is never activated, i.e., reaching the final label h corresponds to no rule being activated any more.

$$\begin{aligned} \varPi _{AB}= & {} \left( V,T,w,R,A,B,L_1, \Longrightarrow _{\varPi _{AB},\delta }\right) \\ V= & {} \left\{ a_r \mid 1\le r\le n\right\} ,\\ w= & {} \lambda ,\\ L_1= & {} \left\{ l_{p_0}\right\} ,\\ R= & {} \left\{ l_p:I(a_r) \mid p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right) \in R_M \right\} \\&\cup \left\{ l_p:I(a_r), \ \hat{l}_p:D(a_r), \ \tilde{l}_p:D(a_r) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M \right\} ,\\ A= & {} \left\{ (l_p,l_q) \mid p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right) \in R_M , \ q\ne h \right\} \\&\cup \left\{ (l_p,l_s) \mid p:\left( {\mathtt {ADD}}\left( r\right) ,q,s\right) \in R_M , \ s \ne h \right\} \\&\cup \left\{ (l_p,\hat{l}_p), (\hat{l}_p,\tilde{l}_p) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M \right\} \\&\cup \left\{ (\hat{l}_p,l_s,2) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M, \ s \ne h \right\} \\&\cup \left\{ (\tilde{l}_p,l_q) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M, \ q\ne h \right\} ,\\ B= & {} \left\{ (\tilde{l}_p,l_s) \mid p:\left( {\mathtt {SUB}}\left( r\right) ,q,s\right) \in R_M, \ s \ne h \right\} . \end{aligned}$$

The simulation of an ADD-instruction needs only one step, inserting one register symbol \(a_r\) by the rule \(l_p:I(a_r)\) and then branching to \(l_q\) or \(l_s\) in the non-deterministic case or just activating \(l_q\) in the deterministic case.

The simulation of a SUB-instruction starts with two steps which insert and immediately afterwards delete a register symbol \(a_r\), with the only purpose of allowing the AB-P system \(\varPi _{AB}\) to activate both the decrement case and the zero-test case via \((\hat{l}_p,\tilde{l}_p)\in A\) and \((\hat{l}_p,l_s,2)\in A\), respectively. If the rule \(\tilde{l}_p:D(a_r)\) can be applied in the third step, it activates \(l_q\) and at the same time blocks \(l_s\), which has been activated in the previous simulation step and thus will be applied if the register is empty, i.e., if \(\tilde{l}_p\) cannot be applied.

In the AB-P system defined above, only the third step of the simulation of a SUB-instruction can be a derivation step where no rule is applied, but afterwards, in any case, a rule will be applied again in \(\varPi _{AB}\), which guarantees that the delay is at most one.

Hence, in the accepting case, the halting strategy (halt, 1) is already sufficient. In the generating case, delay one is enough as well; the special features of the underlying register machine guarantee that the AB-P system \(\varPi _{AB}\) yields a terminal multiset whenever it halts, and, on the other hand, the multiset can only become terminal when no rule is activated any more, i.e., when M has reached the final label h, which also terminates the derivation in \(\varPi _{AB}\). \(\square\)
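The construction above is completely mechanical; the following Python sketch (our own encoding, with register machine instructions given as tuples and our own ASCII label names) generates the rule set R and the relations A and B from a register machine program, an offset of 1 corresponding to the plain pairs in the definition above.

```python
# Instructions are encoded as ("ADD", p, r, q, s) and ("SUB", p, r, q, s);
# h is the final label.
def build_ab_system(program, h):
    R, A, B = {}, set(), set()
    for kind, p, r, q, s in program:
        if kind == "ADD":
            R[f"l_{p}"] = ("I", f"a_{r}")
            if q != h:
                A.add((f"l_{p}", f"l_{q}", 1))
            if s != h:
                A.add((f"l_{p}", f"l_{s}", 1))
        else:  # SUB
            R[f"l_{p}"] = ("I", f"a_{r}")
            R[f"hat_l_{p}"] = ("D", f"a_{r}")
            R[f"til_l_{p}"] = ("D", f"a_{r}")
            A.add((f"l_{p}", f"hat_l_{p}", 1))
            A.add((f"hat_l_{p}", f"til_l_{p}", 1))
            if s != h:
                A.add((f"hat_l_{p}", f"l_{s}", 2))
                B.add((f"til_l_{p}", f"l_{s}", 1))
            if q != h:
                A.add((f"til_l_{p}", f"l_{q}", 1))
    return R, A, B

# p0: ADD(1), then go to p1;  p1: SUB(1) -> p1 (decrement) / h (zero test)
R, A, B = build_ab_system([("ADD", "p0", 1, "p1", "p1"),
                           ("SUB", "p1", 1, "p1", "h")], h="h")
print(R)
print(sorted(A))
print(B)
```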

Remark 2

We emphasize that the construction in the preceding proof uses the minimal number of symbols, i.e., one symbol for each register. The whole control is carried out solely by the activation and blocking of rules. \(\square\)

Remark 3

We finally observe that all proof constructions given in this section provide deterministic simulations of deterministic register machines, which is an important feature for the discussions in Sect. 8. \(\square\)

7 P systems using backwards activation and blocking of rules

The definition of AB-P systems given in Sect. 4 can be extended by allowing the relations in A and B to be of the form \(\left( r_j,r,t\right)\) with t possibly also being a negative integer. In that way rules can be activated or blocked in previous steps.

A conservative semantics for this extension is to call a derivation

$$\begin{aligned} w_{0}\Longrightarrow _{\varPi _{AB}, \delta } w_1 \Longrightarrow _{\varPi _{AB},\delta } \dots \Longrightarrow _{\varPi _{AB},\delta } w_n \end{aligned}$$

consistent if and only if the sets of rules available in previous steps are not changed by rules being activated or blocked backwards in time.

In that way, at least for computationally complete AB-P systems, no increase in the computational power is obtained, as this condition can be checked by any computationally complete device, for example, a Turing machine.
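The following Python sketch illustrates one simple way of carrying out such a check (the encoding of a derivation as the list of rule labels applied in its steps, as well as all identifiers, are our own): the sets of available rules are computed once using only the forward relations and once also taking the backwards relations into account, and the derivation is considered consistent if the two results agree.

```python
from collections import defaultdict

def available_sets(applied, start_labels, A, B, forward_only):
    """applied: rule labels applied in steps 1..n (None = no rule applied);
    A, B: activation/blocking relations as (source, target, offset)."""
    act, blk = defaultdict(set), defaultdict(set)
    act[1] |= set(start_labels)
    for step, lab in enumerate(applied, start=1):
        if lab is None:
            continue
        for src, tgt, t in A:
            if src == lab and (t > 0 or not forward_only):
                act[step + t].add(tgt)
        for src, tgt, t in B:
            if src == lab and (t > 0 or not forward_only):
                blk[step + t].add(tgt)
    return {k: act[k] - blk[k] for k in range(1, len(applied) + 1)}

def consistent(applied, start_labels, A, B):
    # consistent: the backwards relations do not change the set of rules
    # available in any earlier step of the recorded derivation
    return (available_sets(applied, start_labels, A, B, True)
            == available_sets(applied, start_labels, A, B, False))

A1 = [("p", "q", 1), ("q", "p", -1)]   # q retroactively activates p
print(consistent(["p", "q"], ["p"], A1, []))                              # True
print(consistent(["p", "q"], ["p"], [("p", "q", 1), ("q", "r", -1)], []))  # False
```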

A very special case is to allow \(t=0\), but not to insist on consistency (consistency for \(t=0\) would mean that the set of rules available for the current step does not change when the activations and blockings for \(t=0\) are taken into account). As long as only activations of rules are considered, this merely opens the possibility for new rules to become applicable in the same step, and in the end the whole set of rules available for application is fixed. The situation changes if we also allow the blocking of rules, since then the application of some rules may block other rules which had been applicable; in the simplest case, consider \((p,p,0)\in B\), i.e., a rule which, when applied, blocks its own application. On the other hand, starting with a set of activated rules, only a finite number of candidate sets of rules to be applied in the current derivation step in a non-conflicting way has to be checked in order to find all possible continuations of the ongoing derivation. Hence, in the following section we shall focus especially on activations of rules backwards in time, i.e., on activations \((p,q,-t)\) with \(t>0\). As a special technical detail we mention that activations reaching back before the first step are simply ignored.

8 Going beyond Turing

We now discuss how to “go beyond Turing” by using a less conservative semantics for activating (and/or blocking) rules in preceding derivation steps.

The main idea is to consider infinite computations on given finite multisets; compare this with the idea of red-green Turing machines, see van Leeuwen and Wiedermann (2012), and of red-green register machines, see Aman et al. (2014).

There are several ways to look at these infinite computations and the development of the configurations, yet we have in mind the one based on the ideas elaborated in Freund et al. (2015): we consider sequences of computations where, in every computation of this sequence, the evolution of configurations starts again from the beginning with the initial activations and blockings, but also takes into account the activations and blockings of rules, including the backwards signals, obtained in the previous evolution of configurations. We call such an infinite sequence of computations valid if each prefix of length n of the computation becomes stable, i.e., for every k with \(1\le k\le n\) neither the kth configuration itself nor the set of rules applicable in the kth computation step changes any more. We consider the infinite sequence of stable configurations obtained in this way as the final computation on the given input.

To make such a process of obtaining an infinite sequence of computations easier to describe, we consider the following procedure: the first computation makes only one step, the second one makes two derivation steps, \(\ldots\), the nth computation, starting with the initial configuration and using the initial activations and blockings enlarged by the actual activations and blockings of rules, including the backwards signals, obtained in the previous computation, makes n derivation steps, \(\ldots\) We emphasize that all activations and blockings obtained in the nth computation of the given deterministic P system are taken over for the next computation.
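The following Python sketch (all names, and the toy stand-in for a concrete P system, are ours) shows the control loop of this procedure; for the finite number of rounds simulated here, stability is only approximated by comparing the last two computations, whereas in a valid infinite sequence every prefix eventually stops changing.

```python
def iterate_computations(run, rounds):
    # `run(n, carried)` is assumed to restart from the initial configuration,
    # perform n derivation steps and return the trace of configurations
    # together with the activations/blockings (including backwards signals)
    # it produced, here abstracted into an opaque `carried` value.
    carried, traces = None, []
    for n in range(1, rounds + 1):
        trace, carried = run(n, carried)     # nth computation: n steps
        traces.append(trace)
    prev, last = traces[-2], traces[-1]
    stable = 0
    while stable < len(prev) and prev[stable] == last[stable]:
        stable += 1
    return last[:stable]                     # longest currently stable prefix

GREEN_AT = 5          # toy: the simulated automaton turns green in step 5

def toy_run(n, carried):
    # `carried` = earliest step already marked YES in the previous
    # computation (None before the first computation); the backwards
    # signal moves this marker one step further back per computation
    yes_from = GREEN_AT if carried is None else carried
    trace = ["YES" if k >= yes_from else "NO" for k in range(1, n + 1)]
    new_carried = max(1, yes_from - 1) if n >= yes_from else yes_from
    return trace, new_carried

print(iterate_computations(toy_run, rounds=12))
# after enough rounds the whole prefix, in particular the first
# configuration, is stably marked YES
```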

Remark 4

That the condition for an infinite sequence of computations as defined above to be valid is a \(\varPi _3\)-condition can briefly be argued as follows: any valid infinite sequence of computations of a deterministic P system with activations and blockings of rules can be encoded as an \(\omega\)-word, i.e., as an infinite sequence of encodings of finite configurations together with the activated and blocked rules for the next computation. Observe that the number of steps ahead for which these activations and blockings have to be considered is bounded by a constant.

The condition that such an \(\omega\)-word represents a valid infinite sequence of computations can then be checked by a deterministic Turing machine: the deterministic derivation of the P system with activations and blockings of rules is simulated step by step, and the Turing machine goes back on its tape to the encoding of a configuration and of the multiset of rules to be applied to it whenever one of these two still changes. The input \(\omega\)-word then represents a valid sequence of computations of the deterministic P system with activations and blockings of rules if and only if the corresponding Turing machine has a complete non-oscillating run on it (a run is called complete if every position on the tape is reached, and non-oscillating if every position on the tape is reached only finitely often during the infinite computation).

The family of \(\omega\)-languages accepted by deterministic Turing machines with complete non-oscillating runs equals \(\varPi _3\); see, for example, Freund and Staiger (2001). Therefore, all \(\omega\)-languages representing exactly the valid sequences of computations of a deterministic P system with activations and blockings of rules are in \(\varPi _3\). \(\square\)

Provided it exists, we can just consider the stable first configuration of a valid infinite sequence of computations to see whether the input has been accepted. This idea can be used for all the computationally complete variants of P systems with activation and blocking of rules considered in Sect. 6 of this paper, as according to Remark 3 all of them allow for a deterministic simulation of deterministic register machines.

One interesting construction principle which can be applied for simulating red-green P systems/automata (starting in red) in all these variants is the following:

  • in order to capture even sequential P systems with activation and blocking of rules, we multiply the time offsets in the rule relations by two; hence, the original computation takes place in the odd derivation steps;

  • we use two new symbols YES and NO; in the initial configuration we add the new symbol NO;

  • each application of a rule p changing the color from red to green activates the rule \(p_Y:NO\rightarrow YES\) by the backwards activation \(\left( p,p_Y,-1\right)\) (no such rule is allowed to be activated in \(L_1\));

  • each application of a rule p changing the color from green to red activates the rule \(p_N:YES\rightarrow NO\) by the backwards activation \(\left( p,p_N,-1\right)\) (no such rule is allowed to be activated in \(L_1\));

  • the mind change (change of color) is propagated backwards by using the backwards activation relations \(\left( p_N,p_N,-2\right)\) and \(\left( p_Y,p_Y,-2\right)\), respectively;

  • these rules, labeled \(p_N\) and \(p_Y\), are then used “backwards” in every even derivation step; the backwards propagation stops when one of these rules is applied in the second derivation step (as already mentioned, we assume that backwards activations of rules have no effect if they would activate a rule before step 1);

  • if the computation of a red-green P automaton stabilizes in green, i.e., no mind (color) change from green to red takes place any more, then, of course, no changes in the second configuration occur any more, i.e., it has become stable and therefore available for “reading out” the result of the computation.

It is interesting to mention that only “backwards” rule activations are used in the algorithm described above, but no “backwards” rule blockings.
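The following Python sketch (our own encoding of rule relations as triples; the new rules \(p_Y:NO\rightarrow YES\) and \(p_N:YES\rightarrow NO\) are only represented by their labels and would still have to be added to the rule set) shows how the relation sets of a given AB-P system can be expanded according to the construction described in the list above.

```python
def red_green_expansion(A, B, red_to_green, green_to_red):
    """A, B: sets of (source, target, offset); red_to_green / green_to_red:
    labels of rules whose application changes the colour of the simulated
    red-green automaton."""
    A2 = {(src, tgt, 2 * t) for src, tgt, t in A}   # double all time offsets
    B2 = {(src, tgt, 2 * t) for src, tgt, t in B}
    for p in red_to_green:
        A2.add((p, f"{p}_Y", -1))            # activate NO -> YES one step back
        A2.add((f"{p}_Y", f"{p}_Y", -2))     # and propagate it further back
    for p in green_to_red:
        A2.add((p, f"{p}_N", -1))            # activate YES -> NO one step back
        A2.add((f"{p}_N", f"{p}_N", -2))
    return A2, B2

A2, B2 = red_green_expansion({("l_p", "l_q", 1)}, set(),
                             red_to_green={"l_q"}, green_to_red=set())
print(sorted(A2))
# [('l_p', 'l_q', 2), ('l_q', 'l_q_Y', -1), ('l_q_Y', 'l_q_Y', -2)]
```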

We conclude that with every kind of P system with activation and blocking of rules which allows for the deterministic simulation of register machines we can simulate the corresponding variant of red-green P automata, which characterize the \(\varSigma _2\)-sets of the Arithmetical Hierarchy (see Budnik 2006); i.e., with such systems we get at least \(\varSigma _2\); compare this with the results obtained in Freund et al. (2015, 2016). Yet \(\varSigma _2\) is only a lower bound: as already pointed out in Remark 4, the condition for a sequence of computations to be valid is a \(\varPi _3\)-condition, which implies that the upper bound is \(\varPi _3\). Hence, one of the most challenging open problems is to find out whether we can obtain more than \(\varSigma _2\).

9 Conclusion

We have considered the concept of regulating the applicability of rules based on the application of rules in the preceding step(s) within a very general model for hierarchical P systems and for the main derivation modes. These concepts of activation and blocking of rules can also be extended in a natural way to the many variants of tissue P systems, i.e., networks of cells where a rule to be applied can affect multiple cells at the same time.

For the maximally parallel derivation modes and for P systems with activation and blocking of non-cooperative rules which do not allow derivation steps without applying any rule, we have established several relations to ET0L-systems as well as to P systems with promoters and inhibitors.

Already for the sequential derivation mode and for the set modes of derivation, computational completeness is reached even with non-cooperative rules, using activation of rules with look-ahead two and blocking of rules for the next step, provided that we allow at most one derivation step without any rule being applied before some rule must be applied again to continue the derivation.

Using a special semantics for activating and/or blocking rules in preceding derivation steps, we could even show how to “go beyond Turing”, obtaining \(\varSigma _2\) as a lower bound and \(\varPi _3\) as an upper bound. Another interesting topic for future research is to investigate how powerful such AB-P systems are in generating \(\omega\)-strings.