1 Introduction and Motivation

Proof search is one of the most general ways of deciding formulas of expressive logics, both automatically and interactively. In particular, proof systems can often be found to yield optimal decision algorithms, in terms of asymptotic complexity. To this end, we now know how to extract bounds for proof search in terms of various properties of the proof system at hand. For instance we may establish:

  • nondeterministic time bounds via proof complexity, e.g. [6, 7, 13];

  • (non)deterministic space bounds via the depth of proofs or search spaces, and loop-checking, e.g. [3, 12, 23];

  • deterministic or co-nondeterministic time bounds via systems of invertible rules, see e.g. [21, 28].

However, despite considerable progress in the field, there still remains a gap between the obtention of (co-)nondeterministic time bounds, such as \(\mathbf {NP}\) or \(\mathbf {coNP}\), and space bounds such as \(\mathbf {PSPACE}\). Phrased differently, while we have many logics we know to be \(\mathbf {PSPACE}\)-complete (intuitionistic propositional logic, various modal logics, etc.), we have very little understanding of their fragments corresponding to subclasses of \(\mathbf {PSPACE}\).

An alternative view of space complexity is in terms of alternating time complexity, where a Turing machine may have both existential (i.e. nondeterministic) and universal (i.e. co-nondeterministic) branching states. In this way \(\mathbf {PSPACE}\) is known to be equivalent to alternating polynomial time [4]. This naturally yields a hierarchy of classes delineated by the number of alternations permitted in an accepting run, known as the polynomial hierarchy (\(\mathbf {PH}\)) [26], of which both \(\mathbf {NP}\) and \(\mathbf {coNP}\) are special cases. An almost exact instantiation of this (in a non-uniform setting) is the QBF hierarchy, where formulae are distinguished by their number of quantifier alternations in prenex notation. This raises the following open-ended question:

Question 1

How do we identify natural fragments of \(\mathbf {PSPACE}\)-complete logics complete for levels of the polynomial hierarchy? In particular, can proof theoretic methods help?

In previous work, [8], we considered this question for intuitionistic propositional logic, obtaining partial answers for certain expressive fragments. In this work we consider the case of multiplicative additive linear logic (\(\mathsf {MALL} \)) [11], and its affine variant which admits weakening (\(\mathsf {MALL} {\mathsf {w} }\)); both of these are often seen as the prototypical systems for \(\mathbf {PSPACE}\) since their inference rules constitute the abstract templates of terminating proof search. Indeed, both \(\mathsf {MALL} \) and \(\mathsf {MALL} {\mathsf {w} }\) are well-known to be \(\mathbf {PSPACE}\)-complete [17, 18], results that are subsumed by this work. By considering a ‘focussed’ presentation of \(\mathsf {MALL} (\mathsf {w} )\), we analyse proof search to identify classes of theorems belonging to each level of \(\mathbf {PH}\). To demonstrate the accuracy of this method, we also show that these classes are, in fact, complete for their respective levels, via encodings from true quantified Boolean formulas (QBFs) of appropriate quantifier complexity, cf. [4].

The notion of focussing is a relatively recent development in structural proof theory that has emerged over the last 20-30 years, e.g. [1, 14, 16].

Focussed systems elegantly delineate the phases of invertible and non-invertible inferences in proofs, allowing the natural obtention of alternating time bounds for a logic. Furthermore, they significantly constrain the number of local choices available, resulting in reduced nondeterminism during proof search, while remaining complete. This result is known as the ‘focussing’ or ‘focalisation’ theorem. Such systems thus serve as a natural starting point for identifying fragments of \(\mathbf {PSPACE}\)-complete logics complete for levels of \(\mathbf {PH}\).

One shortfall of focussed systems is that, in their usual form, they do not adequately account for deterministic computations, which correspond to invertible rules that do not branch, and so the natural measure of complexity there (‘decide depth’) can considerably overestimate the alternating time complexity of a theorem. In the worst case this can lead to rather degenerate bounds, exemplified in [8] where an encoding of \(\mathsf {SAT} \) in intuitionistic logic requires a linear decide depth, despite being \(\mathbf {NP}\)-complete. To deal with this issue, [8] proposed a more controlled form of focussing called over-focussing, which allows deterministic steps within synchronous phases, but as noted there this method is not available in \(\mathsf {MALL} \) due to the context-splitting rule. Instead, in this work we retain the classical abstract notion of focussing, but split the usual invertible, or ‘asynchronous’, phase into a ‘deterministic’ phase, with non-branching invertible rules, and a ‘co-nondeterministic’ phase, with branching invertible rules. In this way, when expressing proof search as an alternating predicate, a \(\forall \) quantifier need only be introduced in a co-nondeterministic phase. It turns out that this adaptation suffices to obtain the tight bounds we are after.

This is an extended version of the conference paper Focussing, \(\mathsf {MALL} \) and the polynomial hierarchy [9] presented at IJCAR ’18. The main differences in this work are the following:

  • More proof details are provided throughout, in particular for the various intermediate results of Sects. 4, 5 and 6.

  • A whole new section, Sect. 7, is included which extends the main results of [9] to pure \(\mathsf {MALL} \), i.e. without weakening.

  • The exposition is generally expanded, with further commentary and insights throughout.

In general, Sects. 2–6 of [9] cover the same content as their respective sections in this work, although theorem numbers are different.

This paper is structured as follows. In Sect. 2 we present preliminaries on QBFs and alternating time complexity, and in Sect. 3 we present preliminaries on \(\mathsf {MALL} (\mathsf {w} )\) and focussing. In Sect. 4 we present an encoding of true QBFs into \(\mathsf {MALL} {\mathsf {w} }\), tracking the association between quantifier complexity and alternation complexity of focussed proof search. In Sect. 5 we explain how provability predicates for focussed systems may be obtained as QBFs, with quantifier complexity calibrated appropriately with alternation complexity (the ‘focussing hierarchy’). In Sect. 6 we show how this measure of complexity can be feasibly approximated to yield a bona fide encoding of \(\mathsf {MALL} {\mathsf {w} }\) back into true QBFs. Furthermore, we show that the composition of the two encodings preserves quantifier complexity, thus yielding fragments of \(\mathsf {MALL} {\mathsf {w} }\) complete for each level of the polynomial hierarchy. Sect. 7 extends this approach to pure \(\mathsf {MALL} \) by carefully composing with a certain encoding of \(\mathsf {MALL} {\mathsf {w} }\) into \(\mathsf {MALL} \). Finally, in Sect. 8 we give some concluding remarks and further perspectives on our presentation of focussing.

2 Preliminaries on Logic and Computational Complexity

In this section we will recall some basic theory of Boolean logic, and its connections to alternating time complexity.

This section follows Sect. 2 of [9], except that we include constants (or ‘units’) for generality here, and we also include a presentation of ‘Boolean Truth Trees’ in Sect. 2.2.

2.1 Second-Order Boolean Logic

Quantified Boolean formulas (QBFs) are obtained from the language of classical propositional logic by adding ‘second-order’ quantifiers, varying over propositions. Formally, let us fix some set \(\mathsf {Var} \) of propositional variables, written x, y etc. QBFs, written \(\varphi , \psi \) etc., are generated as follows:

$$\begin{aligned} \varphi \quad {::=} \quad \mathsf {f} \ | \ \mathsf {t} \ | \ x \ |\ \overline{x} \ | \ (\varphi \vee \varphi ) \ | \ (\varphi \wedge \varphi ) \ | \ \exists x . \varphi \ | \ \forall x . \varphi \end{aligned}$$

We write \(\mathsf {f} \) and \(\mathsf {t} \) for the classical truth constants false and true respectively, so that they are not confused with the units from linear logic later.

The formula \(\overline{x}\) stands for the negation of x, and all formulas we deal with will be in De Morgan normal form, i.e. with negation restricted to variables as in the grammar above. Nonetheless, we may sometimes write \(\overline{\varphi }\) to denote the De Morgan dual of \(\varphi \), generated by the following identities:

$$\begin{aligned} \overline{\overline{x} }\ {:=}\ x \qquad \begin{array}{rcl} \overline{\mathsf {f} }\ &{}\ {:=}\ &{}\ \mathsf {t} \\ \overline{\mathsf {t} }\ &{}\ {:=}\ &{}\ \mathsf {f} \end{array} \qquad \begin{array}{rcl} \overline{(\varphi \vee \psi )} \ &{}\ {:=}\ &{}\ \overline{\varphi }\wedge \overline{\psi }\\ \overline{(\varphi \wedge \psi )} \ &{}\ {:=}\ &{}\ \overline{\varphi }\vee \overline{\psi }\end{array} \qquad \begin{array}{rcl} \overline{\exists x . \varphi } \ &{}\ {:=}\ &{}\ \forall x . \overline{\varphi }\\ \overline{\forall x . \varphi } \ &{}\ {:=}\ &{}\ \exists x . \overline{\varphi }\end{array} \end{aligned}$$

A formula is closed (or a sentence) if all its variables are bound by a quantifier (\(\exists \) or \(\forall \)). We write \(|\varphi |\) for the number of occurrences of literals (i.e. x or \(\overline{x}\)) in \(\varphi \).

An assignment is a function \(\alpha : \mathsf {Var} \rightarrow \{0,1\}\), here construed as a set \(\alpha \subseteq \mathsf {Var} \) in the usual way. We define the satisfaction relation between an assignment \(\alpha \) and a formula \(\varphi \), written \(\alpha \vDash \varphi \), in the usual way:

$$\begin{aligned}&\alpha \vDash \mathsf {t} \qquad \alpha \nvDash \mathsf {f} \qquad \alpha \vDash x \iff x \in \alpha \qquad \alpha \vDash \overline{x} \iff x \notin \alpha \\&\alpha \vDash \varphi \vee \psi \iff \alpha \vDash \varphi \text { or } \alpha \vDash \psi \qquad \alpha \vDash \varphi \wedge \psi \iff \alpha \vDash \varphi \text { and } \alpha \vDash \psi \\&\alpha \vDash \exists x . \varphi \iff \alpha \cup \{x\} \vDash \varphi \text { or } \alpha \setminus \{x\} \vDash \varphi \\&\alpha \vDash \forall x . \varphi \iff \alpha \cup \{x\} \vDash \varphi \text { and } \alpha \setminus \{x\} \vDash \varphi \end{aligned}$$

Definition 2

(Second-order Boolean logic) A QBF \(\varphi \) is satisfiable if there is some assignment \(\alpha \subseteq \mathsf {Var} \) such that \(\alpha \vDash \varphi \). It is valid if \(\alpha \vDash \varphi \) for every assignment \(\alpha \subseteq \mathsf {Var} \). If \(\varphi \) is closed then satisfiability and validity coincide, and we may simply say that \(\varphi \) is true, written \(\vDash \varphi \).

Second-order Boolean logic (\(\mathsf {CPL} 2\)) is the set of true QBFs.

In practice, when dealing with a given formula \(\varphi \), we will only need to consider assignments \(\alpha \) containing only variables that occur in \(\varphi \). We will assume this later when we discuss predicates (or ‘languages’) computed by open QBFs.

We point out that, from the logical point of view, it suffices to work with only closed QBFs, with satisfiability recovered by prenexing \(\exists \) quantifiers and validity recovered by prenexing \(\forall \) quantifiers.
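To make the satisfaction relation concrete, the standard clauses can be sketched as a short recursive evaluator. The tuple-based formula representation and the function name `sat` are ad hoc choices for this illustration, not notation from the text:

```python
# QBFs as nested tuples: ('t',), ('f',), ('var', x), ('nvar', x) for a
# negated variable x, ('or', p, q), ('and', p, q), ('exists', x, p),
# ('forall', x, p).  An assignment alpha is the set of its true variables.
def sat(alpha, phi):
    """Decide alpha |= phi by the usual satisfaction clauses."""
    tag = phi[0]
    if tag == 't':
        return True
    if tag == 'f':
        return False
    if tag == 'var':
        return phi[1] in alpha
    if tag == 'nvar':
        return phi[1] not in alpha
    if tag == 'or':
        return sat(alpha, phi[1]) or sat(alpha, phi[2])
    if tag == 'and':
        return sat(alpha, phi[1]) and sat(alpha, phi[2])
    if tag in ('exists', 'forall'):
        x, p = phi[1], phi[2]
        branches = (sat(alpha | {x}, p), sat(alpha - {x}, p))
        return any(branches) if tag == 'exists' else all(branches)
    raise ValueError(f'unknown tag: {tag}')
```

For instance, `sat(set(), phi)` decides truth of a closed `phi`, matching the convention that a closed QBF is true when it is satisfiable and/or valid.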

Definition 3

(QBF hierarchy) For \(k \ge 0\) we define the following classes:

  • \(\varSigma ^q_0 = \varPi ^q_0\) is the set of quantifier-free QBFs.

  • \(\varSigma ^q_{k+1} \supseteq \varPi ^q_k\) and, if \(\varphi \in \varSigma ^q_{k+1}\), then so is \(\exists x. \varphi \).

  • \(\varPi ^q_{k+1} \supseteq \varSigma ^q_k\) and, if \(\varphi \in \varPi ^q_{k+1}\), then so is \(\forall x. \varphi \).

Notice that \(\varphi \in \varSigma ^q_k\) if and only if \(\overline{\varphi }\in \varPi ^q_k\), by the definition of De Morgan duality.

We have only defined the classes above for ‘prenexed’ QBFs, i.e. with all quantifiers at the front. It is well known that any QBF is equivalent to such a formula. For this reason we will henceforth assume that any QBF we deal with is in prenex normal form. In this case we call its quantifier-free part, i.e. its largest quantifier-free subformula, the matrix.
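Under this prenex convention, the least class \(\varSigma ^q_k\) or \(\varPi ^q_k\) containing a given QBF can be read off by counting the maximal blocks of like quantifiers in its prefix. A sketch, reusing an ad hoc tuple representation (quantifiers as `('exists', x, body)` / `('forall', x, body)`; the function name is invented for illustration):

```python
def prefix_class(phi):
    """Least class of a prenex QBF, as ('sigma', k) or ('pi', k).

    A quantifier-free formula gets ('sigma', 0), recalling that
    Sigma^q_0 = Pi^q_0.  Since the classes are cumulative
    (Sigma^q_{k+1} contains Pi^q_k), this computes the least one."""
    blocks = []  # maximal runs of like quantifiers, outermost first
    while phi[0] in ('exists', 'forall'):
        if not blocks or blocks[-1] != phi[0]:
            blocks.append(phi[0])
        phi = phi[2]  # step under the quantifier
    if not blocks:
        return ('sigma', 0)
    return ('sigma' if blocks[0] == 'exists' else 'pi', len(blocks))
```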

2.2 Boolean Truth Trees

In this work we will not need to formally deal with any deduction system for \(\mathsf {CPL} 2\), although we point out that there is a simple system whose proof search dynamics closely match quantifier complexity, e.g. studied in [15]. We will briefly present a simplified system in order to exemplify the connection with alternating time complexity.

Boolean Truth Trees (BTTs) are a proof system whose lines are closed prenexed QBFs. Its inference rules are as follows,

$$\begin{aligned} \frac{}{\tau }\, tr \qquad \frac{\varphi [\mathsf {t} /x] \qquad \varphi [\mathsf {f} /x]}{\forall x . \varphi }\,\forall \qquad \frac{\varphi [\mathsf {t} /x]}{\exists x . \varphi }\,\exists \qquad \frac{\varphi [\mathsf {f} /x]}{\exists x . \varphi }\,\exists \end{aligned}$$

where \(\tau \) varies over true quantifier-free sentences, i.e. true \((\vee ,\wedge )\)-combinations of \(\mathsf {f} \) and \(\mathsf {t} \). Note that we could have further broken down the \( tr \) rule into several local computation rules, but that is independent of the current analysis.

Example 4

Temporarily write \(x \veebar y\) for the exclusive-or function, i.e. \(x \veebar y\) is true if either x is true or y is true but not both. The following is a BTT proving :

Notice that a \(\forall \) step is invertible, i.e. its conclusion is true just if every premiss is true. On the other hand, an existential formula is true just if some \(\exists \) step applies. In this way we can describe the proof search process itself by some ‘alternating’ predicate whose matrix is just a truth-checker for quantifier-free sentences, a deterministic computation. It is not hard to see that the alternations between \(\forall \) and \(\exists \) in such a predicate will, in this case, match the quantifier complexity of the input formula, by inspection of the rules. In order to make all of this more precise, we will need to speak more formally about alternating predicates and alternating complexity.
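The search dynamics just described can be sketched as a recursive procedure: a \(\forall \) step checks every premiss, an \(\exists \) step searches for some premiss, and the base case is the deterministic truth-check of the \( tr \) rule. The representation (nested tuples, quantifier instantiation by substitution of constants) is an ad hoc choice for this sketch:

```python
def subst(phi, x, val):
    """Instantiate variable x (and its negation) by a truth constant."""
    tag = phi[0]
    if tag in ('t', 'f'):
        return phi
    if tag == 'var':
        return phi if phi[1] != x else (('t',) if val else ('f',))
    if tag == 'nvar':
        return phi if phi[1] != x else (('f',) if val else ('t',))
    if tag in ('or', 'and'):
        return (tag, subst(phi[1], x, val), subst(phi[2], x, val))
    # quantifiers: an inner binding of the same x shadows the outer one
    return phi if phi[1] == x else (tag, phi[1], subst(phi[2], x, val))

def true_qf(phi):
    """Deterministic base case: truth of a quantifier-free sentence."""
    tag = phi[0]
    if tag in ('t', 'f'):
        return tag == 't'
    if tag == 'or':
        return true_qf(phi[1]) or true_qf(phi[2])
    return true_qf(phi[1]) and true_qf(phi[2])  # tag == 'and'

def btt_search(phi):
    """Bottom-up BTT search on a closed prenex QBF: a forall step is
    invertible (every premiss must be provable), while an exists step
    is a nondeterministic choice (some premiss must be provable)."""
    tag = phi[0]
    if tag == 'forall':
        return (btt_search(subst(phi[2], phi[1], True)) and
                btt_search(subst(phi[2], phi[1], False)))
    if tag == 'exists':
        return (btt_search(subst(phi[2], phi[1], True)) or
                btt_search(subst(phi[2], phi[1], False)))
    return true_qf(phi)
```

Reading `and` as a \(\forall \) branch and `or` as an \(\exists \) branch, the alternation pattern of this search matches the quantifier prefix of the input, as observed above.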

2.3 Alternating Time Complexity

In computation we are used to the distinction between deterministic and nondeterministic computation. Intuitively, co-nondeterminism is just the ‘dual’ of nondeterminism: at the machine level it is captured by ‘nondeterministic’ Turing machines that accept when every run is accepting, not just some run as in the case of usual nondeterminism. From here alternating Turing machines generalise both the nondeterministic and co-nondeterministic models by allowing both universally branching states and existentially branching states.

Intuitions aside, we will now introduce the necessary concepts assuming only a familiarity with deterministic and nondeterministic Turing machines and their complexity measures. The reader may find a comprehensive introduction to relevant machine models and complexity classes in [24].

For a language L of strings over some finite alphabet, we write \(\mathbf {NP}(L)\) for the class of languages accepted in polynomial time by some nondeterministic Turing machine which may, at any point, query in constant time whether some word is in L or not. We extend this to classes of languages \({\mathcal {C}}\), writing \(\mathbf {NP}({\mathcal {C}})\) for \(\bigcup \limits _{L \in {\mathcal {C}}} \mathbf {NP}(L)\). We also write \(\mathbf {co}{\mathcal {C}}\) for the class of languages whose complements are in \({\mathcal {C}}\).

Definition 5

(Polynomial hierarchy, [26]) We define the following classes:

  • \(\varSigma ^p_0 = \varPi ^p_0 {:=}{\mathbf {P}}\).

  • \(\varSigma ^p_{k+1} {:=}\mathbf {NP}(\varSigma ^p_k)\).

  • \(\varPi ^p_{k+1} {:=}\mathbf {co}\varSigma ^p_{k+1}\).

The polynomial hierarchy (\(\mathbf {PH}\)) is \(\bigcup \limits _{k = 0}^\infty \varSigma ^p_k = \bigcup \limits _{k = 0}^\infty \varPi ^p_k\).

We may more naturally view the polynomial hierarchy as the bounded-quantifier-alternation fragments of QBFs we introduced earlier. For this we construe \(\varSigma ^q_k\) and \(\varPi ^q_k\) as classes of finite languages, by associating with a QBF \(\varphi (x_1, \dots , x_n)\) (with all free variables indicated) the class of (finite) assignments \(\alpha \subseteq \{x_1, \dots , x_n \}\) satisfying it. These assignments may themselves be seen as binary strings of length n which encode their characteristic functions in the usual way.

Definition 6

(Evaluation problems) Let \({\mathcal {C}} \) be a set of QBFs. \({\mathcal {C}}\)-evaluation is the problem of deciding, given a formula \(\varphi (\mathbf {x}) \in {\mathcal {C}}\), with all free variables indicated, and an assignment \(\alpha \subseteq \mathbf {x}\), whether \(\alpha \vDash \varphi (\mathbf {x})\).

Theorem 7

(cf. [4]) For \(k \ge 1\), we have the following:

  1. \(\varSigma ^q_k \)-evaluation is \(\varSigma ^p_k\)-complete.

  2. \(\varPi ^q_k\)-evaluation is \(\varPi ^p_k\)-complete.

Corollary 8

For \(k\ge 1\), we have the following:

  1. \(\{ \varphi \in \varSigma ^q_k : \varphi \text { is closed and true} \}\) is \(\varSigma ^p_k\)-complete.

  2. \(\{ \varphi \in \varPi ^q_k : \varphi \text { is closed and true} \}\) is \(\varPi ^p_k\)-complete.

Proof

Membership is immediate from Theorem 7, evaluating under the assignment \(\varnothing \). For hardness, notice that we may always simplify a QBF under an assignment \(\alpha \) to a closed formula of the same quantifier complexity as follows: first, replace each free occurrence of a variable x with \(\mathsf {t} \) if \(x \in \alpha \) and \(\mathsf {f} \) otherwise. Now simply apply the following truth-preserving rewrite rules,

$$\begin{aligned} \begin{array}{rccclcrcccl} \mathsf {f} \vee \varphi &{}\rightarrow &{}\varphi &{}\leftarrow &{} \varphi \vee \mathsf {f} &{}\qquad &{}\mathsf {f} \wedge \varphi &{}\rightarrow &{}\mathsf {f} &{}\leftarrow &{}\varphi \wedge \mathsf {f} \\ \mathsf {t} \vee \varphi &{}\rightarrow &{}\mathsf {t} &{}\leftarrow &{}\varphi \vee \mathsf {t} &{}\qquad &{} \mathsf {t} \wedge \varphi &{}\rightarrow &{}\varphi &{}\leftarrow &{}\varphi \wedge \mathsf {t} \end{array} \end{aligned}$$
(1)

\(\square \)
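The hardness construction in this proof is easy to mechanise: substitute constants for the free variables according to \(\alpha \), then normalise by the rewrite rules (1). A sketch under an ad hoc tuple representation (the function names are invented for illustration):

```python
def assign(phi, alpha, bound=frozenset()):
    """Replace free variable occurrences by constants according to alpha."""
    tag = phi[0]
    if tag in ('t', 'f'):
        return phi
    if tag == 'var':
        return phi if phi[1] in bound else (('t',) if phi[1] in alpha else ('f',))
    if tag == 'nvar':
        return phi if phi[1] in bound else (('f',) if phi[1] in alpha else ('t',))
    if tag in ('or', 'and'):
        return (tag, assign(phi[1], alpha, bound), assign(phi[2], alpha, bound))
    # quantifier case: its variable is bound in the body
    return (tag, phi[1], assign(phi[2], alpha, bound | {phi[1]}))

def simplify(phi):
    """Normalise by the truth-preserving rewrite rules (1), bottom-up."""
    tag = phi[0]
    if tag in ('t', 'f', 'var', 'nvar'):
        return phi
    if tag in ('exists', 'forall'):
        return (tag, phi[1], simplify(phi[2]))
    l, r = simplify(phi[1]), simplify(phi[2])
    if tag == 'or':
        if ('t',) in (l, r):
            return ('t',)
        return r if l == ('f',) else (l if r == ('f',) else ('or', l, r))
    if ('f',) in (l, r):  # tag == 'and'
        return ('f',)
    return r if l == ('t',) else (l if r == ('t',) else ('and', l, r))
```

Note that the quantifier prefix is untouched, so the resulting closed formula remains in the same class \(\varSigma ^q_k\) or \(\varPi ^q_k\) as the input.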

3 Linear Logic and Proof Search

Linear logic was introduced by Girard [11] to decompose the mechanics of cut-elimination by means of different connectives. It naturally subsumes both classical and intuitionistic logic by various embeddings, and has furthermore been influential in the theoretical foundations of logic programming via the study of focussing, which constrains the level of nondeterminism in proof search, cf. [1, 5, 10]. In this work we only consider the fragment multiplicative additive linear logic (\(\mathsf {MALL} \)) and its version with ‘weakening’ (\(\mathsf {MALL} {\mathsf {w} }\)).

This section mostly follows Sect. 3 of [9], mainly differing in that we here include units in the formulation of \(\mathsf {MALL} \) and \(\mathsf {MALL} {\mathsf {w} }\), for generality, and give some further proof details.

3.1 Multiplicative Additive Linear Logic

For convenience, we work with the same set \(\mathsf {Var} \) of variables that we used for QBFs. To distinguish them from QBFs, we use the metavariables A, B,  etc. for \(\mathsf {MALL} (\mathsf {w} )\) formulas, generated as follows:

$$\begin{aligned} A \quad {::=} \quad x \ | \ \overline{x} \ | \ 1 \ | \ \bot \ | \ 0 \ | \ \top \ | \ (A \otimes A) \ | \ (A \parr A) \ | \ (A \oplus A) \ | \ (A \mathbin {\& } A) \end{aligned}$$

\(\otimes \) and \(\parr \) are called multiplicative connectives, and \(\oplus \) and \(\mathbin {\& }\) are called additive connectives. Like for QBFs, we have restricted negation to the variables, thanks to De Morgan duality in \(\mathsf {MALL} \). Again, we may write \(\overline{A}\) for the De Morgan dual of A, which is generated similarly to the case of QBFs:

$$\begin{aligned} \overline{\overline{x} }\ {:=}\ x \qquad \begin{array}{rcl} \overline{1} \ &{}\ {:=}\ &{}\ \bot \\ \overline{\bot } \ &{}\ {:=}\ &{}\ 1 \\ \overline{0} \ &{}\ {:=}\ &{}\ \top \\ \overline{\top } \ &{}\ {:=}\ &{}\ 0 \end{array} \qquad \begin{array}{rcl} \overline{(A \otimes B)} \ &{}\ {:=}\ &{}\ \overline{A} \parr \overline{B}\\ \overline{(A \parr B)} \ &{}\ {:=}\ &{}\ \overline{A} \otimes \overline{B}\end{array} \qquad \begin{array}{rcl} \overline{(A \oplus B)} \ &{}\ {:=}\ &{}\ \overline{A} \mathbin {\& } \overline{B}\\ \overline{(A \mathbin {\& } B)} \ &{}\ {:=}\ &{}\ \overline{A} \oplus \overline{B}\end{array} \end{aligned}$$

Due to De Morgan duality, we will work only with ‘one-sided’ calculi for \(\mathsf {MALL} \) and \(\mathsf {MALL} {\mathsf {w} }\), where all formulas occur to the right of the sequent arrow. This means we will have fewer cases to consider for formal proofs, although later we will also informally adopt a two-sided notation when it is convenient, cf. Rmk. 14.

Definition 9

(\(\mathsf {MALL} (\mathsf {w} )\)) A cedent, written \(\varGamma , \varDelta \) etc., is a multiset of formulas, delimited by commas ‘,’, and a sequent is an expression \(\vdash \varGamma \). The system (cut-free) \(\mathsf {MALL} \) is given in Fig. 1. \(\mathsf {MALL} {\mathsf {w} }\), a.k.a. affine \(\mathsf {MALL} \), is defined in the same way, only with the \(( id )\) rule and (1) rule replaced by the following analogues:

(2)
Fig. 1

The system (cut-free) \(\mathsf {MALL} \), where \(i \in \{0,1 \}\)

We have not included the ‘cut’ rule, thanks to cut-elimination for linear logic [11]. We will only study cut-free proofs in this paper. Notice that, following the tradition in linear logic, we write ‘\(\vdash \)’ for the sequent arrow, though we point out that the deduction theorem does not actually hold w.r.t. linear implication. For the affine variant, we have simply built weakening into the initial steps, since it may always be permuted upwards in a proof:

Proposition 10

(Weakening admissibility) The following rule, called weakening, is admissible in \(\mathsf {MALL} {\mathsf {w} }\):

Proof

This is a routine (and indeed well-known) argument by induction on the size of a subproof that roots a weakening step. The initial sequents of \(\mathsf {MALL} {\mathsf {w} }\) are already closed under weakening, and the inductive cases are as follows:

\(\square \)

3.2 (Multi-)focussed Systems for Proof Search

Focussed systems for \(\mathsf {MALL} \) (and linear logic in general) have been widely studied [1, 5, 10, 14]. The idea is to associate polarities to the connectives based on whether their introduction rule is invertible (negative) or their dual’s introduction rule is invertible (positive). Now bottom-up proof search can be organised in a manner where, once we have chosen a positive principal formula to decompose (the ‘focus’), we may continue to decompose its auxiliary formulas until the focus becomes negative. The main result herein is the completeness of such proof search strategies, known as the focussing theorem (a.k.a. the ‘focalisation theorem’).

It is known that ‘multi-focussed’ variants, where one may have many foci in parallel, lead to certain ‘canonical’ representations of proofs for \(\mathsf {MALL} \) [5]. Furthermore, the alternation behaviour of focussed proof search can be understood via a game theoretic approach [10]. However, such frameworks unfortunately fall short of characterising the alternating complexity of proof search in a faithful way. The issue is that the usual focussing methodology does not make any account for deterministic computations, which correspond to invertible rules that do not branch. Such rules are usually treated just like the other invertible rules, which in general comprise the ‘co-nondeterministic’ stages of proof search.

For these reasons we introduce a bespoke presentation of (multi-)focussing for \(\mathsf {MALL} \), with a designated deterministic phase dedicated to invertible non-branching rules, in particular the \(\parr \) rule. To avoid conflicts with more traditional presentations, we call the other two phases nondeterministic and co-nondeterministic rather than ‘synchronous’ and ‘asynchronous’ respectively. This terminology also reinforces the intended connections to computational complexity.

Henceforth we use a, b,  etc. to vary over atomic formulas. We also use the following metavariables to vary over formulas with the indicated main connectives:

‘Vectors’ are used to vary over multisets of associated formulas, e.g. \(\mathbf {P}\) varies over multisets of P-formulas. We may sometimes view these as sequences or even sets for convenience. Sequents may now contain a single delimiter \(\Downarrow \) or \(\Uparrow \).

Definition 11

(Multi-focussed proof system) We define the (multi-focussed) system \(\mathsf {F} \mathsf {MALL} \) in Fig. 2. The system \(\mathsf {F} \mathsf {MALL} {\mathsf {w} }\) is the same as \(\mathsf {F} \mathsf {MALL} \) but with the \(( id ) \) and (1) rules replaced by the rules \(( wid )\) and (w1) from (2).

Fig. 2

The system (cut-free) \(\mathsf {F} \mathsf {MALL} \), where \(\mathbf {P}'\) and \(\mathbf {M}\) must be nonempty and \(i \in \{0,1\}\)

A proof of a formula A is simply a proof of the sequent \(\vdash A\), i.e. there is no need to pre-decorate with arrows, as opposed to usual presentations, thanks to the deterministic phase. The rules \(D\) and \({\bar{D}}\) are called decide and co-decide respectively, while \(R\) and \({\bar{R}}\) are called release and co-release respectively.

Notice that the determinism of \(\otimes \) plays no role in this one-sided calculus, but in a two-sided calculus we would have a deterministic left rule for \(\otimes \) that is analogous to the given \(\parr \) rule (on the right). This is the same as how the ‘negativity’ (in the sense of non-invertibility on the left) of \(\parr \) plays no role in this calculus. One may argue that a \(\otimes \) on the left is morally just a comma, but such a simplification sacrifices the duality of connectives being reflected in terms of duality in computational complexity: if \(\parr \) is deterministic, then so should be its dual, \(\otimes \). Indeed, in the above classification, O-formulas are dual to O-formulas, P-formulas dual to N-formulas, and Q-formulas dual to M-formulas.

As usual for multi-focussed systems, the analogous focussed system can be recovered by restricting rules to only one focussed formula in nondeterministic phases. Moreover, in our presentation, we may also impose the dual restriction, that there is only one formula in ‘co-focus’ during a co-nondeterministic phase:

Definition 12

(Simply (co-)focussed subsystems) An \(\mathsf {F} \mathsf {MALL} \) proof is focussed if \(\mathbf {P}'\) in \(D\) is always a singleton. It is co-focussed if \(\mathbf {M}\) in \({\bar{D}}\) is always a singleton. If a proof is both focussed and co-focussed then we say it is bi-focussed.

The notion of ‘co-focussing’ is not usually possible for (multi-)focussed systems since the invariant of being a singleton is not usually maintained in an asynchronous phase, due to the \(\parr \) rule. However we treat \(\parr \) as deterministic rather than co-nondeterministic, and we can see that the \(\mathbin {\& }\)-rule indeed maintains the invariant of having just one formula on the right of \(\Uparrow \).

Theorem 13

(Focussing theorem) We have the following:

  1. The class of bi-focussed \(\mathsf {F} \mathsf {MALL} \)-proofs is complete for \(\mathsf {MALL} \).

  2. The class of bi-focussed \(\mathsf {F} \mathsf {MALL} {\mathsf {w} }\)-proofs is complete for \(\mathsf {MALL} {\mathsf {w} }\).

This immediately means that \(\mathsf {F} \mathsf {MALL} \) (\(\mathsf {F} \mathsf {MALL} {\mathsf {w} }\)), as well as its focussed and co-focussed subsystems, is also complete for \(\mathsf {MALL} \) (resp. \(\mathsf {MALL} {\mathsf {w} }\)). The proof of Theorem 13 follows routinely from any other completeness proof for focussed \(\mathsf {MALL} \), e.g. [1, 14]. The only change in our presentation is in the organisation of phases, for which we may think of bi-focussed proofs as certain annotated focussed proofs.

To aid our exposition, we will sometimes use a ‘two-sided’ notation and extra connectives so that the intended semantics of sequents are clearer. Strictly speaking, this is just a shorthand for one-sided sequents: the calculi defined in Figs. 1 and 2 are the formal systems we are studying.

Remark 14

(Two-sided notation) We write \(\varGamma \vdash \varDelta \) as shorthand for the sequent \(\vdash \overline{\varGamma }, \varDelta \), where \(\overline{\varGamma }\) is \(\{ \overline{A} : A \in \varGamma \}\). We extend this notation to sequents with \(\Uparrow \) or \(\Downarrow \) symbols in the natural way, writing \(\varGamma \Uparrow \varDelta \vdash \varSigma \Uparrow \varPi \) for \(\vdash \overline{\varGamma }, \varSigma \Uparrow \overline{\varDelta }, \varPi \) and \(\varGamma \Downarrow \varDelta \vdash \varSigma \Downarrow \varPi \) for \(\vdash \overline{\varGamma }, \varSigma \Downarrow \overline{\varDelta }, \varPi \). In all cases, (co-)foci are always written to the right of \(\Downarrow \) or \(\Uparrow \).

We write \(A \multimap B \) as shorthand for the formula \(\overline{A} \parr B\), and \(A \multimap ^+B\) as shorthand for the formula \(\overline{A} \oplus B\). Sometimes we will write, e.g., a step,

which, by definition, corresponds to a correct application of in \(\mathsf {F} \mathsf {MALL} (\mathsf {w} )\).

4 An Encoding From \(\mathsf {CPL} 2\) to \(\mathsf {MALL} {\mathsf {w} }\)

In this section we present an encoding of true QBFs into \(\mathsf {MALL} {\mathsf {w} }\). (We will later adapt this into an encoding into \(\mathsf {MALL} \) in Sect. 7.) True QBFs were also used for the original proof that \(\mathsf {MALL} \) is \(\mathbf {PSPACE}\)-complete [17, 18], though our encoding differs from theirs and leads to a more refined result, cf. Sect. 6.

This section mostly follows Sect. 4 from [9], except with some further details in proofs and the exposition. Henceforth we assume that all QBFs are in prenex normal form and free of truth constants \(\mathsf {f} \) and \( \mathsf {t} \) (e.g. by Eqn. 1).

4.1 Positive and Negative Encodings of Quantifier-Free Evaluation

The base cases of our translation from QBFs to \(\mathsf {MALL} {\mathsf {w} }\) will be quantifier-free Boolean formula evaluation. This is naturally a deterministic computation, being polynomial-time computable. (In fact, Boolean formula evaluation is known to be \(\mathbf {ALOGTIME}\)-complete [2].) However one issue is that this determinism cannot be seen from the point of view of \(\mathsf {MALL} {\mathsf {w} }\), since the only deterministic connective (\(\parr \), on the right) is not expressive enough to encode evaluation.

Nonetheless we are able to circumvent this problem since \(\mathsf {MALL} {\mathsf {w} }\) is at least able to see that quantifier-free evaluation is in \(\mathbf {NP}\cap \mathbf {coNP}\), via a pair of corresponding encodings. For non-base levels of \(\mathbf {PH}\) this is morally the same as being deterministic, as we will see more formally over the course of this section.

Definition 15

(Positive and negative encodings) Let \(\varphi \) be a quantifier-free Boolean formula. We define:

  • \(\varphi ^{-}\) is the result of replacing every \(\vee \) in \(\varphi \) by \(\parr \) and every \(\wedge \) in \(\varphi \) by \(\mathbin {\& }\).

  • \(\varphi ^+\) is the result of replacing every \(\vee \) in \(\varphi \) by \(\oplus \) and every \(\wedge \) in \(\varphi \) by \(\otimes \).

For an assignment \(\alpha \subseteq \mathsf {Var} \) and a list of variables \(\mathbf {x} = (x_1 , \dots , x_k)\), we write \(\alpha (\mathbf {x})\) for the cedent \(\{ x_i : x_i \in \alpha , i\le k \} \cup \{ \overline{x}_i : x_i \notin \alpha , i\le k \}\). We write \(\alpha ^n(\mathbf {x})\) for the cedent consisting of n copies of each literal in \(\alpha (\mathbf {x})\).
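The two encodings and the cedents \(\alpha ^n(\mathbf {x})\) are directly implementable. The sketch below assumes that \(\varphi ^{-}\) uses the negative connectives (tagged `'parr'`, `'with'`) and \(\varphi ^+\) the positive ones (`'plus'`, `'tensor'`); the tuple representation and all names are ad hoc choices for illustration:

```python
NEG = {'or': 'parr', 'and': 'with'}    # phi^-: invertible connectives
POS = {'or': 'plus', 'and': 'tensor'}  # phi^+: non-invertible connectives

def encode(phi, table):
    """Translate a constant-free, quantifier-free Boolean formula."""
    tag = phi[0]
    if tag in ('var', 'nvar'):  # literals are left unchanged
        return phi
    return (table[tag], encode(phi[1], table), encode(phi[2], table))

def size(phi):
    """|phi|: the number of literal occurrences."""
    return 1 if phi[0] in ('var', 'nvar') else size(phi[1]) + size(phi[2])

def cedent(alpha, xs, n=1):
    """alpha^n(x): n copies of each literal that alpha fixes on xs."""
    lits = [('var', x) if x in alpha else ('nvar', x) for x in xs]
    return lits * n
```

Proposition 16 below then relates \(\alpha \vDash \varphi \) to provability of the corresponding sequents, with `size` supplying the bound \(n \ge |\varphi |\) needed for the positive encoding.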

Proposition 16

Let \(\varphi \) be a quantifier-free Boolean formula with free variables \(\mathbf {x}\) and let \(\alpha \) be an assignment. For \(n \ge |\varphi |\), the following are equivalent:

  1. \(\alpha \vDash \varphi \).

  2. \(\mathsf {MALL} {\mathsf {w} }\) proves \(\alpha (\mathbf {x}) \vdash \varphi ^{-}\).

  3. \(\mathsf {MALL} {\mathsf {w} }\) proves \(\alpha ^n(\mathbf {x}) \vdash \varphi ^+\).

Proof

\(2\implies 1\) and \(3\implies 1\) are immediate from the ‘soundness’ of \(\mathsf {MALL} {\mathsf {w} }\) with respect to classical logic, by interpreting \(\otimes \) or \(\mathbin {\& }\) as \(\wedge \) and \(\parr \) or \(\oplus \) as \(\vee \).

Intuitively, \(1\implies 2\) follows directly from the invertibility of the \(\parr \) and \(\mathbin {\& }\) rules, while for \(1\implies 3\) we may appeal to the usual properties of satisfaction while controlling linearity appropriately. Formally we prove the following more general statements:

  • For any multiset \(\varLambda \) of quantifier-free Boolean formulas, if \(\alpha \vDash \bigvee \varLambda \) then \(\mathsf {MALL} {\mathsf {w} }\) proves \(\alpha (\mathbf {x} ) \vdash \varLambda ^{-}\), where \(\varLambda ^{-}\) is the \(\mathsf {MALL} {\mathsf {w} }\) cedent \(\{ \varphi ^{-} : \varphi \in \varLambda \}\).

  • For any quantifier-free Boolean formula \(\varphi \) with \(|\varphi |\le n\), if \(\alpha \vDash \varphi \) then \(\mathsf {MALL} {\mathsf {w} }\) proves \(\alpha ^n(\mathbf {x}) \vdash \varphi ^+\).

We proceed by induction on the number of connectives in \(\varLambda \) or \(\varphi \). The base case is simple (relying on weakening) and the inductive cases are as follows,

where:

  • \(i = 0\) or \(i=1\), depending on whether \(\alpha \vDash \varphi _0\) or \(\alpha \vDash \varphi _1\), respectively; and,

  • l and m are chosen so that \(l \ge |\varphi |\) and \(m\ge |\psi |\); and,

  • the derivations marked \( IH \) are obtained from the inductive hypothesis.

\(\square \)

4.2 Encoding Quantifiers in \(\mathsf {MALL} {\mathsf {w} }\)

As we said before, we do not follow the ‘locks-and-keys’ approach of [17, 18]. Instead we follow a similar approach to Statman’s proof that intuitionistic propositional logic is \(\mathbf {PSPACE}\)-hard [25], adapted to minimise proof search complexity.

The basic idea is that we would like to encode quantifiers as follows:

(3)

The issue is that such a naive approach would induce an exponential blowup, due to the two occurrences of \(\varphi \) in each line above. This idea was considered by Statman in [25], for intuitionistic propositional logic, where he avoided the blowup by using Tseitin extension variables, essentially fresh variables used to abbreviate complex formulas, e.g. \((x \equiv \varphi )\). The issue is that this can considerably complicate the structure of proofs, since, in order to access the abbreviated formula, we must pass both a positive and negative phase induced by \(\equiv \).

Instead, we use an observation from [8] that \(\varphi \) occurs only positively in (3) above, and so we only need one direction of Tseitin extension. Doing this carefully will allow us to control the structure of proofs in a way that is consistent with the alternation complexity of the initial QBF, as we will see later.

Definition 17

(\(\mathsf {CPL} 2\) to \(\mathsf {MALL} {\mathsf {w} }\)) Given a QBF \(\varphi = Q_k x_k . \cdots . Q_1 x_1 . \varphi _0\) with \(|\varphi _0|=n\) and all quantifiers indicated, we define \([\varphi ]\) by induction on k as follows,

where \(y_k\) is always fresh.

Lemma 18

Let \(\varphi (\mathbf {x})\) be a QBF with all free variables displayed and matrix \(\varphi _0\), with \(|\varphi _0| =n \). Then \(\alpha \vDash \varphi \) if and only if \(\mathsf {MALL} {\mathsf {w} }\) proves \(\alpha ^n (\mathbf {x}) \vdash \mathbf {y}, [\varphi ]\) for any assignment \(\alpha \) and any \(\mathbf {y}\) disjoint from \(\mathbf {x}\).

Fig. 3

Proof of \(\exists \) case for left-right direction of Lemma 18

Fig. 4

Proof of \(\forall \) case for left-right direction of Lemma 18

Proof

We proceed by induction on the number of quantifiers in \(\varphi \). For the base case, when \(\varphi \) is quantifier-free, we appeal to Prop. 16. The left-right direction follows directly by weakening (cf. Prop. 10), while the right-left direction follows after observing that \(\mathbf {y}\) does not occur in \([\varphi ]\) or \(\alpha ^n(\mathbf {x})\); thus \(\mathbf {y} \) may be deleted from a proof (along with its descendants) while preserving correctness.

For the inductive step, in the left-right direction we give appropriate bi-focussed proofs in Figs. 3 and 4, where: \(\pm x\) in Fig. 3 is chosen to be x if \(x\in \alpha \) and \(\overline{x} \) otherwise; the derivations marked \( IH \) are obtained by the inductive hypothesis; and the derivation marked \(\dots \) in Fig. 4 is analogous to the one on the left of it.Footnote 4

For the right-left direction, we need only consider the other possibilities that could occur during bi-focussed proof search, by the focussing theorem, Theorem 13. For the \(\exists \) case, bottom-up, one could have chosen to first decide on \([\varphi ] \multimap y\) in the antecedent. The associated \( \multimap _l\) step would have to send the formula to the right premiss (for y), since otherwise every variable occurrence in that premiss would be distinct and there would be no way to correctly finish proof search. Thus, possibly after weakening, we may apply the inductive hypothesis to the left premiss (for \([\varphi ]\)). A similar analysis of the upper \( \multimap _l\) step in Fig. 3 means that any other split will allow us to appeal to the inductive hypothesis after weakening. For the \(\forall \) case the argument is much simpler, since no matter which order we ‘co-decide’, we will end up with the same leaves. (This is actually exemplary of the more general phenomenon that invertible phases of rules are ‘confluent’, cf. [1, 5, 16].) In particular, \(\with \)-steps may be permuted as follows:

\(\square \)

Theorem 19

A closed QBF \(\varphi \) is true if and only if \(\mathsf {MALL} {\mathsf {w} }\) proves \([\varphi ]\).

Proof

Follows immediately from Lemma 18, setting \(\mathbf {y} = \varnothing \). \(\square \)
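For reference, the semantic side of this correspondence — truth of a closed QBF — is decided by the usual recursion on the quantifier prefix. The following sketch is ours (formulas as nested tuples), purely illustrative of the \(\mathsf {CPL} 2\) side of the encoding:

```python
# Evaluating a QBF by recursion; alpha is the set of variables assigned true.
# Existential quantifiers give nondeterministic (or-) branching, universal
# quantifiers co-nondeterministic (and-) branching, mirroring the alternation
# classes discussed in the text.

def qbf_true(phi, alpha=frozenset()):
    if isinstance(phi, str):              # a propositional variable
        return phi in alpha
    op = phi[0]
    if op == "exists":
        return qbf_true(phi[2], alpha | {phi[1]}) or qbf_true(phi[2], alpha - {phi[1]})
    if op == "forall":
        return qbf_true(phi[2], alpha | {phi[1]}) and qbf_true(phi[2], alpha - {phi[1]})
    if op == "not":
        return not qbf_true(phi[1], alpha)
    if op == "and":
        return all(qbf_true(p, alpha) for p in phi[1:])
    if op == "or":
        return any(qbf_true(p, alpha) for p in phi[1:])
    raise ValueError(op)
```

For example, \(\exists x. \forall y. (x \vee \lnot y)\) evaluates to true, witnessed by \(x \mapsto 1\).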

5 Focussed Proof Search as Alternating Time Predicates

In this section we show how to express focussed proof search as an alternating polynomial-time predicate that will later allow us to calibrate the complexity of proof search with levels of the QBF and polynomial hierarchies. The notions we develop apply equally to either \(\mathsf {MALL} \) or \(\mathsf {MALL} {\mathsf {w} }\).

This section mostly follows Sect. 5 of [9] except that, as well as further general details, we include a proof of Theorem 21 (essentially Theorem 20 in [9]).

We will now introduce ‘provability predicates’ that delineate the complexity of proof search in a similar way to the QBF and polynomial hierarchies we presented earlier. Recall the notions of deterministic, nondeterministic and co-nondeterministic rules from Dfn. 11, cf. Fig. 2.

Definition 20

(Focussing hierarchy) A cedent \(\varGamma \) of \(\mathsf {MALL} (\mathsf {w} )\) is:

  • \(\varSigma ^f_0\)-provable, equivalently \(\varPi ^f_0\)-provable, if \(\vdash \varGamma \) is provable using only deterministic rules.

  • \(\varSigma ^f_{k+1}\)-provable if there is a derivation of \(\vdash \varGamma \), using only deterministic and nondeterministic rules, from sequents \(\vdash \varGamma _i\) which are \(\varPi ^f_k\)-provable.

  • \(\varPi ^f_{k+1}\)-provable if every maximal path from \(\vdash \varGamma \), bottom-up, through deterministic and co-nondeterministic rules ends at a \(\varSigma ^f_k\)-provable sequent.

We sometimes simply say “\(\varGamma \) is \(\varSigma ^f_k\)” or even “\(\varGamma \in \varSigma ^f_k\)” if \(\varGamma \) is \(\varSigma ^f_k\)-provable.

The definition above is robust under the choice of multi-focussed, (co-)focussed or bi-focussed proof systems: while the number of \(D\) or \({\bar{D}}\) steps may increase, the number of alternations of nondeterministic and co-nondeterministic phases is the same. This robustness will also apply to other concepts introduced in this section.

From the definition it is not hard to see that we have a natural correspondence between the focussing hierarchy and the other hierarchies we have discussed:

Theorem 21

For \(k\ge 0\), we have the following:

  1. \(\varPi ^f_k\)-provability is computable in \(\varPi ^p_k\).

  2. \(\varSigma ^f_k\)-provability is computable in \(\varSigma ^p_k\).

An analogous result has been presented in previous work [8]. An interesting point is that, for the \(\otimes \) rule, even though there are two premisses, the rule is context-splitting, and so a nondeterministic machine may simply split into two parallel threads with no blowup in complexity.

Proof of Theorem 21

We proceed by induction on k.

In the base case, a cedent is \(\varSigma ^f_0\)-provable (equivalently \(\varPi ^f_0\)-provable) just if it has a proof using only deterministic rules. Upon inspection of Fig. 2, we notice that it does not matter which order we apply deterministic rules, bottom-up, since maximal application will always lead to the same sequent at the top. This follows from a simple rule permutation argument:

Thus, since deterministic rules must terminate after a linear number of steps (bottom-up), we may verify deterministic-provability by simply applying deterministic steps maximally (in any order bottom-up) and verifying that the end result is a correct initial sequent. Thus we indeed have that \(\varSigma ^f_0\)-provability (equivalently \(\varPi ^f_0\)-provability) is computable in \({\mathbf {P}}= \varSigma ^p_0 = \varPi ^p_0\).
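The base-case procedure above can be pictured as follows. This is a generic sketch of our own (assuming, for simplicity, that each deterministic step has a single premiss, as the confluence argument suggests), not an implementation of the rules of Fig. 2:

```python
# Sigma^f_0 / Pi^f_0 check: saturate under deterministic rules (in any
# order, by confluence), then test whether the result is an initial sequent.

def det_saturate(sequent, det_step):
    """Apply deterministic steps until none applies; det_step returns the
    unique premiss of some applicable deterministic rule, or None."""
    while True:
        nxt = det_step(sequent)
        if nxt is None:
            return sequent
        sequent = nxt

def sigma0_provable(sequent, det_step, is_initial):
    """Deterministic provability: saturation ends in an initial sequent."""
    return is_initial(det_saturate(sequent, det_step))
```

Since each deterministic step strictly reduces the sequent's connective count, saturation takes linearly many steps, giving the polynomial-time bound claimed above.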

For the inductive step for 1, a cedent \(\varGamma \) is not \(\varPi ^f_{k+1}\)-provable just if there is some maximal branch of co-nondeterministic steps applied to \(\varGamma \), bottom-up, that terminates in a sequent \(\varGamma '\) that is not \(\varSigma ^f_k\)-provable. Any such branch has polynomial-size (by inspection of the rules), and we have from the inductive hypothesis that \(\varSigma ^f_k\)-provability is computable in \(\varSigma ^p_k\). Thus we have that non-\(\varPi ^f_{k+1}\)-provability is computable in \(\varSigma ^p_{k+1}\) and so, by Dfn. 5, \(\varPi ^f_{k+1}\)-provability is computable in \(\varPi ^p_{k+1}\).

For the inductive step for 2, a cedent \(\varGamma \) is \(\varSigma ^f_{k+1}\)-provable just if there is a derivation of the form,

using only deterministic and nondeterministic rules such that each \(\varGamma _i\) is \(\varPi ^f_k\)-provable. Notice that such \(\varPhi \) has polynomial size in \(|\varGamma |\) with \(\sum _{i=1}^n |\varGamma _i| \le |\varGamma |\), by inspection of the (non)deterministic rules in Fig. 2. Thus, to check that \(\varGamma \) is \(\varSigma ^f_{k+1}\)-provable we need only guess the appropriate derivation \(\varPhi \) and sequents \(\varGamma _1, \dots , \varGamma _n\), and then check that each \(\varGamma _i\) is \(\varPi ^f_k\)-provable. By the inductive hypothesis, we have that \(\varPi ^f_k\)-provability is a \(\varPi ^p_k\) property, so we may check on a \(\varPi ^p_k\)-machine that all these \(\varGamma _i\)s are actually \(\varPi ^f_k\)-provable in time \(\sum _{i=1}^n |\varGamma _i|^{O(1)} \le |\varGamma |^{O(1)}\). Thus we have that \(\varSigma ^f_{k+1}\)-provability is indeed computable in \(\varSigma ^p_{k+1}\).

\(\square \)

Corollary 22

For \(k\ge 1\), we have the following:

  1. There are \(\varSigma ^q_k\) formulas \( \varSigma ^f_k\text {-}\mathsf {Prov} _n\), constructible in time polynomial in \(n\in {\mathbb {N}}\), computing \(\varSigma ^f_k\)-provability on all formulas A s.t. \(|A|=n\).

  2. There are \(\varPi ^q_k\) formulas \(\varPi ^f_k\text {-}\mathsf {Prov} _n\), constructible in time polynomial in \(n\in {\mathbb {N}}\), computing \(\varPi ^f_k\)-provability on all formulas A s.t. \(|A|=n\).

Proof

Follows immediately from Theorem 21 under Theorem 7. \(\square \)

We now give a slightly different way to view the focussing hierarchy, based on a more directly calculable measure of cedents that is similar to the notions of ‘decide depth’ and ‘release depth’ found in other works, e.g. [22]. We will use (a variation of) this to eventually formulate our encoding from \(\mathsf {MALL} {\mathsf {w} }\) to \(\mathsf {CPL} 2\) in the next section.

Definition 23

((Co-)nondeterministic complexity) Let \(\varPhi \) be a \(\mathsf {F} \mathsf {MALL} (\mathsf {w} )\) proof. We define the following:

  • The nondeterministic complexity of \(\varPhi \), written \(\sigma (\varPhi )\), is the maximum number of alternations, bottom-up, between \(D\) and \({\bar{D}}\) steps in a branch through \(\varPhi \), setting \(\sigma (\varPhi ) = 1 \) if \(\varPhi \) has only \(D\) steps.

  • The co-nondeterministic complexity of \(\varPhi \), written \(\pi (\varPhi )\), is the maximum number of alternations, bottom-up, between \(D\) and \({\bar{D}}\) steps in a branch through \(\varPhi \), setting \(\pi (\varPhi ) = 1\) if \(\varPhi \) has only \({\bar{D}}\) steps.

For a cedent \(\varGamma \) we further define the following:

  • \(\sigma ( \varGamma )\) is the least \(k\in {\mathbb {N}}\) s.t. there is a \(\mathsf {F} \mathsf {MALL} (\mathsf {w} )\) proof \(\varPhi \) of \(\vdash \varGamma \) with \(\sigma (\varPhi )= k\).

  • \(\pi (\varGamma )\) is the least \(k\in {\mathbb {N}}\) s.t. there is a \(\mathsf {F} \mathsf {MALL} (\mathsf {w} )\) proof \(\varPhi \) of \(\vdash \varGamma \) with \(\pi (\varPhi ) = k\).
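One plausible way to compute these measures on a concrete proof tree is sketched below. The representation and the treatment of boundary cases are our own reading of the definition: deterministic steps are ignored, and a branch whose first phase is of the ‘wrong’ kind costs one extra alternation.

```python
# Computing sigma(Phi) and pi(Phi) for a proof tree whose inference steps
# are tagged "D" (decide), "Dbar" (co-decide) or "det" (deterministic).

class Node:
    def __init__(self, kind, children=()):
        self.kind = kind          # "D", "Dbar" or "det"
        self.children = list(children)

def branches(phi):
    """All root-to-leaf sequences of step kinds (bottom-up branches)."""
    if not phi.children:
        return [[phi.kind]]
    return [[phi.kind] + b for c in phi.children for b in branches(c)]

def phases(branch, first):
    """Collapse a branch into maximal D / Dbar phases, ignoring deterministic
    steps; count phases, adding one if the branch starts with the wrong kind."""
    blocks = []
    for k in branch:
        if k == "det":
            continue
        if not blocks or blocks[-1] != k:
            blocks.append(k)
    if not blocks:
        return 0
    return len(blocks) if blocks[0] == first else len(blocks) + 1

def sigma(phi):  # nondeterministic complexity: phases counted from "D"
    return max(phases(b, "D") for b in branches(phi))

def pi(phi):     # co-nondeterministic complexity: phases counted from "Dbar"
    return max(phases(b, "Dbar") for b in branches(phi))
```

On this reading a purely deterministic proof has \(\sigma = \pi = 0\), and a proof with a single \(D\) phase followed by a \({\bar{D}}\) phase has \(\sigma = 2\), matching \(\varSigma ^f_2\)-provability under Prop. 24.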

Putting together the results and notions of this section, we have:

Proposition 24

Let \(\varGamma \) be a cedent and \(k \ge 0\). We have the following:

  1. \(\varGamma \) is \(\varSigma ^f_k\)-provable if and only if \(\sigma (\varGamma )\le k\).

  2. \(\varGamma \) is \(\varPi ^f_k\)-provable if and only if \(\pi (\varGamma )\le k\).

6 An ‘Inverse’ Encoding from \(\mathsf {MALL} {\mathsf {w} }\) into \(\mathsf {CPL} 2\)

In this section we will use the ideas of the previous section to give an explicit encoding from \(\mathsf {MALL} {\mathsf {w} }\) to \(\mathsf {CPL} 2\), i.e. a polynomial-time mapping from \(\mathsf {MALL} {\mathsf {w} }\)-formulas to QBFs whose restriction to theorems has image in \(\mathsf {CPL} 2\). Moreover, we will show that this encoding acts as an ‘inverse’ to the one we gave in Sect. 4, and finally identify natural fragments of \(\mathsf {MALL} {\mathsf {w} }\) complete for each level of \(\mathbf {PH}\).

This section mostly follows Sect. 6 of [9], except that we give significantly more proof details.

6.1 Approximating (Co-)nondeterministic Complexity

The nondeterministic and co-nondeterministic complexities \(\sigma \) and \(\pi \) we introduced in the previous section do not give us a bona fide encoding from \(\mathsf {MALL} {\mathsf {w} }\) to true QBFs since they are hard to compute. Instead we give an ‘overestimate’ here that will suffice for the encodings we are after. This overestimate will be parametrised by some enumeration of all formulas,Footnote 5 which will drive the possible choices during proof search. However we will later show that the choice of this enumeration is irrelevant, meaning that the approximation can be flexibly calculated and is in fact polynomial-time computable.

Another option would have been, rather than taking the ‘least’ formula in a sequent under some enumeration, to view the sequent as a list in a calculus with an explicit exchange rule. We avoided this in the interest of having a terminating proof system.

Throughout, we will identify enumerations with total orders in the natural way.

Definition 25

(Approximating the complexity of a sequent) Let \(\prec \) be a total order on all \(\mathsf {MALL} (\mathsf {w} )\) formulas. We define the functions \(\lceil \sigma \rceil _\prec \) and \(\lceil \pi \rceil _\prec \) on sequents in Fig. 5.Footnote 6

Fig. 5

Approximating (co-)nondeterministic complexities

Proposition 26

(Confluence) For any two total orders \(\prec \) and \(\prec '\) on \(\mathsf {MALL} (\mathsf {w} )\) formulas, we have that \(\lceil \sigma \rceil _\prec (\varGamma ) = \lceil \sigma \rceil _{\prec '} (\varGamma )\) and \(\lceil \pi \rceil _\prec (\varGamma ) = \lceil \pi \rceil _{\prec '} (\varGamma )\).

To prove this we give what is essentially a confluence argument for terminating relations, though we avoid formal rewriting machinery in order to keep the presentation self-contained.

Proof of Prop. 26

We proceed by induction on the number of connectives in \(\varGamma \). The base case, when \(\varGamma \) consists of only atomic formulas, is trivial, so we consider the inductive steps. When invoking the inductive hypothesis, we may freely suppress the subscripts \(\prec \) or \(\prec '\).

Suppose that, at some point along the definition of the approximations, \(\prec \) and \(\prec '\) disagree on what the least formula is. Namely, the \(\prec \)-least formula is \(P_0\) and the \(\prec '\)-least formula is \(P_1\). In this case we have the following situation:

$$\begin{aligned} \begin{array}{rcll} \lceil \sigma \rceil _\prec (\mathbf {a}, \mathbf {P}, P_0, P_1) &{} = &{} \lceil \sigma \rceil _\prec (\mathbf {a}, \mathbf {P} , P_0 \Downarrow P_1) &{} \\ &{} \vdots &{} &{} \\ &{} = &{} \delta _1 + \lceil \sigma \rceil _\prec (\mathbf {a}, \mathbf {P}, P_0, \mathbf {P}_1) &{} \\ &{} = &{} \delta _1 + \lceil \sigma \rceil (\mathbf {a}, \mathbf {P}, \mathbf {P}_1 \Downarrow P_0) &{} \text {by inductive hypothesis} \\ &{} \vdots &{} &{} \\ &{} = &{} \delta _0 + \delta _1 + \lceil \sigma \rceil (\mathbf {a}, \mathbf {P}, \mathbf {P}_1, \mathbf {P}_0) &{} \end{array} \end{aligned}$$

where each \(\delta _i\) is either 2 or 0 depending on whether a co-nondeterministic phase is entered or not during the bi-pole induced by \(P_i\). We will have a similar derivation for \(\prec '\), with only \(\delta _0\) and \(\delta _1\) swapped, whence we conclude by commutativity and associativity of addition.

The case where an M-formula is chosen in the definition of \(\lceil \pi \rceil \) is similar. \(\square \)

From now on, we may suppress the subscript \(\prec \) for the notions \(\lceil \sigma \rceil \) and \(\lceil \pi \rceil \).

Corollary 27

(Overapproximation) \(\sigma (\varGamma )\le \lceil \sigma \rceil (\varGamma )\) and \(\pi (\varGamma )\le \lceil \pi \rceil (\varGamma )\).

Proof

A proof with the minimal number of alternations between nondeterministic and co-nondeterministic phases will induce a strategy \(\prec \) on which we may evaluate \(\lceil \sigma \rceil \) and \(\lceil \pi \rceil \). These will be bounded below by their actual \(\sigma \) and \(\pi \) values. \(\square \)

Notice that the over-estimation for the \(\otimes \) case is particularly extreme: in the worst case we have that the entire context is copied to one branch. In fact we could optimise this somewhat, by only considering ‘plausible’ splittings, but it will not be necessary for our purposes.

Corollary 28

(Feasibility) \(\lceil \sigma \rceil \) and \(\lceil \pi \rceil \) are polynomial-time computable.

Proof

Clearly, \(\lceil \sigma \rceil _\prec \) and \(\lceil \pi \rceil _\prec \) are polynomial-time computable for any polynomial-time enumeration \(\prec \). So we may simply pick any polynomial-time enumeration of the formulas and appeal to Prop. 26. \(\square \)

6.2 Tightness of Approximations in the Image of \([\cdot ]\)

Since \([\varphi ]\) is always a relatively ‘balanced’ formula, we have that the overestimation just defined is in fact tight in the image of \([\cdot ]\) from \(\mathsf {MALL} {\mathsf {w} }\):

Proposition 29

(Tightness) For \(k \ge 1\) we have the following:

  1. If \(\varphi \in \varSigma ^q_k \setminus \varPi ^q_k\) then \(\lceil \sigma \rceil ([\varphi ]) = \sigma ([\varphi ]) = k\) and \(\lceil \pi \rceil ([\varphi ]) = \pi ([\varphi ]) = 1+k\).

  2. If \(\psi \in \varPi ^q_k \setminus \varSigma ^q_{k}\) then \(\lceil \pi \rceil ([\psi ]) = \pi ([\psi ]) = k\) and \(\lceil \sigma \rceil ([\psi ]) = \sigma ([\psi ]) = 1+k\).

Intuitively, the tightness of the approximation follows from the following two properties of the derivations in Figs. 3 and 4:

  • There is only one non-atomic formula per sequent, so the \(\otimes \)-overapproximation is not significant.

  • Weakening is only required on atomic formulas, so initial sequents need not be further broken down in the definitions of \(\lceil \sigma \rceil \) and \(\lceil \pi \rceil \).

Proof of Prop. 29

We have that \(\sigma ([\varphi ]) = k \) and \(\pi ([\psi ])= k\) already from the proof of Lemma 18, so it remains to show that the approximations are tight. For this we show by induction on the number of quantifiers in \(\varphi \) or \(\psi \) that, more generally:

  • \(\lceil \sigma \rceil ([\varphi ], \mathbf {a}) = k\) for any sequence \(\mathbf {a}\) of atomic formulas.

  • \(\lceil \pi \rceil ([\psi ], \mathbf {a}) = k\) for any sequence \(\mathbf {a}\) of atomic formulas.

When \(\varphi \) and \(\psi \) are quantifier-free, notice that \(\lceil \sigma \rceil (\varphi ^+, \mathbf {a}) = 1 = \lceil \pi \rceil (\varphi ^-, \mathbf {a})\), since \(\varphi ^+\) has only positive connectives and \(\varphi ^-\) has only negative connectives.

If \(\varphi \) is \(\exists x. \varphi '\) then \(\lceil \sigma \rceil ([\varphi '],\mathbf {b}) = k \), for any \(\mathbf {b}\), by the inductive hypothesis so:

We also have that , by a similar analysis.

If \(\psi \) is \(\forall x . \psi '\) then \(\lceil \pi \rceil ([\psi '], \mathbf {b}) = k\), for any \(\mathbf {b}\), by the inductive hypothesis so:

We also have that , by a similar analysis. \(\square \)

6.3 An Encoding from \(\mathsf {MALL} {\mathsf {w} }\) to QBFs and Main Results

From Cor. 22, let us henceforth fix appropriate QBFs \(\varSigma ^f_k\text {-}\mathsf {Prov} _n\) and \(\varPi ^f_k\text {-}\mathsf {Prov} _n\), for \(k\ge 1\), computing \(\varSigma ^f_k\)-provability and \(\varPi ^f_k\)-provability in \(\mathsf {F} \mathsf {MALL} {\mathsf {w} }\), respectively, for formulas of size n. We are now ready to define our ‘inverse’ encoding of \([\cdot ]\):

Definition 30

(\(\mathsf {MALL} {\mathsf {w} }\) to \(\mathsf {CPL} 2\)) For a \(\mathsf {MALL} {\mathsf {w} }\) formula A, we define:

$$\begin{aligned} \langle A \rangle \ {:=}\ {\left\{ \begin{array}{ll} \varSigma ^f_{k}\text {-}\mathsf {Prov} _{|A|}(A) &{} \text {if }k=\lceil \sigma \rceil (A)\le \lceil \pi \rceil (A) \\ \varPi ^f_{k}\text {-}\mathsf {Prov} _{|A|}(A) &{} \text {if }k=\lceil \pi \rceil (A) < \lceil \sigma \rceil (A) \end{array}\right. } \end{aligned}$$

Finally, we are able to present our main result:

Theorem 31

We have the following:

  1. \([\cdot ]\) is a polynomial-time encoding from \(\mathsf {CPL} 2\) to \(\mathsf {MALL} {\mathsf {w} }\).

  2. \(\langle \cdot \rangle \) is a polynomial-time encoding from \(\mathsf {MALL} {\mathsf {w} }\) to \(\mathsf {CPL} 2\).

  3. The composition \( \langle \cdot \rangle \circ [\cdot ] : \mathsf {CPL} 2\rightarrow \mathsf {CPL} 2\) preserves quantifier complexity, i.e., for \(k \ge 1\), it maps true \(\varSigma ^q_k\) (\(\varPi ^q_k\)) sentences to true \(\varSigma ^q_k\) (resp. \(\varPi ^q_k\)) sentences.

Proof

We already proved 1 in Theorem 19. 2 follows from the definitions of \(\varSigma ^f_k\text {-}\mathsf {Prov} \) and \(\varPi ^f_k\text {-}\mathsf {Prov} \) (cf. Cor. 22), under Prop. 24 and Cors. 27 and 28. Finally 3 then follows by tightness of the approximations \(\lceil \sigma \rceil \), \(\lceil \pi \rceil \) in the image of \([\cdot ]\), Prop. 29. \(\square \)

Consequently, we may identify polynomial-time recognisable subsets of \(\mathsf {MALL} {\mathsf {w} }\)-formulas whose theorems are complete for levels of the polynomial hierarchy:

Corollary 32

We have the following, for \(k\ge 1\):

  1. \(\{ A : \lceil \sigma \rceil (A) \le k \text { and } \mathsf {MALL} {\mathsf {w} }\text { proves } A \}\) is \(\varSigma ^p_k\)-complete.

  2. \(\{ A : \lceil \pi \rceil (A) \le k \text { and } \mathsf {MALL} {\mathsf {w} }\text { proves } A \}\) is \(\varPi ^p_k\)-complete.

7 Extending the Approach to (Non-affine) \(\mathsf {MALL} \)

It is natural to wonder whether a similar result to Theorem 31 could be obtained for \(\mathsf {MALL} \), i.e. without weakening. The reason we chose \(\mathsf {MALL} {\mathsf {w} }\) is that it allows for a robust and uniform approach that highlights the capacity of focussed systems to obtain tight alternating time bounds for logics, without too many extraneous technicalities. However, the same approach does indeed extend to \(\mathsf {MALL} \) with only local adaptations. We give the argument in this section.

This section is comprised of new material not present in [9].

7.1 Encoding Weakening in \(\mathsf {MALL} \)

There is a well-known embedding of \(\mathsf {MALL} {\mathsf {w} }\) into \(\mathsf {MALL} \) by recursively replacing every subformula A by \(A \oplus \bot \). However, doing this might considerably increase the alternation complexity of proof search, adding up to one alternation per subformula. Instead, we notice that we need only conduct this replacement on literals, since those are the only ones that are weakened in the proofs of Sect. 4. From here we realise that the consideration of formulae of the form \(a \oplus \bot \) (or variants thereof) may be delayed to the end of proof search. To formalise this appropriately, we first need the following notion:

Definition 33

(Weakened formulas) Let \(\varPhi \) be a \(\mathsf {MALL} {\mathsf {w} }\) proof whose initial \( id \)-sequents are \(\{ \varGamma _i, a_i, \overline{a}_i \}_{i<m}\) and whose initial 1-sequents are \(\{\varGamma _i, 1 \}_{m\le i<n}\). The weakened formulas of \(\varPhi \) form the set \(\varOmega {:=}\bigcup _{i<n} \varGamma _i\).

We identify elements of \(\varOmega \) in the above definition with subformula occurrences of the conclusion of \(\varPhi \) in the natural way. We have the following folklore result:

Lemma 34

(Weakening lemma) We have the following:

  1. \(\mathsf {MALL} \) proves .

  2. \(\mathsf {MALL} {\mathsf {w} }\) proves , with weakening only on A.

  3. Let A be a \(\mathsf {MALL} (\mathsf {w} )\) formula and \(\varOmega \) a set of subformula occurrences of A. There is a \(\mathsf {MALL} {\mathsf {w} }\) proof \(\varPhi \) of A with weakened formulas among \(\varOmega \) if and only if \(\mathsf {MALL} \) proves .

Proof

1 and 2 are given by the following derivations:

3 now follows immediately from 1 and 2 under the ‘deep inference’ property:

$$\begin{aligned} \textit{If } \mathsf {MALL} (\mathsf {w} )\textit{ proves } A(B) \textit{ and } B \vdash C\textit{, then it proves } A(C). \end{aligned}$$
(4)

This property is well known (see, e.g., [27]) and follows by a routine induction on the structure of A, in particular appealing to the cut-elimination property of \(\mathsf {MALL} (\mathsf {w} )\). For instance here are the cases when A is a or formula,

where the derivations marked \( IH \) are obtained from the inductive hypothesis. The cases when A is a or formula are similar to the two cases above. \(\square \)

7.2 Adapting the Translation \([\cdot ]\) for \(\mathsf {MALL} \)

It turns out that, in the arguments of Sect. 4, we only used weakenings on atomic formulae: notice that, in the proofs of Prop. 16 and of Lemma 18, the only weakenings were applied on literals, and all initial sequents had the form \(\mathbf {a}\).

Observation 35

The proof of Theorem 19 requires weakening only on literals.

This motivates the following definition:

Definition 36

For a \(\mathsf {MALL} (\mathsf {w} )\) formula A, write \(A'\) for the result of replacing every literal occurrence a with \(a \oplus \bot \).
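As a sketch (our own representation, assuming, as the \(\oplus \)-clauses of the next subsection suggest, that the replacement is \(a \mapsto a \oplus \bot \)):

```python
# A': replace every literal leaf a by ("oplus", a, "bot"); units are left
# alone and compound formulas are traversed recursively. Formulas are
# nested tuples, literals strings like "a" or "~a".

def prime(A):
    if A in ("1", "bot", "top", "0"):        # units, unchanged
        return A
    if isinstance(A, str):                   # a literal occurrence
        return ("oplus", A, "bot")
    op, *args = A
    return (op,) + tuple(prime(B) for B in args)
```

Note that the transformation is linear in the size of A, in contrast to the subformula-wise embedding discussed above.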

We now have the following immediately from Lemma 34:

Proposition 37

\(\mathsf {MALL} {\mathsf {w} }\) proves \([\varphi ]\) if and only if \(\mathsf {MALL} \) proves \([\varphi ]' \).

7.3 Dealing with Formulas Deterministically

Even though we have restricted our treatment of weakened formulas to only literals, these may still a priori increase the alternation complexity of proof search linearly under the encoding \(A'\). To avoid this, we will work with a certain normal form of proofs that delays consideration of formulas of the form \(a \oplus \bot \) until the end of bottom-up proof search. To enforce this we must slightly ‘hack’ the proof systems and complexity approximations previously introduced.

For generality, let us introduce new metavariables \(c, d\) etc. varying over \(\oplus \)-clauses containing at least one \(\bot \):

Our intention is to treat c-formulas much like atoms in proof search, in particular not decomposing them until the end. To this end, we will introduce a new focussed system \(\mathsf {F} \mathsf {MALL} '\) that enforces this within the rules.

Definition 38

(\(\mathsf {F} \mathsf {MALL} '\)) We temporarily redefine the metavariables M, N, O, P, Q so that only O is permitted to vary over c-formulas, i.e. if P or Q is a \(\oplus \)-formula then it must not be a c-formula. The proof system \(\mathsf {F} \mathsf {MALL} '\) is hence defined just as \(\mathsf {F} \mathsf {MALL} \) in Fig. 2, under this revision of metavariables, with the following exceptions:

  • \(\mathbf {a}\) in Fig. 2 is everywhere replaced by \(\mathbf {a}, \mathbf {c}\), i.e. \(\mathsf {F} \mathsf {MALL} '\) has the following rules,

    instead of their analogous versions written in Fig. 2.

  • \(\mathsf {F} \mathsf {MALL} '\) has the following additional initial sequents:

    where literals in parentheses must occur in their respective formulas. These new initial sequents are deterministic.

(Co-)focussed and bi-focussed proofs of \(\mathsf {F} \mathsf {MALL} '\) are defined analogously to Dfn. 12.

Proposition 39

(Bi-focussed) \(\mathsf {F} \mathsf {MALL} '\) is sound and complete for \(\mathsf {MALL} \).

Proof

Soundness is routine, with the new initial sequents proved using simple \(\oplus \) and \(\bot \) steps, along with the other initial rules.

For completeness, we proceed by induction on the size of a \(\mathsf {F} \mathsf {MALL} \) proof, by essentially a rule permutation argument. The critical cases are when a \(\mathsf {F} \mathsf {MALL} \) proof focusses on a c-formula, which we adapt as follows:

where,

  • \(c,\varPhi \) is obtained from \(\varPhi \) by inductively adding c to each premiss of an inference step in \(\varPhi \), bottom-up, except at \(\otimes \) steps, where we must only add c to one premiss, say the left one. This transformation preserves the local correctness of the proof, in particular since initial sequents are closed under addition of c-formulas.

  • a is a literal and \(\varPhi [c/a]\) is obtained from \(\varPhi \) by replacing every (indicated) occurrence of a by c. Since c contains a as a subformula, each initial sequent transformed in this way will again be an initial sequent.

All other cases are routine, simply mimicking the given \(\mathsf {F} \mathsf {MALL} \) proof. \(\square \)

7.4 Adapting the Translation \(\langle \cdot \rangle \) and Main Results

Finally we adapt the approximations of (co-)nondeterministic complexity to reflect the proof search dynamics of \(\mathsf {F} \mathsf {MALL} '\):

Definition 40

(Approximating alternating complexity in \(\mathsf {F} \mathsf {MALL} '\)) \(\lceil \sigma \rceil _\prec '\) and \(\lceil \pi \rceil _\prec '\) are defined exactly as \(\lceil \sigma \rceil _\prec \) and \(\lceil \pi \rceil _\prec \) in Fig. 5, under the metavariable conventions of Dfn. 38, with the following exception: \(\mathbf {a}\) and a are replaced everywhere by \(\mathbf {a}, \mathbf {c}\) and “a or c”, respectively. I.e. we have the following clauses,

instead of their analogous versions written in Fig. 5.

It is not hard to see that the results of Sect. 5 are applicable also to the notions \(\lceil \sigma \rceil '\) and \(\lceil \pi \rceil '\) developed here. In particular Prop. 26 holds also for \(\lceil \sigma \rceil '\) and \(\lceil \pi \rceil '\) and we may similarly omit the \(\prec \)-subscript henceforth. All together, this allows us to define a similar encoding from \(\mathsf {MALL} \) formulas to QBFs.

First, appealing to Cor. 22,Footnote 7 let us henceforth fix appropriate QBFs \(\varSigma ^f_k\text {-}\mathsf {Prov} _n'\) and \(\varPi ^f_k\text {-}\mathsf {Prov} _n'\), for \(k\ge 1\), computing \(\varSigma ^f_k\)-provability and \(\varPi ^f_k\)-provability in \(\mathsf {F} \mathsf {MALL} '\), respectively, for formulas of size n.

Definition 41

(\(\mathsf {MALL} \) to \(\mathsf {CPL} 2\)) For a \(\mathsf {MALL} \) formula A, we define:

We now have the following analogues of the main results of the previous section, proved by essentially the same arguments:

Theorem 42

We have the following:

  1. \([\cdot ]'\) is a polynomial-time encoding from \(\mathsf {CPL} 2\) to \(\mathsf {MALL} \).

  2. \(\langle \cdot \rangle '\) is a polynomial-time encoding from \(\mathsf {MALL} \) to \(\mathsf {CPL} 2\).

  3. The composition \( \langle \cdot \rangle ' \circ [\cdot ]' : \mathsf {CPL} 2\rightarrow \mathsf {CPL} 2\) preserves quantifier complexity, i.e. for \(k\ge 1\), it maps true \(\varSigma ^q_k\) (\(\varPi ^q_k\)) sentences to true \(\varSigma ^q_k\) (resp. \(\varPi ^q_k\)) sentences.

Corollary 43

We have the following, for \(k\ge 1\):

  1. \(\{ A : \lceil \sigma \rceil ' (A) \le k\text { and } \mathsf {MALL} \text { proves } A \}\) is \(\varSigma ^p_k\)-complete.

  2. \(\{ A : \lceil \pi \rceil ' (A) \le k \text { and } \mathsf {MALL} \text { proves } A \}\) is \(\varPi ^p_k\)-complete.

8 Conclusions and Further Remarks

We gave a refined presentation of (multi-)focussed systems for multiplicative-additive linear logic, and its affine variant, that accounts for deterministic computations in proof search, cf. Sect. 3. We showed that it admits rather controlled normal forms in the form of bi-focussed proofs, and highlighted a duality between focussing and ‘co-focussing’ that emerges thanks to this presentation. The main reason for using focussed systems such as ours was to better reflect the alternating time complexity of bottom-up proof search, cf. Sect. 5. We justified the accuracy of these bounds by showing that natural measures of proof search complexity for \(\mathsf {F} \mathsf {MALL} {\mathsf {w} }\) tightly delineate the theorems of \(\mathsf {MALL} {\mathsf {w} }\) according to associated levels of the polynomial hierarchy, cf. Sects. 4 and 6. We were also able to obtain a similar delineation for \(\mathsf {MALL} \) too, cf. Sect. 7. These results exemplify how the capacity of proof search to provide optimal decision procedures for logics extends to important subclasses of \(\mathbf {PSPACE}\). As far as we know, this is the first time such an investigation has been carried out.

Our presentation of \(\mathsf {F} \mathsf {MALL} (\mathsf {w} )\) should extend to logics with quantifiers and exponentials, following traditional approaches to focussed linear logic, cf. [1, 14]. It would be interesting to see what could be said about the complexity of proof search for such logics. For instance, the usual \(\forall \) rule becomes deterministic in our analysis, since it does not branch:

As a result, the alternation complexity of proof search is not affected by the \(\forall \)-rule, but rather by interactions between positive connectives, including \(\exists \), and negative connectives such as \(\with \). Interpreting this over a classical setting could even give us new ways to delineate true QBFs according to the polynomial hierarchy, determined by the alternation of \(\exists \) and propositional connectives rather than \(\forall \). One issue here is that witnessing \(\exists \) steps seems to significantly impact the complexity of proof search. Nonetheless this would be an interesting line of future research.

Much of the literature on logical frameworks via focussed systems is based around the idea that an inference rule may be simulated by a ‘bi-pole’, i.e. a single alternation between an invertible and non-invertible phase of inference steps. However accounting for determinism might yield more refined simulations where, say, non-invertible rules are simulated by phases of deterministic and nondeterministic rules, but not co-nondeterministic ones. In particular this should be possible for standard translations between modal logic and first-order logic, cf. [19, 20].