Introduction

Many real-world complex systems can be modelled as graphs in which vertices represent system elements and edges pairwise interactions between those elements Newman (2018), Barabási (2016). This approach allows powerful tools from graph theory to be used in the analysis of complex systems in numerous domains—from technological networks such as power grids, the internet and world-wide-web to biological networks such as food webs and molecular interaction networks, and social networks such as those that arise in online social media—and has been tremendously successful in discerning important structural and dynamical properties of the complex systems they represent Newman (2003).

Notably, a number of features are common to many disparate real-world networks, and they have also been observed in classical random graph models, such as the Barabási–Albert model and the Watts–Strogatz model. Examples include the presence of highly connected ‘hub’ vertices Barabási and Albert (1999), over-representation of important sub-graphs or ‘motifs’ Milo et al. (2002) and the presence of local clustering Watts and Strogatz (1998). In recent years, it has also become clear that many real-world networks contain a large amount of structural redundancy (i.e. duplication of structural features), which, in turn, relates to the robustness and resilience of the underlying system.

Mathematically, the presence of structural redundancy is quantified by the graph automorphism group MacArthur et al. (2008), which identifies structurally equivalent vertices and edges. This allows tools from group theory to be used in network analysis and has seen a number of fruitful applications, most notably in studies of robustness and resilience, efficient communication, group consensus, anonymisation, compression and patterns of network collective dynamics such as synchronisation MacArthur et al. (2008), Sánchez-García (2020), Pecora et al. (2014), Klickstein et al. (2019), Wu et al. (2010).

Moreover, a powerful tool for studying the structural properties of graphs is spectral theory. Given a graph \({\varGamma }\) and a square matrix associated with \({\varGamma }\), such as its adjacency matrix A, its Kirchhoff Laplacian \({\varDelta }\) or its normalised Laplacian L, the spectrum of each of these operators, i.e. the multiset of its eigenvalues, is known to encode many important qualitative properties of \({\varGamma }\) Chung (1997), Brouwer and Haemers (2012). Spectral theory studies the properties that are encoded by the spectra of these operators, and it is widely used both in pure mathematics and in the applied sciences. Notably, for regular graphs, i.e. graphs in which all vertices have the same degree, the spectral properties of A, \({\varDelta }\) and L are equivalent, as their eigenvalues only differ by an additive or a multiplicative constant in this case. For general graphs, the spectral properties of the three matrices may differ slightly, although they are typically strongly related. Also, since A has both positive and negative eigenvalues while \({\varDelta }\) and L have non-negative eigenvalues, studying the spectral properties of the Laplacian matrices is often easier. Moreover, the eigenvalues of L are normalised with respect to the eigenvalues of \({\varDelta }\) and they are related to random walks on graphs; for these reasons, studying spectral theory from the point of view of the normalised Laplacian is often preferred, and this is the approach we will take here. We refer to Chung (1997), Brouwer and Haemers (2012) for two classical monographs on this subject.

However, graph theory-based analyses necessarily only consider system elements and their pairwise interactions. In many cases, higher-order interactions are also important and can play a significant part in system function Carlsson (2009). There is increasing interest in accounting for such higher-order structures, for example by encoding system structures either as simplicial complexes, which can be analysed using tools from algebraic topology, or, more generally, as hypergraphs. Both approaches have proven successful and are active areas of current research Zomorodian (2005), Jost and Mulas (2019), Klamt et al. (2009), Horak and Jost (2013).

The role of higher-order interactions is particularly important when considering systems of chemical reactions. For example, proteins typically perform their functions in cells by interacting physically to form chemical complexes. While protein–protein interaction networks enumerate possible pairwise interactions, they are not able to unambiguously capture the formation of higher-order complexes involving three or more proteins. More generally, biochemical reactions typically involve more than two reactants and/or products. Thus, complex systems of biochemical reactions are not well described using the language of graph theory. Yet, they can be well modelled using hypergraphs, which allow hyperedges involving more than two vertices.

Here we develop a general theory of automorphisms for oriented hypergraphs: a generalisation of classical hypergraphs with the additional structure that each vertex in a hyperedge is either an input or an output. Oriented hypergraphs were introduced in Shi (1992) and, as shown in Jost and Mulas (2019), they are a useful tool for the modelling of chemical reaction networks. The adjacency matrix and the Kirchhoff Laplacian for oriented hypergraphs were introduced in Reff and Rusnak (2012), as a generalisation of the classical ones for graphs. Moreover, the normalised Laplacian for oriented hypergraphs was introduced in Jost and Mulas (2019). The spectral properties of these operators, as well as possible applications, have been widely studied, see for instance Jost and Mulas (2019), Mulas et al. (2020), Mulas (2021), Mulas and Zhang (2021), [24], Mulas (2021), Reff (2014), Chen et al. (2015), Duttweiler and Reff (2019), Chen et al. (2018), Reff and Rusnak (2012), yet a general framework to study oriented hypergraph automorphisms is still lacking. As in the graph case, the spectral properties of these three operators are similar; the adjacency matrix has both positive and negative eigenvalues while the Laplacian matrices have non-negative eigenvalues, and the spectrum of L is normalised with respect to the spectrum of \({\varDelta }\). For this reason, we will focus on spectral properties of the normalised Laplacian matrix here.

The paper is structured as follows. In Sect. 2, we provide an overview of some required definitions related to oriented hypergraphs. In Sect. 3, we show how the classical theory of graph automorphisms can be extended to hypergraphs, and outline some key differences between graph and hypergraph automorphisms. In Sect. 4, we propose a further extension of this theory that takes hyperedge signs into account. We conclude with a discussion of the relevance of this general theory to systems of biochemical reactions.

Preliminary definitions

We start by introducing some preliminary definitions. We keep the set of definitions limited to those strictly needed for the new results in later sections.

Definition 1

(Shi 1992) An oriented hypergraph is a pair \({\varGamma }=(V,H)\), where V is a finite set of vertices and H is a set such that every element \(h \in H\) is a pair of disjoint subsets of vertices \(h=(h_{in},h_{out})\) (input and output), that is, \(h_{in}, h_{out} \in {\mathcal {P}}(V)\), where we write \({\mathcal {P}}(V)\) for the power set of V. The elements of H are called the oriented hyperedges (or, simply, hyperedges). Changing the orientation of a hyperedge h means exchanging its input and output, leading to the pair \((h_{out},h_{in})\). The vertices of a hyperedge \(h=(h_{in},h_{out})\) are the elements of \(h_{in} \cup h_{out} \subseteq V\). Two vertices \(i,j \in h\) are called co-oriented if \(i,j \in h_{in}\) or \(i, j \in h_{out}\), and anti-oriented otherwise.

A classical hypergraph can be seen as an oriented hypergraph if one forgets about the input–output structure. In this sense, oriented hypergraphs generalise the standard notion of hypergraphs Bretto (2013). To illustrate these ideas, Fig. 1 shows an oriented hypergraph with five vertices and two hyperedges.

Fig. 1

Example hypergraph. An oriented hypergraph with five vertices 1 to 5 and two hyperedges \(h_1\) and \(h_2\). The hyperedge \(h_1\) has 1 and 2 as inputs and 3 as output; the hyperedge \(h_2\) has 3 and 4 as inputs and 5 as output

Oriented hypergraphs offer a valid model for biochemical networks Jost and Mulas (2019). Each vertex may be thought of as a chemical substance and each hyperedge as a chemical reaction involving the substances that it contains as vertices (i.e. reactants and/or products of the reaction). The input–output structure then represents the reactant–product structure of chemical reactions.

Definition 2

(Reff and Rusnak 2012) The degree of a vertex i, denoted \(\deg (i)\), is the number of hyperedges containing i. The cardinality of a hyperedge h, denoted \({{\,\mathrm{card}\,}}(h)\), is the number of vertices in h.

For the rest of this article, let us fix such an oriented hypergraph \({\varGamma }=(V,H)\) on n vertices labelled \(1,2,\ldots ,n\) (that is, we assume \(V=\{1,2,\ldots ,n\}\)) and m hyperedges \(h_1,\ldots , h_m\). We also assume that \({\varGamma }\) has no vertices of degree zero, that is, every vertex belongs to at least one hyperedge. We define the following matrices associated with \({\varGamma }\).

Definition 3

(Jost and Mulas 2019) The \(n\times m\) incidence matrix of \({\varGamma }\) is \({\mathcal {I}}={\mathcal {I}}({\varGamma })=({\mathcal {I}}_{ih})_{i\in V, h\in H}\), where

$$\begin{aligned} {\mathcal {I}}_{ih}:={\left\{ \begin{array}{ll} 1 &{} \text { if }i\in h_{in}\\ -1 &{} \text { if }i\in h_{out}\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

We call \({\mathcal {I}}_{ih}\) the sign of vertex i in hyperedge h, and use the ‘\(+\)’ or ‘−’ symbols to represent non-zero signs in a graphical representation of a hypergraph (e.g. Fig. 1).

Definition 4

(Reff and Rusnak 2012) The \(n\times n\) diagonal degree matrix of \({\varGamma }\) is \(D=D({\varGamma })=(D_{ij})\), where

$$\begin{aligned} D_{ij}:={\left\{ \begin{array}{ll} \deg (i) &{} \text {if }i=j\\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Given vertices \(i, j \in V\), let us write \(\deg ^+(i,j)\) for the number of hyperedges in which i and j are co-oriented, and \(\deg ^-(i,j)\) for the number of hyperedges in which i and j are anti-oriented. Note that \(\deg (i)=\deg ^+(i,i)\), \(\deg ^-(i,i)=0\), and they are both symmetric functions: \(\deg ^\pm (i,j)=\deg ^\pm (j,i)\) for all \(i, j \in V\).

Definition 5

(Reff and Rusnak 2012) The \(n\times n\) adjacency matrix of \({\varGamma }\) is the symmetric matrix \(A=A({\varGamma })=(A_{ij})\), where \(A_{ii}=0\) for all i and

$$\begin{aligned} A_{ij}:= \deg ^-(i,j)-\deg ^+(i,j) \end{aligned}$$
(1)

for all \(i\ne j\).

Definition 6

(Reff and Rusnak 2012) The \(n\times n\) Kirchhoff Laplacian matrix of \({\varGamma }\) is \({\varDelta }={\varDelta }({\varGamma })=({\varDelta }_{ij})\), where \({\varDelta }=D-A\). That is,

$$\begin{aligned} {\varDelta }_{ij}:= \deg ^+(i,j)-\deg ^-(i,j) \end{aligned}$$
(2)

for all \(i,j\).

Definition 7

The \(n\times n\) normalised Laplacian matrix of \({\varGamma }\), \(L=L({\varGamma })=(L_{ij})\), is \(L=D^{-1}{\varDelta }= I - D^{-1}A \), where I is the \(n \times n\) identity matrix. (Note that D is invertible as we have removed all vertices of degree 0.) The entries of L are

$$\begin{aligned} L_{ij}:= \frac{\deg ^+(i,j)-\deg ^-(i,j)}{\deg (i)} \end{aligned}$$
(3)

for all \(i,j\).

The Kirchhoff Laplacian matrix \({\varDelta }\) is symmetric but the normalised Laplacian L is not. However, L is isospectral (meaning that it has the same eigenvalues, counted with multiplicity) to the symmetric matrix

$$\begin{aligned} {\mathcal {L}}:=D^{1/2}L D^{-1/2} \end{aligned}$$
(4)

(see e.g. (Mulas and Zhang 2021, Remark 2.14)) and thus has real eigenvalues. Note that the incidence matrix \({\mathcal {I}}\) uniquely determines the hypergraph, but, unlike graphs, this is not true for the adjacency or Laplacian matrices: two distinct hypergraphs may have the same adjacency, or Laplacian, matrix. To see this, consider the following simple example:

Example 1

Let \({\varGamma }=(V,H)\) and \({\varGamma }'=(V,H')\) be two hypergraphs with vertex set \(V=\{1,2,3\}\) and hyperedge sets \(H=\{h_1,h_2\}\) and \(H'=\{h'_1,h'_2\}\), where

  • \(h_1\) has 1 as input and 2 as output, and \(h_2\) has 1 and 2 as inputs and 3 as output;

  • \(h'_1\) has 1 as input and 3 as output, and \(h'_2\) has 2 as input and 3 as output.

These two hypergraphs are distinct (\({\varGamma }'\) is a graph but \({\varGamma }\) is not), but have the same adjacency matrix,

$$\begin{aligned} A=\begin{pmatrix} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} 1\\ 1 &{} 1 &{} 0 \end{pmatrix}. \end{aligned}$$

(The cancellation \(\deg ^-(1,2)-\deg ^+(1,2)=1-1=0\) for \({\varGamma }\) is undetected by this matrix.)
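To make these constructions concrete, the following sketch (in Python with NumPy; the helper name hypergraph_matrices is our own, not taken from any library) assembles the matrices of Definitions 4–7 from an incidence matrix, using the identity \(({\mathcal {I}}{\mathcal {I}}^\top )_{ij}=\deg ^+(i,j)-\deg ^-(i,j)={\varDelta }_{ij}\), and checks that the two hypergraphs of Example 1 indeed share the same adjacency matrix.

```python
import numpy as np

def hypergraph_matrices(inc):
    """Build D, A, Delta and L (Definitions 4-7) from an n x m incidence matrix
    with entries +1 (input), -1 (output) and 0 (vertex not in the hyperedge)."""
    inc = np.asarray(inc, dtype=float)
    # (inc @ inc.T)_{ij} = deg^+(i,j) - deg^-(i,j), which is exactly Delta_{ij};
    # on the diagonal it gives deg(i).
    Delta = inc @ inc.T
    D = np.diag(np.diag(Delta))
    A = D - Delta                      # A_{ij} = deg^-(i,j) - deg^+(i,j), A_{ii} = 0
    L = np.linalg.inv(D) @ Delta       # L = D^{-1} Delta; D is invertible (no degree-zero vertices)
    return D, A, Delta, L

# Example 1: Gamma has h1 = ({1},{2}) and h2 = ({1,2},{3});
# Gamma' has h1' = ({1},{3}) and h2' = ({2},{3}).  Rows are the vertices 1, 2, 3.
inc_gamma = [[1, 1],
             [-1, 1],
             [0, -1]]
inc_gamma_prime = [[1, 0],
                   [0, 1],
                   [-1, -1]]

_, A1, _, _ = hypergraph_matrices(inc_gamma)
_, A2, _, _ = hypergraph_matrices(inc_gamma_prime)
print(np.array_equal(A1, A2))   # True: distinct hypergraphs, identical adjacency matrix
```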

The terminology and matrices introduced so far generalise the corresponding concepts in graph theory. A simple graph \(G=(V,E)\) with a choice of edge orientations is the same as an oriented hypergraph \({\varGamma }=(V,H)\) with \(|h_{in}|=|h_{out}|=1\) for all \(h=(h_{in},h_{out})\in H\). In this case, the degree of a vertex in \({\varGamma }\) is the same as in G, \(\deg ^+(i,j)=0\) for all \(i \ne j\), and \(\deg ^-(i,j)=1\) if i and j are connected by an edge, and 0 otherwise. In particular, the degree, adjacency and Laplacian matrices for \({\varGamma }\) coincide with the usual definitions from graph theory for G.

Collectively, these results indicate that properties of hypergraphs may be encoded in matrix representations that have some similarities to those of graphs, as well as some important differences. In the following sections we will outline how structural hypergraph properties, in particular those related to redundancy, manifest in hypergraph spectra. In order to motivate these general results, we first recall some established results concerning the effect of various simple structural features of an oriented hypergraph on the spectrum of its normalised Laplacian Mulas and Zhang (2021), [24]. Some additional definitions, given below, are needed to understand these features.

Definition 8

The auxiliary graph of \({\varGamma }\), written \(G({\varGamma })\), is the graph with adjacency matrix \(A({\varGamma })\). This is an undirected, weighted graph with the same vertex set as \({\varGamma }\) and an edge between i and j weighted by \(A_{ij}\) whenever \(A_{ij} \ne 0\), and no such edge if \(A_{ij}=0\).

Definition 9

(Mulas and Zhang 2021) Two distinct vertices i and j are duplicate if the corresponding rows (equivalently, columns) of the adjacency matrix are the same, that is, if \(A_{ik}=A_{jk}\) (or, equivalently, \(A_{ki}=A_{kj}\)) for all \(k\in V\). In particular, \(A_{ij}=A_{ji}=A_{ii}=A_{jj}=0\).

Definition 10

([24]) Two distinct vertices i and j are twin if they belong to exactly the same set of hyperedges, with the same orientations, that is,

$$\begin{aligned} i \in h_{in} \iff j \in h_{in} \ \text { and }\ i \in h_{out} \iff j \in h_{out}, \end{aligned}$$

for all \(h=(h_{in},h_{out})\in H\).

Note that if i and j are twin then \(\deg ^\pm (i,k)=\deg ^\pm (j,k)\), and hence \(A_{ik}=A_{jk}\), for all \(k \in V\setminus \{i,j\}\). Moreover, \(A_{ij}=A_{ji}=-\deg (i)=-\deg (j)\ne 0\) (we assume that there are no vertices of degree zero). Therefore, twin vertices cannot be duplicate vertices and vice versa.

Recall that, in oriented hypergraphs, every vertex has a sign for each hyperedge in which it is contained (Definition 3). By reversing signs, we can define anti-duplicate and anti-twin vertices, as follows.

Definition 11

Two vertices i and j are anti-duplicate if the corresponding rows (equivalently, columns) of the adjacency matrix have opposite sign, that is, if \(A_{ik}=-A_{jk}\) (or, equivalently, \(A_{ki}=-A_{kj}\)) for all \(k\in V\). In particular, \(A_{ij}=A_{ji}=A_{ii}=A_{jj}=0\).

Definition 12

Two vertices i and j are anti-twin if they belong exactly to the same set of hyperedges, with reversed orientations, that is,

$$\begin{aligned} i \in h_{in} \iff j \in h_{out} \ \text { and }\ i \in h_{out} \iff j \in h_{in}, \end{aligned}$$

for all \(h=(h_{in},h_{out})\in H\).

Note that if i and j are anti-twin then \(\deg ^\pm (i,k)=\deg ^\mp (j,k)\), and hence \(A_{ik}=-A_{jk}\), for all \(k \in V\setminus \{i,j\}\). Moreover, \(A_{ij}=A_{ji}=\deg (i)=\deg (j)\). Therefore, anti-twin vertices cannot be anti-duplicate vertices and vice versa.

In Mulas and Zhang (2021) it is shown that a hypergraph that possesses k duplicate vertices has normalised Laplacian eigenvalue 1 with multiplicity at least \(k-1\). Similarly, in [24] it is shown that the presence of k twin vertices produces the normalised Laplacian eigenvalue 0 with multiplicity at least \(k-1\). These elementary results show that structural repetition in a hypergraph naturally gives rise to repeated eigenvalues, yet their full generality is unclear. In the next section, we interpret these results as part of a more general theory that relates structural redundancy (measured by the presence of hypergraph automorphisms) to hypergraph spectra.
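Both facts are easy to check numerically on small examples. The sketch below (our own illustrations, not taken from the cited papers) computes the normalised Laplacian spectrum via the symmetric matrix \({\mathcal {L}}=D^{1/2}LD^{-1/2}=D^{-1/2}{\varDelta }D^{-1/2}\) of Eq. (4): in the oriented star graph with hyperedges \((\{1\},\{3\})\) and \((\{2\},\{3\})\) the vertices 1 and 2 are duplicate and the eigenvalue 1 appears, while in the single hyperedge \((\{1,2\},\{3\})\) the vertices 1 and 2 are twin and the eigenvalue 0 appears with multiplicity 2.

```python
import numpy as np

def normalised_laplacian_spectrum(inc):
    """Spectrum of L, computed via the isospectral symmetric matrix
    cal(L) = D^{-1/2} Delta D^{-1/2} (cf. Eq. (4))."""
    inc = np.asarray(inc, dtype=float)
    Delta = inc @ inc.T
    d = np.diag(Delta)                           # vertex degrees
    calL = Delta / np.sqrt(np.outer(d, d))
    return np.sort(np.linalg.eigvalsh(calL))

# Oriented star graph ({1},{3}), ({2},{3}): vertices 1 and 2 are duplicate.
print(normalised_laplacian_spectrum([[1, 0], [0, 1], [-1, -1]]))
# approx [0., 1., 2.]: the eigenvalue 1 appears

# Single hyperedge ({1,2},{3}): vertices 1 and 2 are twin.
print(normalised_laplacian_spectrum([[1], [1], [-1]]))
# approx [0., 0., 3.]: the eigenvalue 0 appears with multiplicity 2
```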

Redundancy and symmetry in hypergraphs

Informally, redundancy results in duplication of hypergraph structural features (such as vertices, hyperedges or collections of vertices and hyperedges). Moreover, from the results above it is expected that such repetition may leave a signature in the hypergraph spectrum. In this section, we show that these results are specific instances of a general theory of hypergraph automorphisms, adapting the work in MacArthur et al. (2008), MacArthur and Sánchez-García (2009), Sánchez-García (2020) to hypergraphs and considering the normalised Laplacian.

Hypergraph automorphisms

Informally, a hypergraph symmetry is a permutation of the vertices that preserves the hypergraph structure. More precisely,

Definition 13

A hypergraph automorphism is a permutation p of the vertices of \({\varGamma }\) that preserves hyperedges, that is,

$$\begin{aligned} p(h)=(p(h_{in}),p(h_{out})) \in H \quad \text {for all } h=(h_{in},h_{out}) \in H. \end{aligned}$$

(We write \(p(S)=\{p(s_1),\ldots ,p(s_k)\}\) whenever \(S =\{s_1,\ldots ,s_k\} \subseteq V\).)

Note that, since p is invertible, it also induces a permutation on the hyperedges of \({\varGamma }\), \(h \mapsto p(h)\). Moreover, hypergraph automorphisms induce automorphisms of the adjacency and Laplacian matrices, as follows.

Definition 14

An adjacency automorphism is a permutation p of the vertices of a hypergraph that preserves adjacency, that is, \(A_{p(i)p(j)} = A_{ij}\) for all \(1 \le i, j \le n\), where \(A=A({\varGamma })\). We can write this in matrix form as

$$\begin{aligned} AP=PA, \end{aligned}$$
(5)

where \(P=(P_{ij})\) is the permutation matrix representing p, that is, \(P_{ij}=1\) if \(p(i)=j\), and 0 otherwise.

Definition 15

A Laplacian automorphism is an adjacency automorphism p that also preserves degrees, that is, \(\deg (i)=\deg (p(i))\), for all \(i=1,\ldots ,n\).

Note that if p is a Laplacian automorphism and P is the permutation matrix representing p, then \({\varDelta } P=P{\varDelta }\) and \(LP=PL\), that is, p preserves both the Kirchhoff Laplacian and the normalised Laplacian. In general, the following inclusions hold.

Proposition 1

Every hypergraph automorphism is a Laplacian automorphism, and every Laplacian automorphism is an adjacency automorphism. The converses of these statements hold if \({\varGamma }\) is a simple graph, but not in general.

Schematically, for graphs:

$$\begin{aligned} \{\text {adjacency automorphisms}\}&= \{\text {Laplacian automorphisms}\} \\&= \{\text {graph automorphisms}\}, \end{aligned}$$

while for hypergraphs:

$$\begin{aligned} \{\text {adjacency automorphisms}\}&\supseteq \{\text {Laplacian automorphisms}\} \\&\supseteq \{\text {hypergraph automorphisms}\}. \end{aligned}$$

Proof

If p is a hypergraph automorphism, then clearly \(\deg ^\pm (i,j)=\deg ^\pm \left( p(i),p(j)\right) \) for all \(i, j \in V\) and, in particular, \(\deg (i)=\deg ^+(i,i)=\deg (p(i))\). From Eq. (1), it is clear that p is a Laplacian automorphism. Moreover, by definition, it is clear that any Laplacian automorphism is also an adjacency automorphism. The case when \({\varGamma }\) is a simple graph is well-known and straightforward.

To see that the converses do not necessarily hold in general, consider the following example. Let \({\varGamma }=(V,H)\) with vertex set \(V=\{1,2,3\}\) and hyperedge set \(H=\{h_1,h_2,h_3\}\), where

  • \(h_1\) has 1 as input and 2 as output;

  • \(h_2\) has 1 and 2 as inputs;

  • \(h_3\) only contains the vertex 3, as input.

Then, the adjacency matrix of \({\varGamma }\) is the \(3\times 3\) zero matrix, implying that any permutation of the vertices is an adjacency automorphism. However, the permutation p such that \(p(1)=3\) and \(p(3)=1\) is not a Laplacian automorphism, since \(\deg (1)=2\ne \deg (3)=1\). \(\square \)

We begin by describing duplicate and twin vertices in terms of automorphisms.

Proposition 2

Let \({\varGamma }\) be an oriented hypergraph.

  (i)

    If two vertices \(i,j \in V\) are duplicate then the transposition \(p=(i\ j)\) is an adjacency automorphism.

  (ii)

    If two vertices \(i,j \in V\) are duplicate and \(\deg (i)=\deg (j)\), then the transposition \(p=(i\ j)\) is a Laplacian automorphism.

  (iii)

    If two vertices \(i,j \in V\) are twin then the transposition \(p=(i\ j)\) is a hypergraph automorphism.

The converses of these statements are not necessarily true.

(For anti-duplicate and anti-twin vertices, see Proposition 4.)

Proof

(i) Let \(A=A({\varGamma })\) and let P be the permutation matrix of the transposition \(p=(i\ j)\) (see Definition 14). Clearly, AP is the matrix A with the ith and jth rows swapped, and PA is the matrix A with the ith and jth columns swapped. By Definition 9, the ith row, respectively, column, of A equals the jth row, respectively, column, of A. In particular, \(AP=PA\) and p is an adjacency automorphism. The converse is not true: \(AP=PA\) if and only if the ith row (and column) of A equals the jth row (and column) of A, except possibly \(A_{ii} = A_{jj} \ne A_{ij} = A_{ji}\). In that situation, \(p=(i\ j)\) is an adjacency automorphism but i and j are not duplicate.

(ii) This point follows easily from (i) and from the definition of Laplacian automorphism. The converse is not true: assume that \(\deg (i)=\deg (j)\) and the ith row (and column) of A equals the jth row (and column) of A, except \(A_{ii} = A_{jj} \ne A_{ij} = A_{ji}\). In that case, \(p=(i\ j)\) is a Laplacian automorphism but i and j are not duplicate.

(iii) If i and j are twin and \(h \in H\), then \(i, j \in h_{in}\), or \(i, j \in h_{out}\), or neither i nor j are vertices in h. In all cases, \(p(h)=h\), that is, p acts trivially on hyperedges. In particular, \(p(h)\in H\) for all \(h\in H\) and p is a hypergraph automorphism. The converse is not true: it is easy to find a hypergraph automorphism of the form \(p=(i\ j)\) that does not act trivially on hyperedges. \(\square \)

Now that we have formalised the concept of symmetry, or redundancy, in hypergraphs (as hypergraph automorphisms), we can deduce some structural and spectral consequences of its presence, in particular its effect on hypergraph spectra.

Structural results

In this section, we discuss the effects of the presence of automorphisms, as defined above, on the hypergraph structure. To begin, we note that the set of Laplacian automorphisms, together with composition of permutations, forms a group, denoted \({{\,\mathrm{Aut}\,}}({\varGamma })\). Next, we describe a decomposition of \({{\,\mathrm{Aut}\,}}({\varGamma })\) into permutations with disjoint supports.

Definition 16

Given a permutation of the vertices p, its support is

$$\begin{aligned} {{\,\mathrm{supp}\,}}(p):=\{i \in V \mid p(i)\ne i\}. \end{aligned}$$

Two permutations are disjoint if their supports are non-intersecting.

Following MacArthur et al. (2008), MacArthur and Sánchez-García (2009), we decompose \({{\,\mathrm{Aut}\,}}({\varGamma })\) into a direct product of subgroups that naturally reflect structural redundancy in \({\varGamma }\). Let S be a set of generators of \({{\,\mathrm{Aut}\,}}({\varGamma })\) not containing the identity, and let \(S=S_1\sqcup \ldots \sqcup S_l\) be the (unique) irreducible partition of S into support-disjoint subsets. Let \({\mathcal {P}}_j\) be the subgroup generated by \(S_j\). Then,

$$\begin{aligned} {{\,\mathrm{Aut}\,}}({\varGamma })={\mathcal {P}}_1\times \ldots \times {\mathcal {P}}_l \end{aligned}$$
(6)

is the unique, irreducible direct product decomposition of \({{\,\mathrm{Aut}\,}}({\varGamma })\) (a proof follows that of (MacArthur et al. 2008, Equation 1); we omit the details here). Since it relates to hypergraph symmetry, we will call (6) the symmetric decomposition of \({{\,\mathrm{Aut}\,}}({\varGamma })\). Moreover, for each \(j=1,\ldots ,l\), we denote

$$\begin{aligned} M_j:=\bigcup _{\tau \in S_j}{{\,\mathrm{supp}\,}}(\tau ). \end{aligned}$$

Using this notation, we call

$$\begin{aligned} V:=V_0\sqcup M_1\sqcup \ldots \sqcup M_l \end{aligned}$$

the symmetric decomposition of \({\varGamma }\) where \(V_0\) is the set of fixed points, that is,

$$\begin{aligned} V_0 = \{ i \in V \mid p(i)=i \text { for all } p \in {{\,\mathrm{Aut}\,}}({\varGamma })\}. \end{aligned}$$

As with any action of a group on a set, we have the concept of a group orbit.

Definition 17

The orbit of \(i\in V\) is

$$\begin{aligned} {\mathcal {O}}(i):=\{p(i):p\in {{\,\mathrm{Aut}\,}}({\varGamma })\}. \end{aligned}$$

From this definition, a natural measure of redundancy is:

$$\begin{aligned} r=\frac{\#{\mathcal {O}}-1}{n}, \end{aligned}$$

where \(\#{\mathcal {O}}\) is the number of orbits, and n the number of vertices, of \({\varGamma }\). Note that \(1\le \#{\mathcal {O}}\le n\), so

$$\begin{aligned} 0\le r\le \frac{n-1}{n}<1. \end{aligned}$$

In particular, \(r=0\) if and only if \(\#{\mathcal {O}}=1\), that is, all vertices (reactants in a chemical reaction system) are structurally equivalent. On the other hand, \(r=\frac{n-1}{n}\) if and only if \(\#{\mathcal {O}}=n\), that is, if and only if \({{\,\mathrm{Aut}\,}}({\varGamma })\) is trivial and therefore there is no structural redundancy in \({\varGamma }\).

Thus, r quantifies the extent to which the oriented hypergraph \({\varGamma }\) is constructed from repetition of structurally equivalent units, with smaller values of r indicating greater redundancy. Due to the evolutionary processes that form them, biochemical reaction systems often contain duplicated elements Vázquez et al. (2003), which gives rise to local symmetries (i.e. permutations of nodes that are close in the hypergraph that preserve adjacency). In the absence of global symmetries (i.e. permutations of nodes that are distant in the hypergraph that preserve adjacency), r is then a natural measure of structural redundancy: biochemical systems with a high redundancy are robust in the sense that damage to, or deletion of, redundant vertices or units (i.e. individual chemical reactants, or small sub-systems of chemical reactions) does not cause catastrophic system failure, but rather can be absorbed by their replacements, allowing the system to continue to function normally.
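For small hypergraphs, the Laplacian automorphism group, its orbits and the redundancy r can be computed directly by brute force. The sketch below (an illustrative implementation of our own; it enumerates all n! permutations and is therefore only feasible for small n) does this for the hypergraph of Fig. 1, whose only non-trivial Laplacian automorphism is the transposition \((1\ 2)\) of its two twin vertices, giving four orbits and \(r=3/5\).

```python
import numpy as np
from itertools import permutations

def laplacian_automorphisms(inc):
    """Brute-force the Laplacian automorphisms (Definition 15): permutations p with
    A[p(i), p(j)] = A[i, j] and deg(p(i)) = deg(i).  Enumerates all n! permutations."""
    inc = np.asarray(inc, dtype=float)
    Delta = inc @ inc.T
    deg = np.diag(Delta)
    A = np.diag(deg) - Delta
    n = len(deg)
    auts = []
    for p in permutations(range(n)):
        p = np.array(p)
        if np.allclose(deg[p], deg) and np.allclose(A[np.ix_(p, p)], A):
            auts.append(p)
    return auts

def orbits_and_redundancy(auts):
    """Orbit labels (merging i with p(i) for every automorphism p) and r = (#O - 1)/n."""
    n = len(auts[0])
    label = list(range(n))
    for p in auts:
        for i in range(n):
            lo, hi = sorted((label[i], label[p[i]]))
            label = [lo if x == hi else x for x in label]
    return label, (len(set(label)) - 1) / n

# Hypergraph of Fig. 1: h1 = ({1,2},{3}), h2 = ({3,4},{5}); rows are vertices 1..5.
inc_fig1 = [[1, 0], [1, 0], [-1, 1], [0, 1], [0, -1]]
auts = laplacian_automorphisms(inc_fig1)
label, r = orbits_and_redundancy(auts)
print(len(auts), label, r)
# 2 [0, 0, 2, 3, 4] 0.6  -> automorphisms {id, (1 2)}, orbits {1,2},{3},{4},{5}, r = 3/5
```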

Spectral results

Recall that the spectrum of a matrix is the multiset of its eigenvalues. Given \({\varGamma }\), we define the adjacency spectrum of \({\varGamma }\) as the spectrum of \(A({\varGamma })\), the Kirchhoff Laplacian spectrum as the spectrum of \({\varDelta }({\varGamma })\) and the normalised Laplacian spectrum as the spectrum of \(L({\varGamma })\). Each of these matrices has n real eigenvalues and the corresponding eigenvectors are elements of \({\mathbb {R}}^n\), where n is the number of vertices. We will view each eigenvector as a function \(f:V\rightarrow {\mathbb {R}}\) and we will therefore call eigenvectors eigenfunctions. We will focus on the spectrum of the normalised Laplacian L (or, equivalently, on the spectrum of the matrix \({\mathcal {L}}\) defined in (4)).

We may factor out any redundancy to obtain the essential structural characteristics of a reaction system \({\varGamma }\). In particular, given a partition of the vertex set \(V=V_1\sqcup \ldots \sqcup V_l\), we define:

Definition 18

The quotient matrix of \({\mathcal {L}}\) is \(Q({\mathcal {L}}):=(Q_{\alpha \beta })_{\alpha \beta }\), where

$$\begin{aligned} Q_{\alpha \beta }:=\frac{1}{|V_\alpha |}\cdot \sum _{i\in V_\alpha , j\in V_\beta }{\mathcal {L}}_{ij}. \end{aligned}$$

Note that the quotient matrix can also be written in the following alternative form. Let \(K:=\text {diag}(|V_1|,\ldots ,|V_l|)\) and let S be the \(n\times l\) characteristic matrix of the partition, that is, each column \(S_j\) is the characteristic vector of the set \(V_j\). Then,

$$\begin{aligned} Q({\mathcal {L}})=K^{-1}S^\top {\mathcal {L}}S. \end{aligned}$$

Because \(Q({\mathcal {L}})\) is not necessarily symmetric, it is not immediately clear if it has real spectrum. In fact, it does have real spectrum, as can be seen from the following definition.

Definition 19

Given a partition of the vertex set \(V=V_1\sqcup \ldots \sqcup V_l\), the symmetric quotient matrix of \({\mathcal {L}}\) is the \(l\times l\) symmetric matrix \(Q^{\text {sym}}({\mathcal {L}})\) with entries

$$\begin{aligned} Q^{\text {sym}}_{\alpha \beta }:=\frac{1}{\sqrt{|V_\alpha |\cdot |V_\beta |}}\cdot \sum _{i\in V_\alpha , j\in V_\beta }{\mathcal {L}}_{ij}. \end{aligned}$$

Note that the symmetric quotient matrix of \({\mathcal {L}}\) can be written as

$$\begin{aligned} Q^{\text {sym}}=K^{-1/2}S^\top {\mathcal {L}}SK^{-1/2}=K^{1/2}QK^{-1/2}. \end{aligned}$$

Hence, \(Q^{\text {sym}}\) and Q are similar, which implies that they are isospectral and thus \(Q({\mathcal {L}})\) has real spectrum. Moreover, f is an eigenfunction with eigenvalue \(\lambda \) for \(Q^{\text {sym}}\) if and only if \(K^{-1/2}f\) is an eigenfunction with eigenvalue \(\lambda \) for Q.

From here on, we shall always refer to the quotient matrix and to the symmetric quotient matrix of \({\mathcal {L}}\) with respect to the partition of V into orbits. This partition is clearly equitable Brouwer and Haemers (2012), i.e. the row sum of each block of \({\mathcal {L}}\) with respect to the partition is constant.
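Assembling the two quotient matrices from a given partition is straightforward. The sketch below (our own helper, with the partition supplied as a list of index lists) builds Q and \(Q^{\text {sym}}\) as in Definitions 18 and 19 and verifies numerically that they are isospectral, here for the orbit partition \(\{1,2\},\{3\},\{4\},\{5\}\) of the hypergraph of Fig. 1.

```python
import numpy as np

def quotient_matrices(calL, parts):
    """Q and Q^sym of cal(L) with respect to a partition (Definitions 18 and 19);
    `parts` is a list of lists of vertex indices."""
    S = np.zeros((calL.shape[0], len(parts)))
    for a, part in enumerate(parts):
        S[part, a] = 1.0                                   # characteristic matrix
    K_inv = np.diag([1.0 / len(part) for part in parts])
    K_half_inv = np.diag([1.0 / np.sqrt(len(part)) for part in parts])
    Q = K_inv @ S.T @ calL @ S
    Q_sym = K_half_inv @ S.T @ calL @ S @ K_half_inv
    return Q, Q_sym

# cal(L) of the Fig. 1 hypergraph and its orbit partition {1,2},{3},{4},{5}.
inc = np.array([[1, 0], [1, 0], [-1, 1], [0, 1], [0, -1]], dtype=float)
Delta = inc @ inc.T
d = np.diag(Delta)
calL = Delta / np.sqrt(np.outer(d, d))

Q, Q_sym = quotient_matrices(calL, [[0, 1], [2], [3], [4]])
print(np.allclose(np.sort(np.linalg.eigvals(Q).real), np.linalg.eigvalsh(Q_sym)))
# True: Q and Q^sym are isospectral (here both have eigenvalues approx 0, 0, 2, 3)
```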

With this notation, we are now in a position to consider the spectrum of L in terms of its underlying automorphism group, and therefore to dissect the effect of redundancy on its spectral properties. The following result is fundamental.

Proposition 3

The spectrum of L consists of the spectrum of \(Q^{\text {sym}}({\mathcal {L}})\) (with eigenfunctions that are constant on each orbit) together with the eigenvalues belonging to eigenfunctions that sum to zero on each orbit.

Proof

Use the following facts:

  • By (Brouwer and Haemers 2012, Lemma 2.3.1), the spectrum of \({\mathcal {L}}\) consists of the spectrum of \(Q({\mathcal {L}})\) (with eigenfunctions that are constant on each part of the partition) together with the eigenvalues belonging to eigenfunctions that sum to zero on each part of the partition.

  • By the considerations above, \(Q({\mathcal {L}})\) is isospectral to \(Q^{\text {sym}}({\mathcal {L}})\).

  • By (Mulas and Zhang 2021, Remark 2.14), L is isospectral to \({\mathcal {L}}\) and f is an eigenfunction with eigenvalue \(\lambda \) for \({\mathcal {L}}\) if and only if \(D^{1/2}f\) is an eigenfunction with eigenvalue \(\lambda \) for L.

  • If f is either constant on the parts of the partition, or sums to zero on each part of the partition, then the same holds for \(D^{1/2}f\), since the vertices belonging to the same set of the partition have the same degree.

\(\square \)

This result indicates that the spectrum of \({\varGamma }\) can be split into pieces relating to redundant and unique structural features. To deconstruct this decomposition further, the following definition is useful:

Definition 20

The quotient network of \({\varGamma }\), denoted \(Q({\varGamma })\), is the (unique) weighted, undirected graph with self-loops that has adjacency matrix \(Q^{\text {sym}}({\mathcal {L}})\).

Using this definition, we can rewrite Proposition 3 as follows.

Corollary 1

The spectrum of \({\varGamma }\) consists of the adjacency spectrum of its quotient network (with eigenfunctions that are constant on each orbit) together with the eigenvalues belonging to eigenfunctions that sum to zero on each orbit.

Proof

It follows from Proposition 3, together with the fact that the adjacency matrix of \(Q({\varGamma })\) is \(Q^{\text {sym}}({\mathcal {L}})\). \(\square \)

To illustrate these ideas, it is useful to consider an example.

Fig. 2

The hyperflower. a The 5-hyperflower with 3 twins on 25 vertices. b Its quotient network. In the quotient network, \(\alpha \) represents the core vertices of the hyperflower, while \(\beta \) represents the peripheral vertices

Example 2

(Hyperflowers) Consider the l-hyperflower with t-twins introduced in [24] and shown in Fig. 2a. This is a hypergraph \({\varGamma }=(V,H)\) with only inputs, whose vertex set can be written as \( V=W\sqcup V_1\sqcup \ldots \sqcup V_l\), where each \(V_j\) has cardinality t, and the hyperedge set is given by

$$\begin{aligned} H=\{h_j=W\cup V_j \text{ for } j=1,\ldots ,l \}. \end{aligned}$$

As shown in [24, Lemma 6.12], the spectrum of \({\varGamma }\) is given by:

  • 0, with multiplicity \(n-l\).

  • t, with multiplicity \(l-1\). As corresponding eigenfunctions, one can choose the \(f_j\)’s, for \(j\in \{2,\ldots ,l\}\), that are 1 on \(V_1\), \(-1\) on \(V_j\) and 0 otherwise.

  • \(n-tl+t\), and the constant functions are the corresponding eigenfunctions.

It is easy to see that \({\varGamma }\) has two orbits and, in this case, the adjacency automorphisms coincide with the Laplacian automorphisms. Thus, the redundancy of the hyperflower is \(r=1/n\). Moreover, the quotient network only has two vertices \(\alpha \) and \(\beta \), representing the core vertices and the peripheral vertices of \({\varGamma }\), respectively. Its adjacency matrix is \(Q^{\text {sym}}\), where

$$\begin{aligned} Q^{\text {sym}}_{\alpha \beta }&=\frac{1}{\sqrt{|V_\alpha |\cdot |V_\beta |}}\cdot \sum _{i\in V_\alpha , j\in V_\beta }\left( -\frac{A_{ij}}{\sqrt{\deg (i)\deg (j)}}\right) \\&=\frac{1}{\sqrt{(n-tl)(tl)}}\cdot (n-tl)(tl)\cdot \frac{1}{\sqrt{l}}\\&=\sqrt{(n-tl)t} \end{aligned}$$

while

$$\begin{aligned} Q^{\text {sym}}_{\alpha \alpha }&=\frac{1}{|V_\alpha |}\cdot \left( \sum _{(i,j):i\ne j\in V_\alpha }\left( -\frac{A_{ij}}{\deg (i)}\right) +\sum _{i\in V_\alpha }1\right) \\&=\frac{1}{|V_\alpha |}\cdot \biggl ( |V_\alpha |\cdot (|V_\alpha |-1)+|V_\alpha |\biggr )\\&=|V_\alpha |=n-tl \end{aligned}$$

and

$$\begin{aligned} Q^{\text {sym}}_{\beta \beta }&=\frac{1}{|V_\beta |}\cdot \left( \sum _{(i,j):i\ne j\in V_\beta }\left( -\frac{A_{ij}}{\deg (i)}\right) +\sum _{i\in V_\beta }1\right) \\&=\frac{1}{tl}\left( tl(t-1)+tl\right) =t. \end{aligned}$$

Therefore, the quotient network has edges \((\alpha ,\beta )\), \((\alpha ,\alpha )\) and \((\beta ,\beta )\) with weights given by \(\sqrt{(n-tl)t}\), \(n-tl\) and t, respectively.

For the hyperflower in Fig. 2, for instance, the edge \((\alpha ,\beta )\) has weight \(\sqrt{30}\), the loop \((\alpha ,\alpha )\) has weight \(n-tl=10\) and the loop \((\beta ,\beta )\) has weight \(t=3\). Therefore

$$\begin{aligned} Q^{\text {sym}}=\left( \begin{matrix}10 &{}\sqrt{30}\\ \sqrt{30} &{}3 \end{matrix}\right) . \end{aligned}$$

It is easy to check that the eigenvalues of this matrix are 13 and 0. Therefore, in this case, Proposition 3 tells us that:

  • 0 and 13 are eigenvalues for the hyperflower, with eigenfunctions that are constant on the peripheral vertices and constant on the core vertices;

  • The other eigenvalues of the hyperflower belong to eigenfunctions that sum to zero on each orbit, that is, on the core vertices and on the peripheral vertices.

These results are clearly in accordance with the alternative calculations given above (see also [24, Lemma 6.12]).
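These calculations are easy to reproduce numerically. The sketch below (our own construction, listing the core vertices first and then the l groups of t peripheral twins) builds the 5-hyperflower with 3 twins on 25 vertices, recovers the spectrum 0 (with multiplicity 20), 3 (with multiplicity 4) and 13 (with multiplicity 1), and confirms that the symmetric quotient matrix with respect to the two orbits is the matrix displayed above, with eigenvalues 13 and 0.

```python
import numpy as np

n, l, t = 25, 5, 3                 # the 5-hyperflower with 3 twins of Fig. 2a
core = n - t * l                   # 10 core vertices, then l groups of t peripheral twins

# Incidence matrix: hyperedge h_j contains all core vertices and the j-th twin
# group, all as inputs (entries +1).
inc = np.zeros((n, l))
inc[:core, :] = 1.0
for j in range(l):
    inc[core + t * j: core + t * (j + 1), j] = 1.0

Delta = inc @ inc.T
d = np.diag(Delta)
calL = Delta / np.sqrt(np.outer(d, d))             # isospectral to L
eigs = np.round(np.linalg.eigvalsh(calL), 8)
vals = sorted(set(float(e) for e in eigs))
print(vals, [int(np.sum(eigs == v)) for v in vals])
# approx [0.0, 3.0, 13.0] with multiplicities [20, 4, 1]

# Symmetric quotient matrix over the two orbits (core, periphery).
parts = [list(range(core)), list(range(core, n))]
S = np.zeros((n, 2))
for a, part in enumerate(parts):
    S[part, a] = 1.0
K_half_inv = np.diag([1.0 / np.sqrt(len(p)) for p in parts])
Q_sym = K_half_inv @ S.T @ calL @ S @ K_half_inv
print(np.round(Q_sym, 4))                          # [[10, sqrt(30)], [sqrt(30), 3]]
print(np.round(np.linalg.eigvalsh(Q_sym), 6))      # approx [0, 13]
```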

Signed automorphisms

The results presented so far straightforwardly extend the theory of automorphisms of graphs to hypergraphs. However, oriented hypergraphs have additional automorphisms induced by sign changes that are distinct from those encountered for graphs. In this section, we define signed automorphisms and study their effect on the hypergraph spectrum. Although signed automorphisms do not have an immediate biochemical interpretation, we include discussion of them here for mathematical completeness.

As shown in (Jost and Mulas 2019, Lemma 49), if we reverse the role of a vertex v in all the hyperedges in which it is contained, i.e. if we let it become an input where it is an output and vice versa, the spectrum does not change, while the eigenfunctions differ by a change of sign on v. More generally, given an oriented hypergraph \({\varGamma }\), we can reverse the roles of any subset of vertices and obtain a hypergraph \({\varGamma }'\) that is isospectral to \({\varGamma }\). Thus, we can apply the theory of Laplacian automorphisms to \({\varGamma }'\) and translate the results to \({\varGamma }\). We formalise this idea as follows.

Definition 21

Let \(\sigma :V=\{1,\ldots ,n\}\rightarrow \{+1,-1\}\) be a sign function. Given a permutation p of the vertices of \({\varGamma }\), we define \(p^\sigma :V\rightarrow \{\pm 1,\ldots ,\pm n\}\) by letting

$$\begin{aligned} p^\sigma (i):=\sigma (i)\cdot p(i) \end{aligned}$$

and we say that \(p^\sigma \) is a signed permutation of the vertices.

Definition 22

Given a sign function \(\sigma :V=\{1,\ldots ,n\}\rightarrow \{+1,-1\}\), we let \(\sigma ({\varGamma })\) be the oriented hypergraph constructed from \({\varGamma }\) by reversing the role of the vertices i such that \(\sigma (i)=-1\), in all hyperedges in which they are contained. We say that the quotient network \(Q(\sigma ({\varGamma }))\) of \(\sigma ({\varGamma })\) is a signed quotient network of \({\varGamma }\).
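In terms of the incidence matrix, constructing \(\sigma ({\varGamma })\) simply amounts to multiplying row i of \({\mathcal {I}}\) by \(-1\) whenever \(\sigma (i)=-1\), and the isospectrality underlying this construction (Jost and Mulas 2019, Lemma 49) is easy to verify numerically, as in the sketch below (our own illustration, reversing the roles of vertices 2 and 4 of the hypergraph of Fig. 1).

```python
import numpy as np

def normalised_laplacian_spectrum(inc):
    inc = np.asarray(inc, dtype=float)
    Delta = inc @ inc.T
    d = np.diag(Delta)
    return np.sort(np.linalg.eigvalsh(Delta / np.sqrt(np.outer(d, d))))

# Fig. 1 hypergraph: h1 = ({1,2},{3}), h2 = ({3,4},{5}); rows are vertices 1..5.
inc = np.array([[1, 0], [1, 0], [-1, 1], [0, 1], [0, -1]], dtype=float)

# sigma(Gamma): reversing the role of vertex i in every hyperedge containing it
# corresponds to multiplying row i of the incidence matrix by -1.
sigma = np.array([1, -1, 1, -1, 1])        # reverse the roles of vertices 2 and 4
inc_sigma = sigma[:, None] * inc

print(np.allclose(normalised_laplacian_spectrum(inc),
                  normalised_laplacian_spectrum(inc_sigma)))
# True: sigma(Gamma) is isospectral to Gamma
```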

Using these definitions, we can now extend the theory of hypergraph automorphisms to signed automorphisms. In particular,

Definition 23

A signed hypergraph automorphism is a signed permutation \(p^\sigma \) of the vertices of \({\varGamma }\) such that

$$\begin{aligned} p(h)=(p(h_{in}),p(h_{out})) \in H(\sigma ({\varGamma })) \quad \text {for all } h=(h_{in},h_{out}) \in H({\varGamma }). \end{aligned}$$

Similarly, a signed adjacency automorphism is a signed permutation p of the vertices of \({\varGamma }\) such that \(\bigl (A({\varGamma })\bigr )_{p(i)p(j)} = \bigl (A(\sigma ({\varGamma }))\bigr )_{ij}\) for all \(1 \le i, j \le n\) and a signed Laplacian automorphism is a signed adjacency automorphism \(p^\sigma \) that preserves degrees, that is, \(\deg (i)=\deg (p(i))\), for all \(i=1,\ldots ,n\).

We denote by \({{\,\mathrm{Aut}\,}}_{\text {signed}}({\varGamma })\) the group of signed Laplacian automorphisms of \({\varGamma }\). Moreover,

Definition 24

The signed orbit of \(i\in V\) is

$$\begin{aligned} {\mathcal {O}}^\sigma (i):=\{p^\sigma (i):p^\sigma \in {{\,\mathrm{Aut}\,}}_{\text {signed}}({\varGamma })\}. \end{aligned}$$

In order to make functions on orbits well defined, given \(f:V\rightarrow {\mathbb {R}}\) we let

$$\begin{aligned} f(-i):=-f(i),\qquad \text {for }i\in V. \end{aligned}$$

Using this notation, the following proposition is the analogue of Proposition 2 for anti-twin and anti-duplicate vertices.

Proposition 4

Let \({\varGamma }\) be an oriented hypergraph. Given \(i,j\in V\), let p be the transposition \(p=(i,j)\) and let \(\sigma \) be the sign function such that \(\sigma (i)=-1\) and \(\sigma (k)=+1\), for all \(k\in V\setminus \{i\}\).

  (i)

    If i and j are anti-duplicate then \(p^\sigma \) is a signed adjacency automorphism.

  (ii)

    If i and j are anti-duplicate and \(\deg (i)=\deg (j)\), then \(p^\sigma \) is a signed Laplacian automorphism.

  (iii)

    If i and j are anti-twin then \(p^\sigma \) is a signed hypergraph automorphism.

The converses of these statements are not necessarily true.

Proof

Analogous to the proof of Proposition 2. \(\square \)

We may now decompose the spectrum of \({\varGamma }\) taking into account signed automorphisms.

Proposition 5

Let \(\sigma :V\rightarrow \{+1,-1\}\). The spectrum of \({\varGamma }\) consists of the adjacency spectrum of \(Q(\sigma ({\varGamma }))\) (with eigenfunctions that are constant on each signed orbit) together with the eigenvalues belonging to eigenfunctions that sum to zero on each signed orbit.

Proof

By (Jost and Mulas 2019, Lemma 49), it easily follows that \(\lambda \) is an eigenvalue for \(\sigma ({\varGamma })\) with eigenfunction f if and only if \(\lambda \) is an eigenvalue for \({\varGamma }\) with eigenfunction \(\sigma \cdot f\), where \(\sigma f(i):=\sigma (i)\cdot f(i)\). Together with Corollary 1, this proves the claim. \(\square \)

To illustrate these ideas, we again consider an example.

Example 3

(Signed Hyperflower) For the hyperflower in Example 2, all vertices are inputs. If we let one vertex v become an output in all hyperedges in which it is contained, then the theory of (unsigned) Laplacian automorphisms cannot detect this reversal. However, by choosing the sign function \(\sigma :V\rightarrow \{+1,-1\}\) that has value \(-1\) on v and value \(+1\) otherwise, and applying Proposition 5, its effect can be detected.

Finally, these results give us an alternative notion of redundancy.

Definition 25

The signed redundancy is

$$\begin{aligned} r_{\text {signed}}:=\min _{\sigma :V\rightarrow \{+1,-1\}}\frac{\#{\mathcal {O}}^\sigma -1}{n} \end{aligned}$$

By choosing \(\sigma :V\rightarrow \{+1,-1\}\) with \(+1\) on all vertices, we have \({\mathcal {O}}^\sigma (i)={\mathcal {O}}(i)\) for each \(i\in V\), and therefore

$$\begin{aligned} r_{\text {signed}}=\min _{\sigma :V\rightarrow \{+1,-1\}}\frac{\#{\mathcal {O}}^\sigma -1}{n}\le \frac{\#{\mathcal {O}}-1}{n}=r. \end{aligned}$$

Hence, the signed redundancy is more precise than the unsigned redundancy. In the case of Example 3, for instance, \(r_{\text {signed}}=1/n\) while \(r=2/n\).

Example: basic enzyme reactions

In order to illustrate this theory, consider the basic enzyme reactions:

$$\begin{aligned} E + S \mathop {\leftrightarrows }\limits _{k_f}^{k_r} ES \xrightarrow {k_{cat}} E + P, \end{aligned}$$
(7)

where \(k_f\), \(k_r\) and \(k_{cat}\) are reaction rates. To explore the geometry of this system, we will consider two hypergraph models in which the above chemical substances are represented by vertices, and the reactions are represented by hyperedges. The first hypergraph model accounts for forward reactions only and so represents the system:

$$\begin{aligned} E+S \xrightarrow {k_f} ES \xrightarrow {k_{cat}} E+P. \end{aligned}$$
(8)

We let \({\varGamma }:=(V,H)\) be the hypergraph whose vertex set is \(V:=\{E,S,ES,P\}\) and whose hyperedge set is \(H:=\{h_1,h_2\}\), with oriented hyperedges \(h_1:=(\{E,S\},\{ES\})\) and \(h_2:=(\{ES\},\{E,P\})\). This hypergraph is illustrated in Fig. 3.

Fig. 3

Hypergraph representing the system given in Eq. (8)

The spectrum of \({\varGamma }\), which is a 2-hyperflower with 1 twin on 4 vertices, is 0, 0, 1, 3. In this case, there are exactly two non-zero eigenvalues because there are two hyperedges and these hyperedges are independent of each other (cf. Jost and Mulas 2019). The largest eigenvalue is 3 because the hypergraph is bipartite and each reaction contains exactly three substances (cf. Mulas 2021). Finally, 1 is an eigenvalue because the vertices S and P are anti-duplicate. A corresponding eigenfunction is \(f:V\rightarrow {\mathbb {R}}\) such that \(f(S)=f(P)=1\) and \(f(E)=f(ES)=0\). Since there are no hypergraph automorphisms, the redundancy is

$$\begin{aligned} r=\frac{\#{\mathcal {O}}-1}{4}=\frac{3}{4}. \end{aligned}$$

However, because S and P are anti-duplicate, and E and ES are anti-twin, the system possesses signed automorphisms. These symmetries are not present in graph representations of this system, and so represent features of the chemical reaction system that are specifically identified by the hypergraph theory. Thus, the signed redundancy differs from the redundancy. In this case, the signed redundancy is

$$\begin{aligned} r_{\text {signed}}=\min _{\sigma :V\rightarrow \{+1,-1\}}\frac{\#{\mathcal {O}}^\sigma -1}{n}=\frac{1}{4}, \end{aligned}$$

and the minimum is achieved for the function \(\sigma :V\rightarrow \{+1,-1\}\) that has value 1 on S, ES and value \(-1\) on E, P. Its signed orbits are \(\{S,P\}\) and \(\{E,ES\}\). It should be noted that these orbits are coincident with the conservation laws of the dynamics, but this is not always the case. Conservation laws do not relate directly to automorphisms or signed automorphisms, but rather are related to properties of another Laplacian, as discussed in Jost and Mulas (2019).

These results demonstrate, via a practical (rather than purely theoretical) example, that there are geometric properties that are detected by the signed automorphisms but not by the automorphisms. However, this example only accounts for forward reactions. In order to take the backward reaction in the system described by Eq. (7) into account, we consider the hypergraph \({\varGamma }\cup h_3:=(V,H\cup \{h_3\})\), where \(h_3:=(\{ES\},\{E,S\})\). The eigenvalues of \({\varGamma }\cup h_3\) coincide with the eigenvalues of \({\varGamma }\), counted with multiplicity, but the eigenfunctions and the redundancy change.
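Both statements are easy to confirm numerically. The sketch below (our own check, using the same incidence-matrix construction as in the earlier listings) computes the normalised Laplacian spectra of \({\varGamma }\) and \({\varGamma }\cup h_3\) and recovers the eigenvalues 0, 0, 1, 3 in both cases.

```python
import numpy as np

def normalised_laplacian_spectrum(inc):
    inc = np.asarray(inc, dtype=float)
    Delta = inc @ inc.T
    d = np.diag(Delta)
    return np.sort(np.linalg.eigvalsh(Delta / np.sqrt(np.outer(d, d))))

# Rows: E, S, ES, P.  Columns: h1 = ({E,S},{ES}), h2 = ({ES},{E,P}), h3 = ({ES},{E,S}).
inc_gamma = np.array([[1, -1],
                      [1,  0],
                      [-1, 1],
                      [0, -1]], dtype=float)
h3 = np.array([[-1], [-1], [1], [0]], dtype=float)
inc_gamma_h3 = np.hstack([inc_gamma, h3])

print(np.round(normalised_laplacian_spectrum(inc_gamma), 6))      # approx [0, 0, 1, 3]
print(np.round(normalised_laplacian_spectrum(inc_gamma_h3), 6))   # approx [0, 0, 1, 3]
```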

The hypergraph \({\varGamma }\cup h_3\) has two non-zero eigenvalues (as does \({\varGamma }\)). In this case, although there are three hyperedges (i.e. reactions), only two of them are independent, since \(h_3\) and \(h_1\) are inverses of one another (cf. Jost and Mulas 2019). Moreover, as in the case of \({\varGamma }\), the largest eigenvalue is 3 because the hypergraph is bipartite and each reaction involves three substances (cf. Mulas 2021). Finally, although P and S are not anti-duplicate in this case, \({\varGamma }\cup h_3\) is isospectral with \({\varGamma }\) and so \({\varGamma }\cup h_3\) inherits the eigenvalue 1 due to the fact that S and P are anti-duplicate in \({\varGamma }\). This endows \({\varGamma }\cup h_3\) with a shadow symmetry. As with \({\varGamma }\), there are no automorphisms, and so

$$\begin{aligned} r=\frac{\#{\mathcal {O}}-1}{4}=\frac{3}{4}. \end{aligned}$$

However, in this case, the signed redundancy is

$$\begin{aligned} r_{\text {signed}}=\min _{\sigma :V\rightarrow \{+1,-1\}}\frac{\#{\mathcal {O}}^\sigma -1}{n}=\frac{1}{2}, \end{aligned}$$

and this minimum is achieved for \(\sigma :V\rightarrow \{+1,-1\}\) that has value 1 on S, ES, P and value \(-1\) on E. Its signed orbits are \(\{S\}\), \(\{P\}\) and \(\{E,ES\}\). The difference arises because S and P are now not anti-duplicate; hence, they do not belong to the same signed orbit.

This second example has shown that, while adding reversed hyperedges (reactions) does not change the hypergraph spectrum, the eigenfunctions, signed automorphisms and signed redundancy can change. Moreover, certain spectral properties of the hypergraph that includes reversed hyperedges are derived from the structure of the simpler hypergraph without them. Thus, there is a motivation for studying simplified systems without losing structural information.

Discussion

Biochemical reaction systems often contain duplication, which manifests as symmetry in their underlying hypergraphs. Here, we have introduced and studied automorphisms for oriented hypergraphs. We focused on the normalised Laplacian, which is known to encode many qualitative properties of a hypergraph, and have generalised the known theory for graphs Chung (1997), Brouwer and Haemers (2012). We have shown that, while the generalisation to the case of classical hypergraphs is intuitive and relatively straightforward, a complete theory in the case of oriented hypergraphs requires additional constructions, such as the signed automorphisms and the signed redundancy. Thus, the general theory we have introduced extends that of graphs and hypergraphs to the more general case of oriented hypergraphs, which is the appropriate setting for modelling complicated biochemical reaction systems inside a cell. To illustrate this theory we have shown, with a simple practical example, that it can be used to study redundancy in biochemical systems in practice.

There has been some prior work on spectral graph theory applied to biochemical networks, see for instance MacArthur et al. (2008), MacArthur and Sánchez-García (2009), Sánchez-García (2020), Lesne (2006), Mason and Verwoerd (2007), Perkins and Langston (2009), Banerjee and Jost (2009), Huang et al. (2019), and there is a growing literature on how to use hypergraphs for modelling biochemical networks. In Estrada and Rodríguez-Velázquez (2006), for example, the concepts of subgraph centrality and clustering are generalised to the case of hypergraphs, and various practical examples, including examples from biology, are given. Similarly, in [39], a notion of curvature for hypergraphs is proposed and applied to the analysis of the E. coli metabolism. In Klamt et al. (2009), it is argued—using a range of practical examples—that biological networks that are typically modelled as graphs can also be fruitfully modelled using hypergraphs. Some practical algorithms that do not involve spectral theory, as well as network statistics for hypergraphs, are also discussed. In Flamm et al. (2015), some mathematical foundations (which again do not include spectral theory) for the study of hypergraphs in the context of chemical reaction systems and biological evolution are given. Similarly, in Ritz et al. (2014), Schwob et al. (2019), hypergraphs are used as a model for signalling pathways in cellular biology. In Ritz et al. (2014), in particular, it is noted that, since hypergraph theory is less well-known than graph theory, there is a need to develop theoretical and algorithmic foundations for hypergraphs.

However, although there is a growing literature on both spectral graph theory applied to biology and hypergraph modelling of biochemical networks, we still lack the theoretical tools needed to apply spectral hypergraph theory to biochemical networks. In this paper, we have taken a further step in this direction.

In the future it will be interesting to analyse large, complex biochemical networks using spectral hypergraph methods. There are a number of publicly available, curated repositories of biochemical reaction systems Bader et al. (2001), Bader et al. (2003), Szklarczyk et al. (2016), S.B.R. Group (2021), Kanehisa et al. (2002). Once in a hypergraph format, spectral properties of such empirical networks can then be determined by considering their associated matrices, such as the normalised Laplacian, which we have focused on here. As noted, the eigenvalues of these matrices encode many important qualitative properties of the underlying hypergraph Jost and Mulas (2019), Mulas et al. (2020), Mulas (2021), Mulas and Zhang (2021), [24], Mulas (2021), Reff (2014), Chen et al. (2015), Duttweiler and Reff (2019), Chen et al. (2018), Reff and Rusnak (2012), including its symmetries and associated redundancy, and these properties, in turn, shed light on the essential structural properties of the system under study. By converting a geometric problem into an algebraic one, this approach makes the structure of the system amenable to detailed analysis, and its benefits are numerous. These benefits include computational aspects, since the spectrum of a square matrix can be computed with relatively little computational effort.

Moreover, the tools presented here may also be useful in the analysis of chemical reaction networks more generally—particularly in applications that involve complex sets of reactions, for example as encountered in some industrial processes, where redundancy may also be ubiquitous.