1 Introduction

Axial algebras are a new class of commutative non-associative algebras introduced by Hall, Rehren, and Shpectorov [5, 6]. Recently, Joshi [8] constructed a series of subalgebras of dimension \(n^2\) of the Matsuo algebras \(M_\eta (S_{2n})\), generated by single and double axes. In this work, we deal with the case of Matsuo algebras \(M_\eta (^-O_{n+1}^+ (3))\) corresponding to a class of reflections of the isometry group of a nondegenerate orthogonal space over \(\mathbb {F}_3\). In these Matsuo algebras, we construct a new series of subalgebras, also of dimension \(n^2\).

Let \(GO_{n+1}^+(3)\) denote the group of orthogonal transformations of a vector space V of dimension \(n+1\) over the finite field \(\mathbb {F}_3\), endowed with a nondegenerate symmetric bilinear form admitting an orthonormal basis \(\{e_0,e_1,\dots ,e_n\}\). That is, the vectors \(e_i\) are pairwise orthogonal and

$$(e_0,e_0)=(e_1,e_1)=\cdots =(e_n,e_n)=1.$$

Let C be the set of reflections:

$$\begin{aligned} r_u: v \mapsto v-2\frac{(u,v)}{(u,u)}u, \end{aligned}$$

where u is a vector with \((u,u)=-1\). Consider \(G=\langle C\rangle \le GO_{n+1}^+(3)\). Then, \((G,C)\) is a 3-transposition group denoted \({}^-O_{n+1}^+(3)\) (see [2]).

We note that \(r_u=r_{-u}\), so the elements of C are in a bijection with the set of 1-dimensional subspaces \(\langle u \rangle \) of V, where \((u,u)=-1\), and not with the vectors u themselves. Hence, we can identify the elements of C with such subspaces \(\langle u \rangle \).

Let \(\mathbb {K}\) be a field of characteristic not equal to 2 and let \(\eta \in \mathbb {K}\), \(\eta \ne 0,1\). Recall from [6] that the Matsuo algebra \(M_\eta (G,C)\) over \(\mathbb {K}\) corresponding to the 3-transposition group \((G,C)\) has C as its basis, with multiplication of basis elements \(a,b\in C\) defined by:

$$\begin{aligned} a \cdot b = {\left\{ \begin{array}{ll} a, &{}\text{ if } a=b;\\ 0, &{} \text{ if } a\not =b \text{ and } ab=ba;\\ \frac{\eta }{2}(a+b-c), &{}\text{ if } a\ne b \text{ and } a^b=b^a=:c. \end{array}\right. } \end{aligned}$$

The basis elements \(a\in C\) are the axes of the axial algebra \(M_\eta (G,C)\), and we refer to them below as single axes. Double axes are sums \(a+b\) of two orthogonal (\(ab=0\)) single axes. In the theorem below, we identify elements of C with the corresponding 1-spaces in the orthogonal space V (see the comment above).

Theorem 1

Let A be the subspace of the Matsuo algebra \(M_\eta (^-O_{n+1}^+ (3))\) spanned by the set of single axes \(S=\{\langle e_i+\epsilon e_j\rangle :1\le i<j\le n,\epsilon =\pm 1\}\) and the set of double axes \(D=\{\langle e_0+e_i\rangle + \langle e_0-e_i\rangle \mid 1\le i\le n\}\). Then, A is a primitive axial algebra of Monster type \((2\eta ,\eta )\) of dimension \(|S|+|D|=n(n-1)+n=n(n-1+1)=n^2\).

In Sect. 2, we provide the necessary background on axial algebras. It is divided into five subsections. First, we introduce the basics of axial algebras, and define the Matsuo algebra and the Frobenius form on it. Then, we discuss double axes and the fusion rules \(M(2\eta ,\eta )\) they satisfy. We conclude this section by defining the subalgebras of dimension \(n^2\) constructed in [8].

In Sect. 3, we prove our main result, Theorem 1, and show that the subalgebra A of dimension \(n^2\) in the Matsuo algebra \(M_\eta (^-O_{n+1}^+ (3))\) is not isomorphic to the subalgebra of \(M_\eta (S_{2n})\) constructed by Joshi [8].

The goal of Sect. 4 is to study the conditions under which A is simple. In Proposition 4.3, we show that no proper ideal of A can contain one of the generating axes in \(S \cup D\). By [9], this means that A is simple exactly when the Frobenius form on A has zero radical (equivalently the Gram matrix of the Frobenius form on A has non-zero determinant). We calculate, in GAP [4], the determinant as a polynomial in \(\eta \) and we find its roots for \(n\le 14\). Based on this, we put forward several precise conjectures describing the roots and their multiplicities for arbitrary n.

The author would like to express her gratitude to Sergey Shpectorov who provided help in preparation of this paper.

2 Background

In this paper, we consider non-associative algebras, which means algebras that are not necessarily associative.

2.1 Axial algebras

Suppose that A is a commutative algebra over a field \(\mathbb {F}\). For arbitrary \(a \in A\), we write \(ad_a\) for the adjoint map in End(A) that is given by \(ad_a: b \mapsto ab\). The eigenvalues, eigenvectors, and eigenspaces of a are the eigenvalues, eigenvectors, and eigenspaces of \(ad_a\), respectively. The element a is said to be diagonalisable if there exists a basis of A consisting of eigenvectors for \(ad_a\). For \(\lambda \in \mathbb {F}\), we write:

$$\begin{aligned} A_\lambda (a)=\{b\in A:ab=\lambda b\}, \end{aligned}$$

for the \(\lambda \)-eigenspace of a. This is trivial when \(\lambda \) is not an eigenvalue of a. For \({\mathcal {F}}\subseteq \mathbb {F}\):

$$\begin{aligned} A_{{\mathcal {F}}}(a)=\oplus _{\lambda \in {\mathcal {F}}}A_\lambda (a). \end{aligned}$$

Note that \(A_{\emptyset }(a)=0\).

Definition 2.1

A fusion law is a pair \(({\mathcal {F}},*)\) where \({\mathcal {F}}\subseteq \mathbb {F}\) and \(*\) is a symmetric map \(*:{\mathcal {F}}\times {\mathcal {F}}\rightarrow {\mathcal {P}}({\mathcal {F}})\).

Now, we give the definition of an \({\mathcal {F}}\)-axial algebra.

Definition 2.2

A diagonalisable idempotent \(0\ne a\in A\) is an \({\mathcal {F}}\)-axis if all of its eigenvalues lie in \({\mathcal {F}}\) and:

$$\begin{aligned} A_\lambda (a)A_\mu (a)\subseteq A_{\lambda *\mu }(a), \end{aligned}$$

for all \(\lambda ,\mu \in {\mathcal {F}}\). That is, every product bc of a \(\lambda \)-eigenvector b with a \(\mu \)-eigenvector c is a sum of some \(\nu \)-eigenvectors, for \(\nu \in \lambda *\mu \).

Let A be an algebra over \(\mathbb {F}\). Notice that if a is an idempotent, then 1 is an eigenvalue of \(ad_a\). Hence, we always assume that \(1\in {\mathcal {F}}\).
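
For example, if A is a commutative associative algebra, then any idempotent \(0\ne e\in A\) satisfies \(ad_e^2=ad_e\), so e is diagonalisable with eigenvalues in \(\{1,0\}\); moreover, \(A_1(e)A_1(e)\subseteq A_1(e)\), \(A_0(e)A_0(e)\subseteq A_0(e)\), and \(A_1(e)A_0(e)=0\). That is, e is an axis for the fusion law \(({\mathcal {F}},*)\) with \({\mathcal {F}}=\{1,0\}\), \(1*1=\{1\}\), \(0*0=\{0\}\), and \(1*0=\emptyset \).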

Definition 2.3

The algebra A is an \({\mathcal {F}}\)-axial algebra if it is generated by a set of \({\mathcal {F}}\)-axes.

Definition 2.4

An axis \(a\in A\) is primitive if \(A_1(a)=\mathbb {F}a\), that is, if its 1-eigenspace is 1-dimensional. The algebra A is called primitive if it is generated by primitive axes.

The following is a well-known example.

Example 2.5

The Griess algebra over \(\mathbb {R}\) has dimension 196,884. It is a primitive axial algebra, and its fusion rules are given in the table in Fig. 1.

Fig. 1 Fusion rules \({\mathcal {M}}\) of the Griess algebra

2.2 Matsuo algebras

A point-line geometry is a pair \(({\mathcal {P}},\mathcal {L})\) consisting of a set of points, \({\mathcal {P}}\), and a set of lines, \(\mathcal {L}\). We will assume that every line is a set of points, that is, \(\mathcal {L} \subseteq 2^{{\mathcal {P}}}\), and that every line has size at least 2. A partial linear space is a point-line geometry where any two distinct lines intersect in at most one point. A partial triple system is a partial linear space \(({\mathcal {P}}, \mathcal {L})\) where every line consists of exactly three points. For any two collinear points a and b in a partial triple system, there exists a unique line through a and b. This line consists of a, b, and a third point \(c=a\wedge b\). (We borrow this notation from [3].)

We will write \(a\sim b\) to indicate that a and b are collinear. If a and b are non-collinear, that is, there is no line through a and b, then we write \(a\not \sim b\). We denote by \(a^\sim \) the set of all points collinear to a (this excludes a). The complement of \(\{a\}\cup a^\sim \) is the set \(a^{\not \sim }\) of all points that are not collinear with a.

A non-empty set of points \(\mathcal {P'}\subseteq {\mathcal {P}}\) is a subspace of \(({\mathcal {P}}, \mathcal {L})\) if any line \(L \in \mathcal {L}\) containing two points from \(\mathcal {P'}\) is fully contained in it. The subspace \(\mathcal {P'}\) can be viewed as a geometry if we endow it with the set of lines \(\mathcal {L'}=\{L\in \mathcal {L}:L\subseteq \mathcal {P'}\}\). Clearly, any non-empty intersection of subspaces is again a subspace. This allows us to define the subspace \(\langle X\rangle \) generated by a set of points X: it is the unique smallest subspace containing X. Subspaces generated by three points not contained in a single line are called planes. It is easy to see that a subspace generated by a pair of intersecting lines is a plane.

Definition 2.6

A Fischer space is a partial triple system where any plane generated by two intersecting lines is isomorphic to the dual affine plane of order 2, denoted by \({\mathcal {P}}_2^{\vee }\) (see Fig. 2), or to the affine plane of order 3, denoted by \({\mathcal {P}}_3\).

Fig. 2 The affine plane \({\mathcal {P}}_3\) and the dual affine plane \({\mathcal {P}}_2^{\vee }\)

Given a 3-transposition group \((G,C)\) (see [1]), we define a geometry \(\mathcal {G}= ({\mathcal {P}}, \mathcal {L})\) by setting \({\mathcal {P}}=C\) and:

$$\begin{aligned} \mathcal {L}=\{\{a,b,c\}\subset C:|ab|=3 \text{ and } c=a^b=b^a\}. \end{aligned}$$

This geometry \(\mathcal {G}\) is a Fischer space and, conversely, every Fischer space can be obtained in this way from a 3-transposition group.

Definition 2.7

Let \(\mathbb {F}\) be a field with \(\text {char}\,\mathbb {F}\not =2\) and \(\eta \in \mathbb {F}\), \(\eta \not =0,1\). For a Fischer space \(\mathcal {G}=({\mathcal {P}},\mathcal {L})\), define the Matsuo algebra \(M=M_\eta (\mathcal {G})\) as the algebra over \(\mathbb {F}\) whose basis is the set of points \({\mathcal {P}}\) and multiplication on the basis is given by:

$$\begin{aligned} a \cdot b = {\left\{ \begin{array}{ll} a &{}\text{ if } a=b, \\ 0 &{} \text{ if } a \not \sim b,\\ \frac{\eta }{2} (a+b-a \wedge b) &{}\text{ if } a \sim b. \end{array}\right. } \end{aligned}$$

Note that the elements \(a \in {\mathcal {P}}\) are idempotent and, in fact, they are the axes of this algebra; namely, it is shown in [6] that a satisfies the fusion law of Jordan type \(\eta \), as in Fig. 3.

Fig. 3 Fusion rules of Jordan type \(\eta \)

That is, Matsuo algebras belong to the class of axial algebras of Jordan type \(\eta \) introduced by Hall, Rehren, and Shpectorov in [6]. In addition to Matsuo algebras, the class of algebras of Jordan type contains all Jordan algebras generated by idempotents. This is because each idempotent in a Jordan algebra satisfies the Peirce decomposition, which is exactly the fusion law of Jordan type \(\frac{1}{2}\). Hence, Jordan algebras generated by idempotents are axial algebras of Jordan type \(\frac{1}{2}\).
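
For example, take the 3-transposition group \((S_3,C)\), where C is the set of the three transpositions of \(S_3\). Any two distinct transpositions are collinear, and the third point of their line is the remaining transposition, so \(M_\eta (S_3)\) is 3-dimensional with multiplication:

$$\begin{aligned} (1,2)\cdot (1,3)=\frac{\eta }{2}\big ((1,2)+(1,3)-(2,3)\big ), \end{aligned}$$

and similarly for the other two pairs of transpositions; this algebra is sometimes denoted \(3\mathrm {C}(\eta )\) in the literature on axial algebras of Jordan type.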

2.3 Frobenius form

Definition 2.8

A Frobenius form on an axial algebra A is a non-zero symmetric bilinear form that associates with the algebra product:

$$\begin{aligned} (uv,w)=(u,vw) \end{aligned}$$

for all \(u,v,w\in A\).

According to [6], the Matsuo algebra \(M_\eta (\mathcal G)\) admits the Frobenius form given by:

$$\begin{aligned} (a,b)={\left\{ \begin{array}{ll} 1 &{}\text{ if } a=b, \\ 0 &{} \text{ if } a \not \sim b,\\ \frac{\eta }{2} &{}\text{ if } a \sim b. \end{array}\right. } \end{aligned}$$

We note that any axial subalgebra A of \(M_\eta (\mathcal G)\) inherits a Frobenius form, provided that the restriction of the form to A is non-zero.
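
For instance, the associativity of this form with the Matsuo product can be checked directly on a line \(\{a,b,c\}\) of \(\mathcal G\):

$$\begin{aligned} (a\cdot b,c)&=\frac{\eta }{2}\left( \frac{\eta }{2}+\frac{\eta }{2}-1\right) =\frac{\eta }{2}(\eta -1)=(a,b\cdot c),\\ (a\cdot a,b)&=(a,b)=\frac{\eta }{2}=\frac{\eta }{2}\left( 1+\frac{\eta }{2}-\frac{\eta }{2}\right) =(a,a\cdot b), \end{aligned}$$

with the remaining cases (non-collinear arguments) being immediate.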

2.4 Double axes

Here, we are focussing on the case \(\eta \ne \frac{1}{2}\), so that \(2\eta \not =1\). We define double axes as follows.

Definition 2.9

Consider a Matsuo algebra \(M=M_\eta (G,C)\), where \((G,C)\) is a group of 3-transpositions. Let a, b be any two Matsuo axes such that \(a\cdot b=0\). Then, \(x=a+b\) will be called a double axis.

Note that axes a and b satisfying \(a\cdot b=0\) are called orthogonal.

It is easy to see that a double axis is an idempotent: \(x^2=(a+b)^2=a^2+ab+ba+b^2=a+0+0+b= a+b=x\). For a double axis \(x=a+b\), we define \(M_{\alpha \beta }(a,b)\) as:

$$\begin{aligned} M_{\alpha \beta }(a,b)= M_\alpha (a) \cap M_\beta (b). \end{aligned}$$

Note that \(x=a+b\) acts on \(M_{\alpha \beta }(a,b)\) as the scalar \(\alpha +\beta \); indeed, for \(m\in M_{\alpha \beta }(a,b)\), we have \(xm=am+bm=(\alpha +\beta )m\).

Theorem 2.10

([8]) Suppose \(a,b\in M=M_\eta (G,C)\) are two axes with \(ab=0\). Then, \(x=a+b\) is an axis satisfying the fusion law \(M(2\eta ,\eta )\). Furthermore:

$$\begin{aligned} M_0(x)&=M_{00}(a,b);\\ M_1(x)&=M_{10}(a,b)+M_{01}(a,b)=\langle a,b \rangle ;\\ M_{2\eta }(x)&=M_{\eta \eta }(a,b);\\ M_\eta (x)&=M_{0\eta }(a,b)+M_{\eta 0}(a,b). \end{aligned}$$

The fusion table in this case is given in Fig. 4.

Fig. 4 Fusion rules satisfied by a double axis

Remarks

  • Note that the fusion rules \(J(\eta )\) satisfied by every single axis \(a\in M\) are obtained by dropping a row and a column from \(M(2\eta ,\eta )\). This corresponds simply to the \(2\eta \)-eigenspace being zero.

  • Subalgebras of M generated by single axes are Matsuo algebras.

  • Double axes are not primitive in M; namely, \(M_1(x)=\langle a,b \rangle \) is 2-dimensional. We therefore ask whether M contains subalgebras in which the generating double axes are primitive.

2.5 The \(n^2\)-subalgebra of \(M_\eta (S_{2n})\)

Here, we introduce the subalgebra of dimension \(n^2\) in the Matsuo algebra \(M_\eta (S_{2n})\), which was constructed in [8]. Recall that the generating axes of \(M_\eta (S_{2n})\) are the transpositions of \(S_{2n}\).

Theorem 2.11

(Theorem 4.1 of [8]) The fixed subalgebra in \(M_\eta (S_{2n})\) of the flip \((1,2)(3,4)\ldots (2n-1,2n)\) contains n single axes and \(n(n-1)\) double axes, which form a basis of this subalgebra. In particular, it has dimension \(n^2\).

The single axes are:

$$\begin{aligned} (2i-1,2i),\quad i=1,\dots ,n, \end{aligned}$$

and the double axes are:

$$\begin{aligned} (2i-1,2j-1)+(2i,2j),\\ (2i-1,2j)+(2j-1,2i), \end{aligned}$$

where \(1\le i<j \le n\).
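
The shape of this basis is easy to verify by a direct computation. The following GAP fragment is a minimal sketch written for illustration (it is not code from [8]; the helper function t and the variable names are ad hoc): it checks, for a small value of n, that the listed single axes are fixed by the flip, that the flip swaps the two transpositions inside each double axis, and that the total number of axes is \(n^2\).

n := 3;;
t := function(a, b) return MappingPermListList([a, b], [b, a]); end;;  # the transposition (a,b)
flip := Product(List([1..n], i -> t(2*i-1, 2*i)));;                    # (1,2)(3,4)...(2n-1,2n)
singles := List([1..n], i -> t(2*i-1, 2*i));;
pairs := Combinations([1..n], 2);;
doubles := Concatenation(
  List(pairs, p -> [t(2*p[1]-1, 2*p[2]-1), t(2*p[1], 2*p[2])]),
  List(pairs, p -> [t(2*p[1]-1, 2*p[2]), t(2*p[2]-1, 2*p[1])]));;
ForAll(singles, s -> s^flip = s);                   # expected: true
ForAll(doubles, d -> Set(d, s -> s^flip) = Set(d)); # expected: true
Length(singles) + Length(doubles) = n^2;            # expected: true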

In this paper, we construct a similar subalgebra of dimension \(n^2\) in the Matsuo algebra \(M_\eta (^-O_{n+1}^+(3))\). However, our subalgebra contains \(n(n-1)\) single axes and n double axes.

3 The new \(n^2\)-algebra

Recall that \(GO_{n+1}^+ (3)\) is the group of all orthogonal transformations of a vector space V of dimension \(n+1\) over the finite field \(\mathbb {F}_3=\{-1,0,1\}\) with an orthonormal basis \(B=\{e_0,e_1,\dots ,e_n\}\). Consider the set C of all reflections with respect to vectors u with \((u,u)=-1\). Let \(G=\langle C\rangle \le GO_{n+1}^+(3)\). We will see below that \((G,C)\) is a 3-transposition group, and it is denoted \({}^-O_{n+1}^+(3)\). Let \(M=M_\eta (G,C)\) be the corresponding Matsuo algebra. In this section, we construct a subalgebra of M of dimension \(n^2\), generated by single and double axes.

Recall that a reflection in a nonsingular vector u (i.e., u satisfies \((u,u)\ne 0\)) is given by:

$$\begin{aligned} r_u : v \mapsto v-2\frac{(v,u)}{(u,u)}u. \end{aligned}$$

Remarks

Since \((u,u)=-1\) and \(2=-1\) in \(\mathbb {F}_3\), in this case, \(r_u:v\mapsto v-(v,u)u\).
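
This simplified formula can also be checked on a computer. The following GAP fragment is a small sketch (not code from the paper; the vector u is an arbitrary example): it realizes \(r_u\) as a matrix acting on row vectors and confirms that it is an involution preserving the form.

F := GF(3);;  n := 3;;                                 # so that dim V = n + 1 = 4
u := [1, 1, 0, 0] * One(F);;                           # (u,u) = 2 = -1 in F_3
r := IdentityMat(n+1, F) - TransposedMat([u]) * [u];;  # the map v -> v - (v,u)u
r * r = IdentityMat(n+1, F);                           # expected: true
r * TransposedMat(r) = IdentityMat(n+1, F);            # r is orthogonal; expected: true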

Lemma 3.1

For every \(\alpha \in GO_{n+1}^+(3)\), \(r_u^\alpha = r_{u^\alpha }\).

Proof

Let \(v\in V\). Then:

$$\begin{aligned} v^{r_u^\alpha }&=v^{\alpha ^{-1} r_u \alpha }=((v^{\alpha ^{-1}})^{r_u})^\alpha \\&=(v^{\alpha ^{-1}}-2\frac{(v^{\alpha ^{-1}},u)}{(u,u)} u)^\alpha \\&=v-2\frac{(v,u^\alpha )}{(u^\alpha , u^\alpha )}u^\alpha \\&=v^{r_{u^\alpha }}. \end{aligned}$$

Here, we use that \(\alpha \) preserves the bilinear form, so that \((v^{\alpha ^{-1}},u)=(v,u^\alpha )\) and \((u,u)=(u^\alpha ,u^\alpha )\). Hence, we obtain that \( r_u^{\alpha }=r_{u^\alpha }\). \(\square \)

Note that \(r_u=r_v\) if and only if \(v=\pm u\). Indeed, it is easy to see that \(r_u=r_{\alpha u}\) for \(0\ne \alpha \in \mathbb {F}_3\). Conversely, if \(r_u=r_v\), then \(-v=v^{r_v}=v^{r_u}=v-2\frac{(v,u)}{(u,u)}u\), which immediately implies that u is a multiple of v.

Proposition 3.2

Suppose \(u,v\in V\) with \((u,u)=-1=(v,v)\) and suppose that u and v are independent; that is, \(u\ne \pm v\). Then, \(|r_ur_v|=2\) if \((u,v)=0\) and \(|r_ur_v|=3\) if \((u,v)=\pm 1\).

Proof

If \((u,v)=0\), then \(u^{r_v}=u\), and so, by Lemma 3.1, \(r_u^{r_v}=r_u\). This means that \((r_ur_v)^2=1\), and so, \(|r_ur_v|=2\). Now, suppose that \((u,v)\ne 0\). Substituting \(-v\) for v if necessary, we may assume that \((u,v)=-1\). Then, \(u^{r_v}=u+v=v^{r_u}\). Therefore, \(r_u^{r_v}=r_{u+v}= r_v^{r_u}\). In particular, \(r_u^{r_v}=r_v^{r_u}\), which means that \((r_ur_v)^3=1\), and so, \(|r_ur_v|=3\). \(\square \)

This proposition shows that the class C of reflections \(r_u\) with \((u,u)=-1\) is a class of 3-transpositions and so \({}^-O_{n+1}^+(3)=(G,C)\), where \(G=\langle C\rangle \), is a 3-transposition group, as claimed in the introduction and at the beginning of this section. We recall from the introduction that we identify the element \(r_u\in C\) with the one-dimensional subspace \(\langle u\rangle \) as both u and \(-u\) define the same element \(r_u=r_{-u}\) of C.
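
Proposition 3.2 can likewise be confirmed numerically. In the following GAP sketch (again an illustration with arbitrarily chosen vectors, not code from the paper), refl(u) is the matrix of \(r_u\), and the orders of the two products agree with the proposition.

F := GF(3);;  n := 3;;
refl := u -> IdentityMat(n+1, F) - TransposedMat([u]) * [u];;  # the matrix of r_u
u := [0, 1, 1, 0] * One(F);;                                   # (u,u) = -1
v := [0, 0, 1, 1] * One(F);;                                   # (v,v) = -1, (u,v) = 1
w := [1, 0, 0, 1] * One(F);;                                   # (w,w) = -1, (u,w) = 0
Order(refl(u) * refl(v));                                      # expected: 3
Order(refl(u) * refl(w));                                      # expected: 2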

If \(u,v \in V\) with \((u,u)=-1=(v,v)\), then:

$$\begin{aligned} \langle u \rangle \cdot \langle v \rangle ={\left\{ \begin{array}{ll} \langle u \rangle &{}\text{ if } u=\pm v,\\ 0 &{} \text{ if } (u,v)=0,\\ \frac{\eta }{2}(\langle u \rangle +\langle v \rangle -\langle v-(u,v)u\rangle ) &{}\text{ if } (u,v)=\pm 1. \end{array}\right. } \end{aligned}$$
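
For instance, with \(u=e_1+e_2\) and \(v=e_1+e_3\), we have \((u,v)=1\) and \(v-(u,v)u=e_3-e_2\), so:

$$\begin{aligned} \langle e_1+e_2\rangle \cdot \langle e_1+e_3\rangle =\frac{\eta }{2}\big (\langle e_1+e_2\rangle +\langle e_1+e_3\rangle -\langle e_2-e_3\rangle \big ). \end{aligned}$$

Computations of this form are used repeatedly in the proof below.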

We will now prove the main result of the paper, Theorem 1. Recall that \(\eta \in \mathbb {F}\) and \(\eta \notin \{1,0,\frac{1}{2}\}\).

Proof

To show that A is a subalgebra, we need to check that A is closed under multiplication. We establish this by looking through the possible cases of pairs of axes \(a,b\in S\cup D\) and showing in each case that \(ab\in A\). Note that every axis, single or double, is an idempotent, so we just need to consider pairs of distinct axes: \(a\ne b\).

Let us start with two single axes: \(a=\langle e_i+\epsilon e_j\rangle \) and \(b=\langle e_{i'}+\epsilon ' e_{j'}\rangle \). Then, \(|\{i,j\}\cap \{i',j'\}|\) is 0, 1 or 2. If \(\{i,j\}\) and \(\{i',j'\}\) are disjoint, then, clearly, \(ab=0\) since \((e_i+\epsilon e_j,e_{i'}+\epsilon ' e_{j'})=0\). If \(|\{i,j\} \cap \{i',j'\}|=1\), then, without loss of generality, \(i=i'\), that is, \(b=\langle e_i+\epsilon ' e_{j'}\rangle \). In this case, \(ab= \frac{\eta }{2}(a+b-c)\), where \(c=b^{r_a}=\langle -\epsilon e_j+\epsilon ' e_{j'}\rangle \). Manifestly, \(c\in S\), and so, \(ab\in A\). Finally, suppose that \(|\{i,j\}\cap \{i',j'\}|=2\). Then, without loss of generality, \(a=\langle e_i+e_j\rangle \) and \(b=\langle e_i-e_j\rangle \). Here, \((e_i+e_j,e_i-e_j)=0\), and so again, as in the first case, \(ab=0\).

Next, assume that \(a=\langle e_i+\epsilon e_j\rangle \) is a single axis and \(b=\langle e_0+e_k\rangle +\langle e_0-e_k\rangle \) is a double axis. Here, we have two options: either \(k\not \in \{i,j\}\) or \(k\in \{i,j\}\) (say, \(k=i\)). In the first case, \(ab=0\), since \((e_i\pm e_j,e_0\pm e_k)=0\). If \(k=i\), then:

$$\begin{aligned} ab&=\langle e_i+\epsilon e_j\rangle (\langle e_0+e_i\rangle +\langle e_0-e_i\rangle )\\&=\frac{\eta }{2}(\langle e_i+\epsilon e_j\rangle +\langle e_0+e_i\rangle -\langle e_0-\epsilon e_j\rangle )\\&\quad +\frac{\eta }{2}(\langle e_i+\epsilon e_j\rangle +\langle e_0-e_i\rangle -\langle e_0+\epsilon e_j\rangle )\\&=\eta a+\frac{\eta }{2}b-\frac{\eta }{2}(\langle e_0+e_j\rangle +\langle e_0-e_j\rangle ).\\ \end{aligned}$$

Clearly, \(\langle e_0+e_j\rangle +\langle e_0-e_j\rangle \in D\), and so, \(ab\in A\).

Finally, let \(a=\langle e_0+e_i\rangle +\langle e_0-e_i\rangle \) and \(b=\langle e_0+e_j\rangle +\langle e_0-e_j\rangle \) be two double axes, \(i\ne j\). Then:

$$\begin{aligned} ab&=(\langle e_0+e_i\rangle +\langle e_0-e_i\rangle )(\langle e_0+e_j\rangle +\langle e_0-e_j\rangle )\\&=\frac{\eta }{2}(\langle e_0+e_i\rangle +\langle e_0+e_j\rangle -\langle e_i-e_j\rangle )\\&\quad +\frac{\eta }{2}(\langle e_0+e_i\rangle +\langle e_0-e_j\rangle -\langle e_i+e_j\rangle )\\&\quad +\frac{\eta }{2}(\langle e_0-e_i\rangle +\langle e_0+e_j\rangle -\langle e_i+e_j\rangle )\\&\quad +\frac{\eta }{2}(\langle e_0-e_i\rangle +\langle e_0-e_j\rangle -\langle e_i-e_j\rangle )\\&=\eta a+\eta b-\eta \langle e_i+e_j\rangle -\eta \langle e_i-e_j\rangle . \end{aligned}$$

Clearly, all summands here are in A, so in this final case, \(ab\in A\).

We have shown that A is a subalgebra. Manifestly, the vectors in \(S\cup D\) are linearly independent, and so, they form a basis of A. This yields the claim concerning the dimension of A.

It remains to show that the double axes \(x=\langle e_0+e_i\rangle +\langle e_0-e_i\rangle \) are primitive in A. Consider \(\sigma =r_{e_0}\). This involution fixes all single axes in S, and it switches the two single axes \(a=\langle e_0+e_i\rangle \) and \(b=\langle e_0-e_i\rangle \) in every \(x=a+b\in D\). Hence, \(S\cup D\) is contained in the fixed subalgebra \(M_\sigma \), which means that A is contained in \(M_\sigma \). Recall from Theorem 2.10 that \(M_1(x)=\langle a,b\rangle \). Within \(M_1(x)\), \(\sigma \) fixes \(a+b=x\) and inverts \(a-b\). Hence, \(A_1(x)=A\cap M_1(x)= \langle x\rangle \), and so, x is indeed primitive in A. \(\square \)

Clearly, this algebra is different, as an axial algebra, from the algebra constructed by Joshi, because the numbers of single and double axes do not match.

4 Frobenius form and simplicity of A

In this section, A is the \(n^2\)-dimensional subalgebra in \(M=M_\eta ({}^-O_{n+1}^+(3))\) that we constructed in the previous section. Here, we use the theory developed in [9] to investigate the following question: for which values of \(\eta \) is A a simple algebra?

4.1 Frobenius form on A

First of all, note that A inherits from M a Frobenius form (see Definition 2.8), that is, a bilinear form that associates with the algebra product. In this subsection, we compute the values of the Frobenius form on the basis \(S\cup D\) of A. For \(a\in S\cup D\), let the support \(\text {supp}(a)\) be defined as \(\{i,j\}\) if \(a=\langle e_i+\epsilon e_j\rangle \) is a single axis and as \(\{0,i\}\) if \(a=\langle e_0+e_i\rangle + \langle e_0-e_i\rangle \) is a double axis.

Proposition 4.1

Let \(a,b\in S\cup D\). Then:

  • If \(a=b\), then \((a,a)=1\) if \(a\in S\) and \((a,a)=2\) if \(a\in D\).

  • If \(a\ne b\), then \((a,b)=0\) if \(\text {supp}(a)\cap \text {supp}(b)=\emptyset \) or if \(\text {supp}(a)=\text {supp}(b)\) (in this case, both a and b are single axes).

  • If \(a\ne b\) and \(|\text {supp}(a)\cap \text {supp}(b)|=1\) then \((a,b)=\frac{\eta }{2}\) if \(a,b\in S\); \(\eta \) if \(a\in S\) and \(b\in D\) (or vice versa); and \(2\eta \) if \(a,b\in D\).

Proof

This follows immediately from the values of the Frobenius form on M, as given in Subsection 2.3. \(\square \)

4.2 Ideals in A

According to [9], the ideals of A containing axes from \(S\cup D\) are controlled by the projection graph on the set \(S\cup D\) of axes of A.

Definition 4.2

The projection graph of A is the graph on \(S\cup D\) where two vertices a and b are connected by an edge if \((a,b)\ne 0\).

Proposition 4.3

The algebra A has no proper non-zero ideals containing an axis from \(S\cup D\).

Proof

According to [9], it suffices to show that the projection graph is connected. By Proposition 4.1, we see that the single axis \(\langle e_i+\epsilon e_j\rangle \) is connected by edges to both double axes \(\langle e_0+e_i\rangle +\langle e_0-e_i\rangle \) and \(\langle e_0+e_j\rangle +\langle e_0-e_j\rangle \). Thus, all double axes and all single axes are contained in the same connected component of the projection graph. \(\square \)

4.3 Radical

We turn now to ideals of A that contain no axes from \(S\cup D\). All such ideals are contained in the radical of A, which is defined in [9] as the largest ideal not containing any of the generating axes of A. It is also shown in [9] that, in the presence of a Frobenius form having non-zero values (aa) on all generating axes a, the radical of A coincides with the radical:

$$\begin{aligned} A^\perp =\{u\in A:(u,v)=0 \quad \text{ for } \text{ all } \quad v\in A\} \end{aligned}$$

of the Frobenius form on A. This radical is non-zero if and only if the determinant of the Gram matrix of the Frobenius form is zero. Clearly, the determinant of the Gram matrix (written with respect to the basis \(S\cup D\) of A) is a polynomial in \(\eta \) of degree depending on n. In the next (and final) subsection, we compute this polynomial for \(n\le 14\), and based on this, we put forward exact conjectures concerning the values of \(\eta \) for which the radical is non-zero (and, hence, A is not simple).

4.4 Critical values of \(\eta \)

Here, we use GAP [4] to compute and factorize the determinant of the Gram matrix of the Frobenius form on A for small values of n. We conclude this section with some conjectures.
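
For the reader who wishes to reproduce these computations, the following GAP sketch (not the code used to produce Table 1; the encoding of the basis and the helper functions isdouble and form are ad hoc) rebuilds the Gram matrix directly from Proposition 4.1, representing each basis element by its support together with the sign \(\epsilon \) for single axes, and then factorizes the determinant.

n := 4;;
eta := Indeterminate(Rationals, "eta");;
pairs := Combinations([1..n], 2);;
singles := Concatenation(List(pairs, p -> [[p, 1], [p, -1]]));;  # <e_i + e_j> and <e_i - e_j>
doubles := List([1..n], i -> [[0, i], 0]);;                      # <e_0 + e_i> + <e_0 - e_i>
basis := Concatenation(singles, doubles);;
isdouble := x -> x[1][1] = 0;;
form := function(x, y)                                           # values from Proposition 4.1
  local k;
  if x = y then
    if isdouble(x) then return 2; else return 1; fi;
  fi;
  k := Size(Intersection(x[1], y[1]));
  if k <> 1 then
    return 0;
  elif isdouble(x) and isdouble(y) then
    return 2*eta;
  elif isdouble(x) or isdouble(y) then
    return eta;
  else
    return eta/2;
  fi;
end;;
G := List(basis, x -> List(basis, y -> form(x, y)));;
Factors(DeterminantMatDivFree(G));

For \(n=2\) and \(n=3\), this reproduces, up to a non-zero scalar factor, the factorizations discussed in the two examples below.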

Example 4.4

Let \(n=2\). Then, our \(2^2\)-dimensional algebra A has basis \(S\cup D\) consisting of the two single axes \(\{\langle e_1+e_2\rangle ,\langle e_1-e_2\rangle \}\) and the two double axes \(\{\langle e_0+e_1\rangle +\langle e_0-e_1\rangle ,\langle e_0+e_2\rangle +\langle e_0-e_2\rangle \}\).

The Gram matrix is given by:

$$\begin{aligned} G= \begin{pmatrix} 1 &{} 0&{}\eta &{}\eta \\ 0&{} 1&{}\eta &{}\eta \\ \eta &{}\eta &{}2&{}2\eta \\ \eta &{}\eta &{}2\eta &{}2 \end{pmatrix}. \end{aligned}$$

Using GAP, we calculate the determinant of G:

$$\begin{aligned} det(G)=2\eta ^3-3\eta ^2+1, \end{aligned}$$

and its roots:

$$\begin{aligned} \left[ 1^2, -\frac{1}{2}\right] . \end{aligned}$$

(Here and below, the exponent indicates the multiplicity of the root.)

Hence, \(A^\perp \ne 0\) if and only if \(\eta =1\) or \(\eta =-\frac{1}{2}\). For all other values of \(\eta \), the algebra A is simple.

Example 4.5

Let \(n=3\). Here, the single axes are: \(\{\langle e_1+e_2\rangle ,\langle e_1-e_2\rangle , \langle e_1+e_3\rangle ,\langle e_1-e_3\rangle ,\langle e_2+e_3\rangle ,\langle e_2-e_3\rangle \}\), and the double axes are: \(\{\langle e_0+e_1\rangle +\langle e_0-e_1\rangle , \langle e_0+e_2\rangle +\langle e_0-e_2\rangle ,\langle e_0+e_3\rangle +\langle e_0-e_3\rangle \}\).

This gives the Gram matrix:

$$\begin{aligned} G= \begin{pmatrix} 1 &{} 0&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\eta &{}\eta &{}0\\ 0&{} 1&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\eta &{}\eta &{}0\\ \frac{\eta }{2}&{}\frac{\eta }{2}&{}1&{}0&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\eta &{}0&{}\eta \\ \frac{\eta }{2}&{}\frac{\eta }{2}&{}0&{}1&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\eta &{}0&{}\eta \\ \frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}1&{}0&{}0&{}\eta &{}\eta \\ \frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}\frac{\eta }{2}&{}0&{}1&{}0&{}\eta &{}\eta \\ \eta &{}\eta &{}\eta &{}\eta &{}0&{}0&{}2&{}2\eta &{}2\eta \\ \eta &{}\eta &{}0&{}0&{}\eta &{}\eta &{}2\eta &{}2&{}2\eta \\ 0&{}0&{}\eta &{}\eta &{}\eta &{}\eta &{}2\eta &{}2\eta &{}2\\ \end{pmatrix}. \end{aligned}$$

Now, we calculate in GAP the determinant of G:

$$\begin{aligned} det(G)=16\eta ^3-12\eta ^2+1, \end{aligned}$$

and the roots:

$$\begin{aligned} \left[ \left( \frac{1}{2}\right) ^2, -\frac{1}{4}\right] . \end{aligned}$$

Hence, \(A^\perp \) is non-zero and A is not simple if and only if \(\eta =\frac{1}{2}\) or \(\eta =-\frac{1}{4}\).

Similar calculations were carried out for all \(n\le 14\); the results are summarized in Table 1.

Table 1 Critical values of \(\eta \)

The data in the table are very suggestive and they allow us to formulate several conjectures.

Conjecture 4.6

The determinant of the Gram matrix G is a polynomial of degree \(\frac{n(n+1)}{2}\), unless \(n=3\).

Conjecture 4.7

The multiplicity of root \(\eta =\frac{1}{2}\) is:

$$\begin{aligned} \frac{n(n-1)}{2}-1=\frac{n^2-n-2}{2}=\frac{(n+1)(n-2)}{2}. \end{aligned}$$

Conjecture 4.8

The determinant of the Gram matrix G has the root \(\eta =-\frac{1}{n-3}\) with multiplicity n, for \(n\ne 3\).

Conjecture 4.9

There is just one further root \(\eta =-\frac{1}{2(n-1)}\) and it is simple (has multiplicity 1).
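
We note that these conjectured multiplicities are consistent with Conjecture 4.6: for \(n\ne 3\), they add up to \(\frac{(n+1)(n-2)}{2}+n+1=\frac{n(n+1)}{2}\), which is the conjectured degree of the determinant.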

These conjectures describe the exact values of \(\eta \) for which A is not a simple algebra. We hope to address these conjectures in a future paper. It is also interesting to determine the dimension of the radical when it is non-zero.