1 Introduction and background

The Terwilliger algebra or subconstituent algebra is a finite-dimensional semi-simple matrix \(\mathbb {C}\)-algebra that is noncommutative in general. Since its introduction (see [21,22,23]), the Terwilliger algebra has become a rich area of research in the study of combinatorial objects such as graphs (e.g., [6,7,8]) and association schemes (e.g., [3, 14, 17, 19]). Independent of the notion of subconstituent algebra, Hora and Obata [10] introduced the quantum adjacency algebra of the graph based on a certain partition of the edge set. This partition is called the quantum decomposition of the adjacency matrix, and the quantum adjacency algebra is generated by the quantum components of the decomposition. They used this algebra to explore limiting spectral distributions of infinite sequences of “growing” graphs. The quantum components turned out to be elements of the Terwilliger algebra of the graph with respect to a fixed vertex.

To be able to describe our results, we recall some preliminary concepts (see [1, 2, 16, 21] for more thorough discussion).

Let X denote a nonempty finite set. Denote by \(\text {Mat}_{X}(\mathbb {C})\) the \(\mathbb {C}\)-algebra of \(|X| \times |X|\) matrices with complex entries whose rows and columns are indexed by X. The \(\mathbb {C}\)-vector space of column vectors whose coordinates are indexed by X is denoted by \(V = \mathbb {C}^{X}\). Observe that \(\text {Mat}_X(\mathbb {C})\) acts on V by left multiplication. The vector space V is called the standard module. We endow V with the Hermitian inner product given by \(\langle v, u \rangle = v^t \bar{u}\) for all \(v,u \in V\), where \(v^t\) denotes the transpose of v and \(\bar{u}\) denotes the complex conjugate of u.

Let \(\Gamma = (X, R)\) be a finite, undirected, simple connected graph with vertex set X and edge set R. The distance \(\partial (x,y)\) from x to y is the length of a shortest path from x to y. By the diameter of \(\Gamma \), we mean the scalar \(D ={\text {max}}\{\partial (x,y):x,y \in X\}\). If, for all integers \(h,i,j\ (0 \le h,i,j \le D)\) and for all \(x,y \in X\) with \(\partial (x,y)=h\), the number

$$\begin{aligned} p_{ij}^h&:=\bigg |\bigg \{z\in X:\partial (x,z)=i, \partial (y,z)=j\bigg \}\bigg | \end{aligned}$$
(2)

is independent of x and y, then \(\Gamma \) is said to be distance-regular. The scalars (2) are called the intersection numbers of \(\Gamma \). We abbreviate

$$\begin{aligned} b_i&:=p_{1,i+1}^i \quad (0\le i \le D-1),\\ c_i&:=p_{1,i-1}^i \quad (1\le i \le D). \end{aligned}$$

For convenience, \(c_0:=0\) and \(b_D:=0\). From here on, we assume that \(\Gamma \) is a distance-regular graph with diameter \(D \ge 1\).
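
For readers who wish to experiment with these definitions, the following minimal Python sketch (ours; the helper names are not from the literature) computes the distances and intersection numbers of the 6-cycle, a small distance-regular graph with \(D=3\). It yields \(b_0=2\), \(b_1=b_2=1\) and \(c_1=c_2=1\), \(c_3=2\).

```python
# Illustrative sketch: check distance-regularity of a small graph and read off
# its intersection numbers.  The graph here is the 6-cycle (diameter D = 3).
from collections import deque
from itertools import product

def all_pairs_distances(adj):
    """BFS distances between all pairs of vertices of a connected graph."""
    n = len(adj)
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[s][v] is None:
                    dist[s][v] = dist[s][u] + 1
                    queue.append(v)
    return dist

n = 6
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]     # adjacency lists of C_6
dist = all_pairs_distances(adj)
D = max(map(max, dist))                                  # diameter

# p^h_{ij} must be independent of the chosen pair (x, y) with d(x, y) = h.
p = {}
for h, i, j in product(range(D + 1), repeat=3):
    counts = {sum(1 for z in range(n) if dist[x][z] == i and dist[y][z] == j)
              for x in range(n) for y in range(n) if dist[x][y] == h}
    assert len(counts) == 1, "the graph is not distance-regular"
    p[h, i, j] = counts.pop()

b = [p[i, 1, i + 1] for i in range(D)]          # b_i = p^i_{1,i+1}
c = [p[i, 1, i - 1] for i in range(1, D + 1)]   # c_i = p^i_{1,i-1}
print("D =", D, " b =", b, " c =", c)
```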

We recall the Bose–Mesner algebra of \(\Gamma \). For each integer \(i\ (0\le i \le D)\), let \(A_i\) denote the matrix in \(\text {Mat}_{X}(\mathbb {C})\) with xy-entry given by

$$\begin{aligned} (A_i)_{xy}&= \left\{ \begin{array}{rl} 1 &{} \text {if } \partial (x,y)=i\\ 0 &{} \text {if } \partial (x,y)\ne i \end{array} \right. \quad (x,y \in X). \end{aligned}$$

The matrix \(A_i\) is called the ith distance matrix of \(\Gamma \). We abbreviate \(A:=A_1\) and refer to this as the adjacency matrix of \(\Gamma \). We observe

$$\begin{aligned} \sum ^{D}_{i=0} A_{i}&=J,\\ A_0&=I,\\ A_i^t&=A_i \qquad (0\le i \le D),\\ \overline{A_i}&=A_i \qquad (0 \le i \le D),\\ A_iA_j&=\sum _{h=0}^{D} p_{ij}^hA_h \quad (0\le i,j \le D), \end{aligned}$$

where I and J are the identity and the all-ones matrices in \(\text {Mat}_{X}(\mathbb {C})\), respectively. Since \(p_{ij}^h=p_{ji}^h\), it follows that \(A_iA_j=A_jA_i\). Note that \(\{A_i\}_{i=0}^{D}\) forms a basis for the commutative subalgebra M of \(\text {Mat}_{X}(\mathbb {C})\) known as the Bose–Mesner algebra of \(\Gamma \). The matrix A generates M by [1, p. 190]. Moreover, by [1, pp. 59, 64], M has a second basis \(\left\{ E_i \right\} _{i=0}^D\) called primitive idempotents of \(\Gamma \) such that

$$\begin{aligned} \sum _{i=0}^{D} E_{i}&=I,\\ E_0&=|X|^{-1}J,\\ E_i^t&=E_i \qquad (0\le i \le D),\\ \overline{E_i}&=E_i \qquad (0\le i \le D),\\ E_iE_j&=\delta _{ij}E_i \quad (0\le i,j \le D). \end{aligned}$$

Since \(\left\{ E_i \right\} _{i=0}^D\) forms a basis for M, there are scalars \(\theta _0, \ldots , \theta _D\) such that \(A = \sum _{i=0}^{D} \theta _i E_i\). Observe \(AE_i=E_iA=\theta _i E_i\) for each integer \(i\ (0 \le i \le D)\). By [1, p. 97], the scalars \(\left\{ \theta _i \right\} _{i=0}^D\) are real. Since A generates M, the scalars \(\left\{ \theta _i \right\} _{i=0}^D\) are pairwise distinct. For each integer \(i\ (0 \le i \le D)\), \(\theta _i\) is the eigenvalue of \(\Gamma \) associated with \(E_i\). Then, V decomposes into

$$\begin{aligned} V= E_0V + E_1V + \cdots + E_DV \ \text {(orthogonal direct sum)}. \end{aligned}$$

For each integer \(i\ (0 \le i \le D)\), \(E_i V\) is the eigenspace of A associated to eigenvalue \(\theta _i\).

We recall the dual Bose–Mesner algebra of \(\Gamma \). Fix a vertex \(x\in X\) and call it base vertex. For each integer \(i\ (0\le i \le D)\), let \(E_i^*=E_i^*(x)\) denote the diagonal matrix in \(\text {Mat}_X(\mathbb {C})\) with yy-entry given by

$$\begin{aligned} (E_i^*)_{yy}&= \left\{ \begin{array}{rl} 1 &{} \text {if } \partial (x,y)=i\\ 0 &{} \text {if } \partial (x,y)\ne i \end{array} \right. \quad (y\in X). \end{aligned}$$

We call \(E_i^*\) the ith dual primitive idempotent of \(\Gamma \) with respect to the base vertex x. For convenience, we define \(E_i^*=0\) whenever \(i<0\) or \(i>D\). Observe that

$$\begin{aligned} \sum _{i=0}^{D} E^{*}_{i}&=I,\\ E_i^{*t}&=E_i^{*} \qquad (0\le i \le D),\\ \overline{E^{*}_i}&=E^{*}_i \qquad (0\le i \le D),\\ E_i^{*}E_j^{*}&=\delta _{ij}E_i^{*} \quad (0\le i,j \le D). \end{aligned}$$

Note that \(\left\{ E_i^* \right\} _{i=0}^D\) is linearly independent and forms a basis for a commutative subalgebra \(M^*=M^*(x)\) of \(\text {Mat}_X(\mathbb {C})\) known as the dual Bose–Mesner algebra of \(\Gamma \) with respect to x.

We recall the Terwilliger algebra of \(\Gamma \) and its irreducible modules. Let \(T=T(x)\) denote the subalgebra of \(\text {Mat}_X(\mathbb {C})\) generated by M and \(M^*\). We call T the Terwilliger algebra of \(\Gamma \) with respect to x. Since M is generated by A and \(M^*\) is generated by \(\left\{ E_i^* \right\} _{i=0}^D\), it follows that T is generated by A and \(\{E_i^*\}_{i=0}^D\). Suppose for a moment that W is a subspace of V. For each \(B\in \text {Mat}_X(\mathbb {C})\), we define

$$BW = \left\{ Bw\ :\ w\in W \right\} \subseteq V.$$

We say W is B-invariant whenever \(BW\subseteq W\). If W is B-invariant for all \(B \in T\), then W is called a T-module. A T-module W is said to be irreducible if \(W \ne 0\) and the only T-modules contained in W are 0 and W. If W is a T-module, then its orthogonal complement \(W^{\perp }:=\{v \in V\ :\ \langle v, w \rangle =0\ \forall w \in W \}\) is also a T-module. In fact, if W is a T-module containing another T-module \(W^{\prime }\), then \(W^{\prime \perp }\cap W\) is also a T-module and \(W=W^{\prime } \oplus \left( W^{\prime \perp }\cap W\right) \). Consequently, any nonzero T-module (e.g., the standard module V) is an orthogonal direct sum of irreducible T-modules. Now, let W denote an irreducible T-module. Define \(W_s:=\{i\ :\ 0 \le i \le D,\ E_{i}^{*}W \ne 0\}\). We call \(|W_s|-1\) and \(\text {min}\left( W_s\right) \) the diameter and endpoint of W, respectively. On the other hand, define \(W_{s^{\prime }}:=\{i\ :\ 0 \le i \le D,\ E_{i}W \ne 0\}\). We call \(|W_{s^{\prime }}|-1\) and \(\text {min}\left( W_{s^\prime } \right) \) the dual-diameter and dual-endpoint of W, respectively. We say W is thin (resp. dual-thin) whenever \({\text {dim}}(E_i^*W)\le 1\) (resp. \({\text {dim}}(E_iW) \le 1\)) for all integers \(i\ (0 \le i \le D)\). Let W and \(W^\prime \) denote T-modules. By a T-module isomorphism from W to \(W^\prime \), we mean a vector space isomorphism \(\sigma : W \rightarrow W^\prime \) such that \((\sigma B - B \sigma )W=0\) for all \(B \in T\). If such an isomorphism exists, then W and \(W^\prime \) are said to be isomorphic T-modules.

Now, we recall the quantum adjacency algebra of \(\Gamma \). To describe this algebra, we define the matrices \(L=L(x)\), \(F=F(x)\), and \(R=R(x)\) by

$$\begin{aligned} L= \displaystyle \sum _{i=0}^{D} E_{i-1}^* AE_{i}^*,\quad F=\displaystyle \sum _{i=0}^{D} E_{i}^* AE_{i}^*,\quad R=\displaystyle \sum _{i=0}^{D} E_{i+1}^* AE_{i}^*. \end{aligned}$$

We call L, F, and R the lowering matrix, flat matrix, and raising matrix, respectively. Observe that \(L, F, R \in T\) since A and \(\left\{ E_i^*\right\} _{i=0}^D\) are generators of T. Let \(Q=Q(x)\) denote the subalgebra of T generated by L, F, and R. We call Q the quantum adjacency algebra of \(\Gamma \) with respect to x. Since \(E_j^*AE_k^* = 0\) if \(|j-k|>1\), we have

$$\begin{aligned} A&=\bigg ( \sum _{i=0}^{D} E^{*}_{i} \bigg ) A \bigg ( \sum _{j=0}^{D} E^{*}_{j} \bigg )\\ &=\sum _{i=0}^{D} E^{*}_{i-1} A E^{*}_{i} + \sum _{i=0}^{D} E^{*}_{i} A E^{*}_{i} + \sum _{i=0}^{D} E^{*}_{i+1} A E^{*}_{i} \\ &=L + F + R. \end{aligned}$$
(3)

We call (3) the quantum decomposition of the adjacency matrix A with respect to x. Observe that

$$\begin{aligned} \overline{L}=L,\quad \overline{F}=F,\quad \overline{R}=R,\quad F^t=F,\quad R^t=L. \end{aligned}$$
(4)

Hence, Q is closed under the conjugate-transpose map and is semi-simple. Moreover,

$$\begin{aligned} LE_{i}^* V \subseteq E_{i-1}^*V,\quad FE_{i}^* V \subseteq E_{i}^* V, \text { and } RE_{i}^* V \subseteq E_{i+1}^* V. \end{aligned}$$
(5)
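
The following short Python sketch (ours, assuming numpy; the function names are not standard) makes the construction concrete: it builds the dual idempotents and the matrices L, F, R for a graph from its adjacency matrix and a base vertex, and checks (3), part of (4), and the inclusions (5) on the 6-cycle.

```python
# Minimal sketch of the quantum decomposition (3) for a graph given by its
# adjacency matrix A and a base vertex x.
import numpy as np
from collections import deque

def quantum_components(A, x):
    """Return the dual idempotents E*_i and the matrices L, F, R w.r.t. x."""
    n = A.shape[0]
    dist = np.full(n, -1)
    dist[x] = 0
    queue = deque([x])
    while queue:                                   # BFS from the base vertex
        u = queue.popleft()
        for v in np.nonzero(A[u])[0]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                queue.append(v)
    D = dist.max()
    Estar = [np.diag((dist == i).astype(float)) for i in range(D + 1)]
    L = sum(Estar[i - 1] @ A @ Estar[i] for i in range(1, D + 1))
    F = sum(Estar[i] @ A @ Estar[i] for i in range(D + 1))
    R = sum(Estar[i + 1] @ A @ Estar[i] for i in range(D))
    return Estar, L, F, R

# Example: the 6-cycle with base vertex 0.
n = 6
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0

Estar, L, F, R = quantum_components(A, 0)
assert np.allclose(L + F + R, A)                        # the decomposition (3)
assert np.allclose(F, F.T) and np.allclose(R.T, L)      # part of (4)
for i in range(1, len(Estar)):                          # L E*_i V lies in E*_{i-1} V, cf. (5)
    assert np.allclose(Estar[i - 1] @ (L @ Estar[i]), L @ Estar[i])
```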

We define Q-modules, irreducible Q-modules, and Q-module isomorphisms analogously to T-modules, irreducible T-modules, and T-module isomorphisms, respectively. Observe that every T-module becomes a Q-module by restricting the action of T to Q.

We compare our results with previous work on Terwilliger algebras of particular distance-regular graphs. The theory of Terwilliger algebras has been most useful when applied to Q-polynomial distance-regular graphs (see [2, p. 135] for the definition). Hamming graphs and Doob graphs are common examples. Go [8] described the irreducible modules of the Terwilliger algebras of binary Hamming graphs. She showed implicitly that the Terwilliger algebras in this case are homomorphic images of the universal enveloping algebra \(U(\mathfrak {sl}_2)\) of the complex Lie algebra \(\mathfrak {sl}_2\). Terwilliger [20] considered general Hamming graphs and showed that Terwilliger algebras of Hamming graphs also contain homomorphic images of \(U(\mathfrak {sl}_2)\) (cf. [14]). Generalizing these results, Morales and Pascasio [18] considered a Lie algebra, defined in terms of generators and relations, that contains subalgebras isomorphic to \(\mathfrak {sl}_2\). This Lie algebra is known as the tetrahedron algebra \(\boxtimes \). They showed that the Terwilliger algebra of any Hamming graph or Doob graph is generated by its center and a homomorphic image of the universal enveloping algebra \(U(\boxtimes )\). In this paper, we consider the special orthogonal Lie algebra \(\mathfrak {so}_4\) and show that the quantum adjacency algebra of any Doob graph is generated by its center and a homomorphic image of the universal enveloping algebra \(U(\mathfrak {so}_4)\). To do this, we exploit the work of Tanabe [19] on irreducible modules of the Terwilliger algebras of Doob graphs.

The paper is organized as follows: In Sect. 2, we review important properties of Doob graphs and their Terwilliger algebras along with Tanabe’s description of irreducible modules. In Sect. 3, we prove several relations among lowering, raising, and flat matrices of Doob graphs. We obtain these relations by restricting quantum components to an arbitrary irreducible module. In Sect. 4, we recall the classical Lie algebra \(\mathfrak {so}_4\). We also review the representation theory of finite-dimensional \(\mathfrak {so}_4\)-modules from the point of view of highest weight theory. In Sect. 5, we display an action of \(\mathfrak {so}_4\) on the standard module for Doob graphs and prove other main results.

2 Doob graphs and their Terwilliger algebras

Let \(\Gamma = (X,R)\) and \(\Gamma ^\prime = (X^\prime , R^\prime )\) be finite, undirected, simple connected graphs. The Cartesian product \(\Gamma \square \Gamma ^\prime \) is the graph on vertex set \(X \times X^\prime \) such that \((a,a^\prime )\) and \((b,b^\prime )\) are adjacent if and only if either \(ab \in R\) and \(a^\prime = b^\prime \) or \(a=b\) and \(a^\prime b^\prime \in R^\prime \). Let d denote a positive integer. We write \(\Gamma ^{\square d}\) instead of \(\Gamma \square \cdots \square \Gamma \) (d copies). A graph is complete if every pair of distinct vertices is adjacent. For each integer \(q \ge 3\), let \(K_q\) denote the complete graph on q vertices. By the Hamming graph H(d, q), we mean the graph \(K_q^{\square d}\), which is distance-regular with diameter d and has intersection numbers

$$\begin{aligned} b_i&= (q-1)(d-i),\\ c_i&= i, \end{aligned}$$

for integers \(i\ (0 \le i \le d)\) (see [2, p. 261]). The eigenvalues of H(d, q) are \(\{q(d-i)-d\ :\ 0 \le i \le d \}\). On the other hand, the Shrikhande graph S has vertex set consisting of all cyclic permutations of the codewords 000000, 110000, 010111, and 011011, where two vertices are adjacent if and only if they differ in exactly two coordinates. The graph S is distance-regular and has the same intersection numbers as H(2, 4).
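
As a quick sanity check of this description (our sketch, not part of the original development), the Python code below builds S from the cyclic shifts of the four codewords and compares its intersection numbers with those of H(2, 4), namely \(b_0=6\), \(b_1=3\), \(c_1=1\), \(c_2=2\).

```python
# Build the Shrikhande graph S from cyclic shifts of the four codewords and
# compute its intersection array (names and layout are ours).
from collections import deque

words = {w[k:] + w[:k] for w in ("000000", "110000", "010111", "011011")
         for k in range(6)}
V = sorted(words)
assert len(V) == 16
adj = [[j for j, v in enumerate(V)
        if sum(a != b for a, b in zip(u, v)) == 2] for u in V]   # Hamming distance 2

def distances_from(s):
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [dist[v] for v in range(len(V))]

dist = [distances_from(s) for s in range(len(V))]
D = max(map(max, dist))

def p(h, i, j):
    """Intersection number p^h_{ij}; fails if it is not well defined."""
    counts = {sum(1 for z in range(len(V)) if dist[x][z] == i and dist[y][z] == j)
              for x in range(len(V)) for y in range(len(V)) if dist[x][y] == h}
    assert len(counts) == 1
    return counts.pop()

b = [p(i, 1, i + 1) for i in range(D)]
c = [p(i, 1, i - 1) for i in range(1, D + 1)]
print("D =", D, " b =", b, " c =", c)   # should match H(2,4): D = 2, b = [6, 3], c = [1, 2]
```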

Let \(n \ge 1\) and \(m \ge 0\) denote integers. By the Doob graph D(n, m), we mean the Cartesian product of n copies of S and m copies of \(K_4\). This graph was first introduced by Doob in 1972 (see [4]). The graph D(n, m) is distance-regular and has the same intersection numbers as \(H(2n+m,4)\) (see [5]). We note that H(d, q) and D(n, m) have integral eigenvalues. In this section, objects associated with the Doob graph D(n, m) are labeled with (n, m). For example, \(A_i(n,m)\) refers to the ith distance matrix of D(n, m) for each integer \(i\ (0 \le i \le 2n+m)\). Fix an integer \(i\ (0 \le i \le 2n+m)\) and note that

$$\begin{aligned} A_i(n,m)= & {} \displaystyle \sum A_{i_1}(1,0) \otimes \cdots \otimes A_{i_n}(1,0) \otimes A_{j_1}(0,1) \otimes \cdots \otimes A_{j_m}(0,1) \end{aligned}$$

where the sum ranges over all \(i_1,i_2, \ldots , i_n \in \{0,1,2\}\) and \(j_1,j_2,\ldots ,j_m \in \{0,1\}\) such that \(i_1+\cdots +i_n+j_1+\cdots +j_m = i\). The primitive idempotent \(E_i(n,m)\) is obtained similarly. Choose the base vertex x(n, m) of D(n, m) such that

$$\begin{aligned} x(n,m)=(\underbrace{x(1,0), x(1,0), \ldots , x(1,0)}_{n\,\,{\text {copies}}}, \underbrace{x(0,1), x(0,1), \ldots , x(0,1)}_{m\,\,{\text {copies}}}). \end{aligned}$$

Then, the ith dual primitive idempotent \(E_{i}^*(n,m)\) with respect to x(n, m) is given by

$$\begin{aligned} E^*_i(n,m)= & {} \displaystyle \sum E^*_{i_1}(1,0) \otimes \cdots \otimes E^*_{i_n}(1,0) \otimes E^*_{j_1}(0,1) \otimes \cdots \otimes E^*_{j_m}(0,1) \end{aligned}$$

where the sum ranges over all \(i_1,i_2, \ldots , i_n \in \{0,1,2\}\) and \(j_1,j_2,\ldots ,j_m \in \{0,1\}\) such that \(i_1+\cdots +i_n+j_1+\cdots +j_m = i\). Let T(n, m) denote the Terwilliger algebra of D(n, m) with respect to x(n, m) and let V(n, m) denote the standard module. Since T(n, m) is semi-simple, V(n, m) decomposes as a direct sum of irreducible T(n, m)-modules. We end this section with Tanabe’s description of the irreducible T(n, m)-modules on the standard module V(n, m) (see [19]).

Proposition 2.1

[19, Proposition 3]

Let \(n, m, v, d, p, t \in \mathbb {Z}\) be such that \(n-1, m, v, d, p \ge 0\). Let \(W:=W(n, m; v, d, p, t)\) denote a T(n, m)-module on V(n, m) with endpoint v, diameter \(d+p\), dimension \((d+1)(p+1)\), and basis

$$\begin{aligned} \{ w_{ij} \in E_{v+i+j}^*(n,m)W :\ 0 \le i \le d \text { and } 0 \le j \le p \} \end{aligned}$$

satisfying

$$\begin{aligned} A_1(n, m)w_{ij}&= 3(d-i+1)w_{i-1, j}+(p-j+1)w_{i, j-1}+ (t+2(i-j))w_{ij}\nonumber \\&\quad + 3(j+1)w_{i, j+1} + (i+1)w_{i+1, j}, \end{aligned}$$
(6)

where \(w_{ij} := 0\) if \(i \notin \{0, \ldots , d \}\) or \(j \notin \{0, \ldots , p\}\). Then, each of the following holds:

  1. (i)

    W is an irreducible T(n, m)-module.

  2. (ii)
    $$\begin{aligned} \text {dim} \ E^{*}_{v+k}(n, m)W = \left\{ \begin{array}{ll} k+1 &{} \text {if } 0 \le k \le \text {min}\{d, p\},\\ \text {min}\{d, p\}+1 &{} \text {if } \text {min}\{d, p\}< k \le \text {max}\{d, p\},\\ d+p+1-k &{} \text {if } \text {max}\{d, p\} < k \le d+p. \end{array} \right. \end{aligned}$$
  3. (iii)

    W is thin if and only if \(dp=0\).

  4. (iv)

    If \(\mu \) is the dual-endpoint of W, then

    $$\begin{aligned} \mu= & {} \displaystyle \frac{3(2n+m)-t-3d-p}{4}, \\ \text {dim} \ E_{\mu +k}(n, m)W= & {} \text {dim} \ E^{*}_{v+k}(n,m)W. \end{aligned}$$

    Moreover, the diameter of W and the dual-diameter of W are equal.

  5. (v)

    W and \(W^{\prime }:=W(n, m; v^{\prime }, d^{\prime }, p^{\prime }, t^{\prime })\) are isomorphic T(n, m)-modules if and only if \((v, d, p, t)=(v^{\prime }, d^{\prime }, p^{\prime }, t^{\prime })\).

Proposition 2.2

[19, Proposition 1, Lemma 2, and Proposition 4] With reference to the above notation, we have the following:

  1. (i)

    The set \(\{U_0,U_1,U_2,U_3,U_4,U_5\}\) forms a complete set of pairwise nonisomorphic irreducible T(1, 0)-modules on the standard module V(1, 0) of D(1, 0), where

    $$\begin{aligned} \begin{array}{lll} U_0 \cong W(1,0; 0, 2, 0, 0), &{} U_1 \cong W(1,0; 1, 1, 0, -1), &{} U_2 \cong W(1,0; 1, 0, 1, 1),\\ U_3 \cong W(1,0; 1, 0, 0, -2), &{} U_4 \cong W(1,0; 2, 0, 0, 2), &{} U_5 \cong W(1,0; 2, 0, 0, -2). \end{array} \end{aligned}$$
  2. (ii)

    The set \(\{V_0,V_1\}\) forms a complete set of pairwise nonisomorphic irreducible T(0, 1)-modules on the standard module V(0, 1) of D(0, 1), where

    $$\begin{aligned} \begin{array}{ll} V_0 \cong W(0,1; 0, 1, 0, 0),&V_1 \cong W(0,1; 1, 0, 0, -1). \end{array} \end{aligned}$$
  3. (iii)

    V(n, m) is isomorphic to a direct sum of spaces of the form

    $$\begin{aligned} \big ( \displaystyle {U_{0}^{\otimes N_0}\otimes U_{1}^{\otimes N_1} \otimes U_{2}^{\otimes N_2} \otimes U_{3}^{\otimes N_3} \otimes U_{4}^{\otimes N_4}\otimes U_{5}^{\otimes N_5} } \big ) \otimes \big (\displaystyle {V_{0}^{\otimes M_0} \otimes V_{1}^{\otimes M_1}} \big ) \end{aligned}$$
    (7)

    where \(N_0, \ldots , N_5, M_0, M_1\) are nonnegative integers satisfying \(N_0 +\cdots + N_5 = n\) and \(M_0 +M_1 = m\). The space (7) decomposes into irreducible T(n, m)-modules isomorphic to \(W(n, m; v, d, p, t)\) such that

    $$\begin{aligned} v&= r+s-2N_0-N_1-N_2-N_3-M_0+2n+m, \\ d&= 2N_0+N_1+M_0-2r, \\ p&= N_2-2s,\\ t&= 2r-2s+2N_0+N_1+3N_2+4N_4+M_0-2n-m. \end{aligned}$$

    where

    $$\begin{aligned} \left\{ \begin{array}{lcl} r = 0, &{} &{} \text {if } N_0 = 1 \text { and } N_1 = M_0 = 0,\\ r = 0, 1, \ldots , N_0+\lfloor \frac{N_1+M_0}{2} \rfloor , &{} &{} \text {otherwise}, \end{array} \right. \end{aligned}$$

    and \(s = 0, 1, \ldots , \lfloor \frac{N_2}{2} \rfloor \).
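
As a worked consistency check between parts (i) and (iii) (ours, not from [19]): take \(n=1\), \(m=0\), and \(N_0=1\) with all other \(N_i\) and \(M_i\) equal to zero. Then the only admissible choice is \(r=s=0\), and the formulas above give \((v,d,p,t)=(0,2,0,0)\), so the space (7) is \(U_0 \cong W(1,0;0,2,0,0)\), in agreement with part (i). Likewise, taking \(N_3=1\) with all other parameters zero gives \((v,d,p,t)=(1,0,0,-2)\), that is, \(U_3\).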

3 Quantum adjacency algebras of Doob graphs

As mentioned in Sect. 1, the quantum adjacency algebra of the graph (with respect to a base vertex) is the algebra generated by the components of the quantum decomposition of the adjacency matrix. In the case of Doob graphs, the components of our quantum decompositions are the lowering, flat, and raising matrices. In this section, we describe the quantum adjacency algebras of Doob graphs as well as the relations of the quantum components. We shall adopt the following assumption:

Assumption 3.1

Fix the integers \(n\ge 1\) and \(m \ge 0\) and consider the Doob graph \(D=D(n,m)\). Let A be the adjacency matrix of D. Choose a base vertex x of D and let \(E_0^*,E_1^*, \ldots , E_{2n+m}^*\) denote the dual primitive idempotents with respect to x. Let T (resp. Q) denote the Terwilliger algebra (resp. quantum adjacency algebra) of D with respect to x. Let L, F, and R denote lowering, flat, and raising matrices with respect to x, respectively. Define the mapping \([\ ,\ ]: T \times T \rightarrow T\) such that \([Y,Z] = YZ-ZY\) for all \(Y,Z \in T\). Finally, let V denote the standard module for D.

Lemma 3.2

With reference to Assumption 3.1, let \(W=W \left( n,m;v,d,p,t \right) \) denote an irreducible T-module with basis \(\left\{ w_{ij}\right\} \) as described in Proposition 2.1. Then,

  (i) \(L w_{ij}=3 \left( d-i+1 \right) w_{i-1,j}+\left( p-j+1 \right) w_{i,j-1}\),

  (ii) \(F w_{ij}=\left( t+ 2 \left( i-j \right) \right) w_{ij}\),

  (iii) \(R w_{ij}=3 \left( j+1 \right) w_{i,j+1}+\left( i+1 \right) w_{i+1,j}\),

for all integers \(i\ (0 \le i \le d)\) and for all integers \(j\ (0 \le j \le p)\).

Proof

Follows from Proposition 2.1 and definitions of matrices L, F, and R. \(\square \)
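
To spell out the computation behind part (i): since \(w_{ij} \in E_{v+i+j}^*V\), the definition of L gives \(Lw_{ij}=E_{v+i+j-1}^*Aw_{ij}\), and among the terms of (6) only \(3(d-i+1)w_{i-1,j}\) and \((p-j+1)w_{i,j-1}\) lie in \(E_{v+i+j-1}^*V\); the remaining terms lie in \(E_{v+i+j}^*V\) and \(E_{v+i+j+1}^*V\) and are therefore annihilated. Parts (ii) and (iii) are obtained in the same way from the definitions of F and R.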

Lemma 3.3

With reference to Assumption 3.1, let \(W=W \left( n,m;v,d,p,t \right) \) denote an irreducible T-module with basis \(\left\{ w_{ij}\right\} \) as described in Proposition 2.1. Then,

  (i) \(\left[ L, R\right] w_{ij}=3\left( p+d-2i-2j \right) w_{ij}\),

  (ii) \(\left[ L, F\right] w_{ij}= 6 \left( d-i+1 \right) w_{i-1,j} -2 \left( p-j+1 \right) w_{i,j-1}\),

  (iii) \(\left[ R, F\right] w_{ij}= 6 \left( j+1 \right) w_{i,j+1} -2 \left( i+1 \right) w_{i+1,j}\),

  (iv) \(\left[ R, \left[ F, L \right] \right] w_{ij}= 6\left( d-p-2i+2j \right) w_{ij}\),

for all integers \(i\ (0 \le i \le d)\) and for all integers \(j\ (0 \le j \le p)\).

Proof

Follows from Lemma 3.2. \(\square \)
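
To illustrate how these follow, consider part (i). By Lemma 3.2,

$$\begin{aligned} LRw_{ij}&=3(j+1)\bigl (3(d-i+1)w_{i-1,j+1}+(p-j)w_{ij}\bigr )+(i+1)\bigl (3(d-i)w_{ij}+(p-j+1)w_{i+1,j-1}\bigr ),\\ RLw_{ij}&=3(d-i+1)\bigl (3(j+1)w_{i-1,j+1}+i\,w_{ij}\bigr )+(p-j+1)\bigl (3j\,w_{ij}+(i+1)w_{i+1,j-1}\bigr ), \end{aligned}$$

so in the difference the terms involving \(w_{i-1,j+1}\) and \(w_{i+1,j-1}\) cancel and the coefficient of \(w_{ij}\) is \(3(p+d-2i-2j)\). The remaining parts follow from similar computations.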

Lemma 3.4

With reference to Assumption 3.1, the matrices F, \([L,R]\), and \([R,[F,L]]\) mutually commute on V.

Proof

Let \(W=W \left( n,m;v,d,p,t \right) \) denote an irreducible T-module with basis \(\left\{ w_{ij}\right\} \) as described in Proposition 2.1. From Lemma 3.2(ii), Lemma 3.3(i), and Lemma 3.3(iv), these matrices mutually commute on W. Since W is arbitrary and V is a direct sum of irreducible T-modules, these matrices mutually commute on V. \(\square \)

Lemma 3.5

With reference to Assumption 3.1, we have

  (i) \(\left[ L, \left[ L,F \right] \right] =0\),

  (ii) \(\left[ R, \left[ R,F \right] \right] =0\),

  (iii) \(\left[ F, \left[ F,R \right] \right] =4R\),

  (iv) \(\left[ F, \left[ F,L \right] \right] =4L\),

  (v) \(\left[ R, \left[ R,L \right] \right] =-6R\),

  (vi) \(\left[ L, \left[ L,R \right] \right] =-6L\),

  (vii) \(\left[ L, \left[ F,R \right] \right] =\left[ R, \left[ F, L \right] \right] \).

Proof

Let \(W=W \left( n,m;v,d,p,t \right) \) denote an irreducible T-module with basis \(\left\{ w_{ij}\right\} \) as described in Proposition 2.1. By Lemmas 3.2 and 3.3, the equations are true on W. Since W is arbitrary and V is a direct sum of irreducible T-modules, these equations hold on V. \(\square \)
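
For a concrete check, the following self-contained Python sketch (ours; it assumes numpy) builds the Doob graph \(D(1,1)=S\,\square \,K_4\) via the Kronecker-sum rule \(A(\Gamma \square \Gamma ^\prime )=A_\Gamma \otimes I+I\otimes A_{\Gamma ^\prime }\), forms L, F, R with respect to a base vertex, and tests the relations above numerically. Replacing A by the adjacency matrix of another Doob graph, or changing the base vertex, should leave all assertions satisfied, in line with the lemma.

```python
# Numerical check of Lemma 3.5 on the Doob graph D(1,1) = S square K_4 (names ours).
import numpy as np
from collections import deque

def shrikhande_adjacency():
    words = sorted({w[k:] + w[:k] for w in ("000000", "110000", "010111", "011011")
                    for k in range(6)})
    return np.array([[1.0 if sum(a != b for a, b in zip(u, v)) == 2 else 0.0
                      for v in words] for u in words])

def lowering_flat_raising(A, x):
    n = A.shape[0]
    dist = np.full(n, -1)
    dist[x] = 0
    queue = deque([x])
    while queue:
        u = queue.popleft()
        for v in np.nonzero(A[u])[0]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                queue.append(v)
    E = [np.diag((dist == i).astype(float)) for i in range(dist.max() + 1)]
    L = sum(E[i - 1] @ A @ E[i] for i in range(1, len(E)))
    F = sum(E[i] @ A @ E[i] for i in range(len(E)))
    return L, F, A - L - F                      # R = A - L - F by (3)

AS = shrikhande_adjacency()
AK = np.ones((4, 4)) - np.eye(4)                # adjacency matrix of K_4
A = np.kron(AS, np.eye(4)) + np.kron(np.eye(16), AK)   # D(1,1), 64 vertices
L, F, R = lowering_flat_raising(A, 0)
br = lambda Y, Z: Y @ Z - Z @ Y

assert np.allclose(br(L, br(L, F)), 0)                   # (i)
assert np.allclose(br(R, br(R, F)), 0)                   # (ii)
assert np.allclose(br(F, br(F, R)), 4 * R)               # (iii)
assert np.allclose(br(F, br(F, L)), 4 * L)               # (iv)
assert np.allclose(br(R, br(R, L)), -6 * R)              # (v)
assert np.allclose(br(L, br(L, R)), -6 * L)              # (vi)
assert np.allclose(br(L, br(F, R)), br(R, br(F, L)))     # (vii)
```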

Lemma 3.6

With reference to Assumption 3.1, we have

  (i) \(\left[ R, \left[ R, \left[ F, L \right] \right] \right] =-\left[ \left[ R, F \right] , \left[ L, R \right] \right] = -6\left[ R, F \right] \),

  (ii) \(\left[ L, \left[ R, \left[ F, L \right] \right] \right] =\left[ \left[ L,F \right] , \left[ L, R \right] \right] =-6\left[ L, F \right] \),

  (iii) \(\left[ \left[ L,F \right] , \left[ R,F \right] \right] =-4 \left[ L, R \right] \),

  (iv) \(\left[ \left[ R, F \right] , \left[ R, \left[ F, L \right] \right] \right] =-24R\),

  (v) \(\left[ \left[ L, F \right] , \left[ R, \left[ F, L \right] \right] \right] =-24L\).

Proof

Similar to the proof of Lemma 3.5. \(\square \)

Lemma 3.7

With reference to Assumption 3.1, let \(W=W \left( n,m;v,d,p,t \right) \) denote an irreducible T-module with basis \(\left\{ w_{ij}\right\} \) as described in Proposition 2.1. Then, the matrix \(F+\frac{1}{6}\left[ R, \left[ F, L \right] \right] \) acts as the scalar \(t-p+d\) on W.

Proof

Follows immediately from Lemma 3.2(ii) and Lemma 3.3(iv). \(\square \)
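
In detail, combining the two cited results gives \(\bigl (F+\tfrac{1}{6}\left[ R,\left[ F,L\right] \right] \bigr )w_{ij}=\bigl (t+2(i-j)\bigr )w_{ij}+\bigl (d-p-2i+2j\bigr )w_{ij}=(t-p+d)\,w_{ij}\) for all admissible i and j.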

Lemma 3.8

With reference to Assumption 3.1, the set

$$\begin{aligned} \{L, R, [L,R], [L,F], [R,F], [R,[F,L]]\} \end{aligned}$$
(8)

is linearly independent in Q.

Proof

Let W denote an irreducible T-module with basis \(\{w_{ij}\}\) as in Proposition 2.1. Suppose some matrix in (8) can be expressed as a linear combination of the remaining matrices. Among the matrices in (8), only \([L,R]\) and \([R,[F,L]]\) leave \(\mathbb {C}w_{ij}\) invariant for all \(i,j \in \mathbb {Z}\). But \([L,R]\) and \([R,[F,L]]\) are not linearly dependent by Lemma 3.3. On the other hand, the matrices L and \([L,F]\) (resp. R and \([R,F]\)) map the space \(\mathbb {C}w_{ij}\) into the space \(\mathbb {C}w_{i,j-1} \oplus \mathbb {C}w_{i-1,j}\) (resp. \(\mathbb {C}w_{i,j+1} \oplus \mathbb {C}w_{i+1,j}\)) for all \(i,j \in \mathbb {Z}\), and by Lemma 3.2 and Lemma 3.3 neither pair is linearly dependent. Hence no matrix in (8) is a linear combination of the others, and the statement follows. \(\square \)

Lemma 3.9

With reference to Assumption 3.1, the subspace of Q spanned by the matrices in (8) is closed under \([\ ,\ ]\). In particular, we have the relations

$$\begin{aligned} \begin{array}{c|cccccc} [\ ,\ ] & H_1 & H_2 & X_1 & X_2 & Y_1 & Y_2\\ \hline H_1 & 0 & 0 & X_1 & X_2 & -Y_1 & -Y_2\\ H_2 & 0 & 0 & X_1 & -X_2 & -Y_1 & Y_2\\ X_1 & -X_1 & -X_1 & 0 & 0 & H_1+H_2 & 0\\ X_2 & -X_2 & X_2 & 0 & 0 & 0 & H_1-H_2\\ Y_1 & Y_1 & Y_1 & -H_1-H_2 & 0 & 0 & 0\\ Y_2 & Y_2 & -Y_2 & 0 & -H_1+H_2 & 0 & 0 \end{array} \end{aligned}$$

in which the entry in the row labeled \(Y\) and the column labeled \(Z\) is \([Y,Z]\),

where \(H_1=\frac{1}{6} \left[ L, R \right] \), \(H_2 = \frac{1}{12} \left[ R, \left[ F, L \right] \right] \), \(X_1=\frac{1}{12} \left( 2L+ \left[ L, F \right] \right) \), \(X_2=\frac{1}{4} \left( 2L- \left[ L, F \right] \right) \), \(Y_1=\frac{1}{4} \left( 2R- \left[ R, F \right] \right) \), and \(Y_2=\frac{1}{12} \left( 2R+ \left[ R, F \right] \right) \).

Proof

Immediate from Lemmas 3.4, 3.5, and 3.6. \(\square \)

4 The special orthogonal Lie algebra \(\mathfrak {so}_4\)

By a complex Lie algebra, we mean a vector space \(\mathfrak {g}\) over \(\mathbb {C}\) together with a bracket operation \([\ ,\ ]:\mathfrak {g} \times \mathfrak {g} \rightarrow \mathfrak {g}\) satisfying the following conditions:

  (i) \([\ ,\ ]\) is bilinear,

  (ii) \([x,y]=-[y,x]\) for all \(x,y \in \mathfrak {g}\), and

  (iii) \([x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0\) for all \(x,y,z \in \mathfrak {g}\).

Let \(\mathfrak {g}\) be a complex Lie algebra. A subalgebra of \(\mathfrak {g}\) is a subspace that is closed under the operation \([\ ,\ ]\). On the other hand, an ideal \(\mathfrak {i}\) of \(\mathfrak {g}\) is a subspace such that \([x,y]\in \mathfrak {i}\) for all \(x \in \mathfrak {g}\) and for all \(y \in \mathfrak {i}\). We say that \(\mathfrak {g}\) is abelian if \([x,y]=0\) for all \(x,y \in \mathfrak {g}\). We say \(\mathfrak {g}\) is simple if it is a non-abelian complex Lie algebra whose only ideals are 0 and \(\mathfrak {g}\) itself. We say that \(\mathfrak {g}\) is semi-simple if it is a direct sum of simple complex Lie algebras.

Let \(\mathfrak {g}\) and \(\mathfrak {h}\) be complex Lie algebras. A linear map \(\pi :\mathfrak {g} \rightarrow \mathfrak {h}\) is called a Lie algebra homomorphism if \(\pi ([x,y])=[\pi (x),\pi (y)]\) for all \(x,y \in \mathfrak {g}\). Let V be an n-dimensional vector space over \(\mathbb {C}\). Let \(\mathfrak {gl}(V)\) denote the complex Lie algebra of all linear transformations on V together with the bracket operation \([x,y]=x\circ y - y \circ x\) for all \(x,y \in \mathfrak {gl}(V)\) where \(\circ \) means composition. Fixing an ordered basis for V, we view \(\mathfrak {gl}(V)\) as the complex Lie algebra \(\mathfrak {gl}_n\) of square matrices of order n with the bracket operation \([x,y]=xy-yx\) for all \(x,y \in \mathfrak {gl}_n\). We say that V is a \(\mathfrak {g}\)-module if there exists a Lie algebra homomorphism \(\pi :\mathfrak {g} \rightarrow \mathfrak {gl}(V)\). In this case, the element \(x \in \mathfrak {g}\) acts on V as the image \(\pi (x)\). We say that a \(\mathfrak {g}\)-module V is irreducible if V contains no other \(\mathfrak {g}\)-modules aside from 0 and V. Two spaces V and \(V^\prime \) are isomorphic \(\mathfrak {g}\)-modules if there exists a vector space isomorphism \(\rho : V \rightarrow V^\prime \) such that x has the same action on v and \(\rho (v)\) for all \(x \in \mathfrak {g}\) and for all \(v \in V\).

Let

$$\begin{aligned} X=\begin{pmatrix} 0 &{} 0 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 1\\ 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 0 \end{pmatrix} \end{aligned}$$

and consider the subalgebra \(\mathfrak {so}_4=\left\{ P \in \mathfrak {gl}_4\ :\ PX+XP^t=0 \right\} \) of \(\mathfrak {gl}_4\). Note that \(\mathfrak {so}_4\) is a six-dimensional complex Lie algebra with a basis consisting of

$$\begin{aligned} \begin{array}{lll} H_1 =\small {\begin{pmatrix} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}}, &{} H_2 =\begin{pmatrix} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} -1 \end{pmatrix}, &{} X_1 = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 1\\ 0 &{} 0 &{} -1 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \\ X_2 =\begin{pmatrix} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} -1 &{} 0 \end{pmatrix}, &{} Y_1 = \begin{pmatrix} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0\\ 0 &{} -1 &{} 0 &{} 0\\ 1 &{} 0 &{} 0 &{} 0 \end{pmatrix}, &{} Y_2 =\begin{pmatrix} 0 &{} 0 &{} 0 &{} 0\\ 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} -1\\ 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}. \end{array} \end{aligned}$$

Observe that the Lie bracket relations of the basis matrices of \(\mathfrak {so}_4\) coincide with the Lie bracket relations in Lemma 3.9. In this section, we focus on the special orthogonal complex Lie algebra \(\mathfrak {so}_4\), which is semi-simple and is one of the classical Lie algebras. We recall the representation theory of finite-dimensional \(\mathfrak {so}_4\)-modules based on the theorem of the highest weight, which states that every irreducible \(\mathfrak {so}_4\)-module has a highest weight and that two irreducible \(\mathfrak {so}_4\)-modules with the same highest weight are isomorphic. The highest weight theory for classical Lie algebras is discussed in many reference books (e.g., [9] and [11]).
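
The coincidence with Lemma 3.9 noted above can also be checked by machine; the short sketch below (ours, assuming numpy) verifies that the six matrices lie in \(\mathfrak {so}_4\) and reproduces a sample of the bracket table.

```python
# Verify membership in so_4 and a sample of the bracket table of Lemma 3.9.
import numpy as np

X = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
H1 = np.diag([1.0, 0, -1, 0])
H2 = np.diag([0, 1.0, 0, -1])
X1 = np.zeros((4, 4)); X1[0, 3], X1[1, 2] = 1, -1
X2 = np.zeros((4, 4)); X2[0, 1], X2[3, 2] = 1, -1
Y1 = np.zeros((4, 4)); Y1[2, 1], Y1[3, 0] = -1, 1
Y2 = np.zeros((4, 4)); Y2[1, 0], Y2[2, 3] = 1, -1

br = lambda a, b: a @ b - b @ a
for P in (H1, H2, X1, X2, Y1, Y2):                 # P X + X P^t = 0
    assert np.allclose(P @ X + X @ P.T, 0)

# a few entries of the bracket table
assert np.allclose(br(H1, X1), X1) and np.allclose(br(H2, X2), -X2)
assert np.allclose(br(X1, Y1), H1 + H2) and np.allclose(br(X2, Y2), H1 - H2)
assert np.allclose(br(X1, X2), 0) and np.allclose(br(X1, Y2), 0)
assert np.allclose(br(H1, Y1), -Y1) and np.allclose(br(H2, Y2), Y2)
```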

For the rest of the section, suppose V is a finite-dimensional vector space over \(\mathbb {C}\) and \(\pi :\mathfrak {so}_4 \rightarrow \mathfrak {gl}(V)\) is a Lie algebra homomorphism (i.e., V is an \(\mathfrak {so}_4\)-module).

Definition 4.1

An ordered pair \(\lambda =(\lambda _1,\lambda _2) \in \mathbb {C}^2\) is called a weight on V if there exists a nonzero vector \(v \in V\) such that

$$\begin{aligned} \pi (H_1)v&= \lambda _1v,\\ \pi (H_2)v&= \lambda _2v. \end{aligned}$$

In this case, we call v a weight vector corresponding to the weight \(\lambda \). If \(\lambda \) is a weight, then the set of all corresponding weight vectors, together with the zero vector, forms a subspace of V called the weight space corresponding to the weight \(\lambda \). The multiplicity of the weight is the dimension of the corresponding weight space.

Proposition 4.2

If V is a nonzero \(\mathfrak {so}_4\)-module, then V has at least one weight.

Proof

Since \(\mathbb {C}\) is algebraically closed, \(\pi (H_1)\) has at least one eigenvalue \(\lambda _1 \in \mathbb {C}\). Let \(W \subseteq V\) be the eigenspace for \(\pi (H_1)\) with eigenvalue \(\lambda _1\). Since \([H_1,H_2]=0\), \(\pi (H_1)\) commutes with \(\pi (H_2)\) and so W is \(\pi (H_2)\)-invariant. The restriction of \(\pi (H_2)\) to W must have at least one eigenvector w with eigenvalue \(\lambda _2 \in \mathbb {C}\). Therefore, w is a simultaneous eigenvector for \(\pi (H_1)\) and \(\pi (H_2)\) with eigenvalues \(\lambda _1\) and \(\lambda _2\). \(\square \)

Definition 4.3

An ordered pair \(\alpha =(a_1,a_2) \in \mathbb {C}^2\) is called a root if \((a_1,a_2)\ne (0,0)\) and there exists a nonzero \(Z_\alpha \in \mathfrak {so}_4\) such that

$$\begin{aligned}{}[H_1, Z_\alpha ]&= a_1Z_\alpha ,\\ [H_2, Z_{\alpha }]&= a_2 Z_{\alpha }. \end{aligned}$$

The element \(Z_\alpha \) is called a root vector corresponding to the root \(\alpha \).

Lemma 4.4

Let \(\alpha =(a_1,a_2)\) denote a root with corresponding root vector \(Z_\alpha \). Suppose \(\lambda =(\lambda _1,\lambda _2)\) is a weight on V with corresponding weight vector v. Then, we have

$$\begin{aligned} \pi (H_1)\pi (Z_\alpha )v&= (\lambda _1+a_1)\pi (Z_\alpha )v, \end{aligned}$$
(9)
$$\begin{aligned} \pi (H_2)\pi (Z_\alpha )v&= (\lambda _2+a_2)\pi (Z_\alpha )v. \end{aligned}$$
(10)

Hence, either \(\pi (Z_\alpha )v=0\) or \(\pi (Z_\alpha )v\) is a weight vector corresponding to the weight \(\lambda +\alpha =(\lambda _1+a_1,\lambda _2+a_2)\).

Proof

Since \(\pi \) is a Lie algebra homomorphism, we have

$$\begin{aligned} \pi (H_1)\pi (Z_\alpha )v&= [\pi (H_1),\pi (Z_\alpha )]v+\pi (Z_\alpha )\pi (H_1)v\\&= \pi \left( [H_1,Z_\alpha ]\right) v+\pi (Z_\alpha )\pi (H_1)v\\&= a_1 \pi \left( Z_\alpha \right) v+\lambda _1\pi (Z_\alpha )v\\&= (\lambda _1+a_1)\pi (Z_\alpha )v. \end{aligned}$$

This proves (9). We prove (10) analogously. If \(\pi (Z_\alpha )v \ne 0\), then \(\pi (Z_\alpha )v\) is a weight vector corresponding to the weight \(\lambda +\alpha \). \(\square \)

Note that the set of all roots is \(R=\{(1,1), (1,-1), (-1,1), (-1,-1)\}\). Let E be the vector space \(\mathbb {R}^2\) with the standard inner product \(\langle \ ,\ \rangle _E\). It can be checked that (E, R) forms a root system (see [9, Definition 8.1]), which is denoted by \(A_1 \times A_1\). We shall fix the base \(\Delta =\{(1,1), (1,-1)\}\) for E. Observe that each root is expressed as a linear combination of the vectors in \(\Delta \) with either all nonnegative coefficients or all nonpositive coefficients. We say that a root is positive (resp. negative) if these coefficients are all nonnegative (resp. nonpositive). The table below summarizes the roots and corresponding root vectors.

$$\begin{aligned} \begin{array}{c|c|c} \text {Root} &{} \text {Root vector} &{} \text {Type}\\ (1,1) &{} X_1 &{} \text {Positive}\\ (1,-1) &{} X_2 &{} \text {Positive}\\ (-1,-1) &{} Y_1 &{} \text {Negative}\\ (-1,1) &{} Y_2 &{} \text {Negative} \end{array} \end{aligned}$$

Definition 4.5

Let \(\lambda \) and \(\lambda ^\prime \) denote weights. We say \(\lambda \) is higher than \(\lambda ^\prime \) (with respect to the base \(\Delta \)) and we write \(\lambda \succeq \lambda ^\prime \) if there exist nonnegative real numbers \(c_1\) and \(c_2\) such that

$$\begin{aligned} \lambda -\lambda ^\prime = c_1(1,1)+c_2(1,-1). \end{aligned}$$

A weight \(\lambda \) on V is said to be a highest weight if \(\lambda \succeq \lambda ^\prime \) for all weights \(\lambda ^\prime \) on V.

Note that the relation \(\succeq \) depends on the chosen base \(\Delta \) and it forms a partial order on the set of all weights on V.
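
For a small example, view \(\mathbb {C}^4\) as an \(\mathfrak {so}_4\)-module with \(\pi \) the identity map. The standard basis vectors \(e_1, e_2, e_3, e_4\) are weight vectors with respective weights \((1,0)\), \((0,1)\), \((-1,0)\), \((0,-1)\), each of multiplicity one. Since \((1,0)-(0,1)=(1,-1)\), \((1,0)-(0,-1)=(1,1)\), and \((1,0)-(-1,0)=(1,1)+(1,-1)\), the weight \((1,0)\) is higher than the others and hence is the highest weight; note also that \(X_1e_1=X_2e_1=0\).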

Proposition 4.6

If V is an irreducible \(\mathfrak {so}_4\)-module, then V is a direct sum of its weight spaces.

Proof

Let \(V^\prime \) denote the sum of the weight spaces of V. By Proposition 4.2, \(V^\prime \ne 0\). Since \(V^\prime \) is the sum of all weight spaces, we may view \(V^\prime \) as the span of all simultaneous eigenvectors of \(\pi (H_1)\) and \(\pi (H_2)\). It follows that \(V^\prime \) is invariant under the actions of \(H_1\) and \(H_2\). Now, take a root \(\alpha \) with corresponding root vector \(Z_\alpha \). By Lemma 4.4, we see that \(V^\prime \) is invariant under \(\pi (Z_\alpha )\). By irreducibility of V, we have \(V=V^\prime \). The sum is direct since weight vectors with different weights are linearly independent. \(\square \)

Proposition 4.7

If \(\lambda =(\lambda _1,\lambda _2)\) is a weight on V, then \(\lambda _1+\lambda _2\) and \(\lambda _1-\lambda _2\) are integers. Hence, either \(\lambda _1\) and \(\lambda _2\) are both integers or both half-integers.

Proof

Since \((\lambda _1,\lambda _2)\) is a weight on V, there exists a simultaneous eigenvector v such that \(\pi (H_1)v = \lambda _1v\) and \(\pi (H_2)v = \lambda _2v\). By applying \(\pi (X_1)\) repeatedly to v, we have

$$\begin{aligned} \pi (H_1)\pi (X_1)^kv&= (\lambda _1+k)\pi (X_1)^kv,\\ \pi (H_2)\pi (X_1)^kv&= (\lambda _2+k)\pi (X_1)^kv. \end{aligned}$$

for all integers \(k \ge 0\). Since V is finite-dimensional, there is a nonnegative integer N such that \(\pi (X_1)^Nv\ne 0\) but \(\pi (X_1)^{N+1}v=0\). By repeatedly applying \(\pi (Y_1)\) to \(\pi (X_1)^Nv\), we have

$$\begin{aligned} \pi (H_1)\pi (Y_1)^\ell \pi (X_1)^Nv&= (\lambda _1+N-\ell )\pi (Y_1)^\ell \pi (X_1)^Nv,\\ \pi (H_2)\pi (Y_1)^\ell \pi (X_1)^Nv&= (\lambda _2+N-\ell )\pi (Y_1)^\ell \pi (X_1)^Nv. \end{aligned}$$

for all integers \(\ell \ge 0\). Since V is finite-dimensional, there is a nonnegative integer M such that \(\pi (Y_1)^M\pi (X_1)^Nv \ne 0\) but \(\pi (Y_1)^{M+1}\pi (X_1)^Nv=0\). So, we have

$$\begin{aligned} 0&= \pi (X_1)\pi (Y_1)^{M+1}\pi (X_1)^Nv\\&= [\pi (X_1),\pi (Y_1)]\pi (Y_1)^M\pi (X_1)^Nv+\pi (Y_1)[\pi (X_1),\pi (Y_1)]\pi (Y_1)^{M-1}\pi (X_1)^Nv\\&\quad + \cdots + \pi (Y_1)^{M-1}[\pi (X_1),\pi (Y_1)]\pi (Y_1)\pi (X_1)^Nv+\pi (Y_1)^{M}[\pi (X_1),\pi (Y_1)]\pi (X_1)^Nv\\&= \pi \left( H_1+H_2\right) \pi (Y_1)^M\pi (X_1)^Nv+\pi (Y_1)\pi \left( H_1+H_2\right) \pi (Y_1)^{M-1}\pi (X_1)^Nv\\&\quad + \cdots + \pi (Y_1)^{M-1}\pi \left( H_1+H_2\right) \pi (Y_1)\pi (X_1)^Nv+\pi (Y_1)^{M}\pi \left( H_1+H_2\right) \pi (X_1)^Nv\\&= \left( \sum _{i=N-M}^N (\lambda _1+\lambda _2+2i)\right) \pi (Y_1)^M\pi (X_1)^Nv. \end{aligned}$$

Since \(\pi (Y_1)^M\pi (X_1)^Nv \ne 0\), \(\left( \sum _{i=N-M}^N (\lambda _1+\lambda _2+2i)\right) =0\). Thus, \(\lambda _1+\lambda _2=M-2N\). Similarly, there exist nonnegative integers P and Q such that \(\lambda _1-\lambda _2=Q-2P\). To prove this, replace \(\pi (X_1)\) by \(\pi (X_2)\) and replace \(\pi (Y_1)\) by \(\pi (Y_2)\) above. \(\square \)

Proposition 4.8

If V is an irreducible \(\mathfrak {so}_4\)-module, then V has a unique highest weight.

Proof

By Proposition 4.6, V is a direct sum of its weight spaces. Since \({\text {dim}}(V)\) is finite, there are only a finite number of weights on V. By Lemma 4.4 and since there are only finitely many weights on V, there exists a weight \(\lambda \) with weight vector v such that \(\pi (Z_\alpha )v=0\) for each root \(\alpha \in \Delta \) with corresponding root vector \(Z_\alpha \). Let \(V^\prime \) denote the smallest \(\mathfrak {so}_4\)-invariant subspace of V that contains v. Then, \(V^\prime \) is the span of vectors of the form

$$\begin{aligned} \pi (Z_{-\alpha _1})\pi (Z_{-\alpha _2})\cdots \pi (Z_{-\alpha _k})v \end{aligned}$$
(11)

where \(\alpha _1, \ldots , \alpha _k \in \Delta \). If the vector (11) is nonzero, then it is a weight vector corresponding to the weight \(\lambda -\alpha _1-\cdots -\alpha _k\) by Lemma 4.4. Observe that \(\lambda \succeq \lambda - \alpha _1 - \cdots - \alpha _k\) and so \(\lambda \) is higher than all other weights on \(V^\prime \). Thus, \(V^\prime \) is an \(\mathfrak {so}_4\)-module with a unique highest weight \(\lambda \). Observe that \(V=V^\prime \) by irreducibility of V. \(\square \)

We end the section with a highest weight theorem for irreducible \(\mathfrak {so}_4\)-modules.

Theorem 4.9

Two irreducible \(\mathfrak {so}_4\)-modules have the same highest weight if and only if they are isomorphic as \(\mathfrak {so}_4\)-modules.

5 An action of \(\mathfrak {so}_4\) on the standard module for Doob graphs

In this section, we establish a Lie algebra homomorphism \(\pi : \mathfrak {so}_4 \rightarrow Q\) and show that Q is generated by the center and \(\pi (\mathfrak {so}_4)\). In addition, we prove a necessary and sufficient condition for irreducible T-modules to be isomorphic irreducible Q-modules.

Theorem 5.1

With reference to Assumption 3.1, there exists a Lie algebra homomorphism \( \pi : \mathfrak {so}_4 \rightarrow Q\) such that

$$\begin{aligned} \pi (H_1)&=\frac{1}{6} \left[ L, R \right] ,\\ \pi (H_2)&=\frac{1}{12} \left[ R, \left[ F, L \right] \right] ,\\ \pi (X_1)&=\frac{1}{12} \left( 2L+ \left[ L, F \right] \right) ,\\ \pi (X_2)&=\frac{1}{4} \left( 2L- \left[ L, F \right] \right) ,\\ \pi (Y_1)&=\frac{1}{4} \left( 2R- \left[ R, F \right] \right) ,\\ \pi (Y_2)&=\frac{1}{12} \left( 2R+ \left[ R, F \right] \right) \end{aligned}$$
(12)

on the standard module V.

Proof

This follows from the fact that the Lie bracket relations of the basis matrices of \(\mathfrak {so}_4\) coincide with the Lie bracket relations in Lemma 3.9. \(\square \)

Corollary 5.2

With reference to Assumption 3.1, let \(\pi \) denote the Lie algebra homomorphism in Theorem 5.1. Then, Q is generated by \(\pi (\mathfrak {so}_4)\) and \(F+\frac{1}{6}[R,[F,L]]\). Consequently, Q is generated by the homomorphic image of the universal enveloping algebra \(U(\mathfrak {so}_4)\) and the center.

Proof

Let \(Q^\prime \) be the subalgebra of Q generated by \(\pi (\mathfrak {so}_4)\) and the matrix \(F+\frac{1}{6}[R,[F,L]]\). By (12), we have \(L=3\pi (X_1)+\pi (X_2)\), \(R=\pi (Y_1)+3\pi (Y_2)\), and \(F=\left( F+\frac{1}{6}[R,[F,L]]\right) -2\pi (H_2)\), so the matrices L, F, and R are in \(Q^\prime \). Since Q is generated by L, F, and R, we have \(Q^\prime =Q\). \(\square \)

Lemma 5.3

With reference to Assumption 3.1, let W denote an irreducible T-module on the standard module V. Then, W is an irreducible Q-module.

Proof

See [24, Proposition 6.3]. \(\square \)

Lemma 5.4

With reference to Assumption 3.1, let \(\pi \) denote the Lie algebra homomorphism in Theorem 5.1 and let W be an irreducible Q-module on V. If \(F+\frac{1}{6}[R,[F,L]]\) acts as a scalar on W, then W is an irreducible \(\pi (\mathfrak {so}_4)\)-module.

Proof

Let W denote an irreducible Q-module. By (12), W is a \(\pi (\mathfrak {so}_4)\)-module since W is invariant under the actions of L, F, and R. Suppose W is not an irreducible \(\pi (\mathfrak {so}_4)\)-module. Since W is nonzero, there exists a nonzero \(\pi (\mathfrak {so}_4)\)-submodule \(W^{\prime }\) that is properly contained in W. Observe that \(W^{\prime }\) is invariant under the actions of \(\pi (\mathfrak {so}_4)\) and of \(F+\frac{1}{6}[R,[F,L]]\), the latter because this matrix acts as a scalar on W. By Corollary 5.2, it follows that \(W^\prime \) is a Q-module. Since W is an irreducible Q-module and \(W^{\prime }\ne 0\), we get \(W^{\prime }=W\), contradicting the fact that \(W^{\prime }\) is properly contained in W. Hence, W is an irreducible \(\pi (\mathfrak {so}_4)\)-module. \(\square \)

Theorem 5.5

With reference to Assumption 3.1, let \(\pi \) denote the Lie algebra homomorphism in Theorem 5.1. Let W denote a subspace of V and suppose \(F+\frac{1}{6}\left[ R, \left[ F, L \right] \right] \) acts as a scalar on W. Then, the following are equivalent:

  (i) W is an irreducible T-module,

  (ii) W is an irreducible Q-module,

  (iii) W is an irreducible \(\pi (\mathfrak {so}_4)\)-module.

Proof

Assume W is an irreducible \(\pi (\mathfrak {so}_4)\)-module. We show that W is an irreducible T-module. To do this, we construct a basis \(\{w_{ij}\}\) of W and show that this basis satisfies the conditions in Proposition 2.1. Since \(W \ne 0\) and \(\sum _{j=0}^{2n+m} E^*_j=I\), we may define the integer \(v:=\text {min}\{j : E^*_j W \ne 0 \}\). By (5) and (12), \(E_v^* W=E_v^*V\cap W\) is invariant under the actions of \(\pi (H_1)\) and \(\pi (H_2)\). Since \(\mathbb {C}\) is algebraically closed, there exists a weight vector \(w \in E_v^* W\) such that

$$\begin{aligned} \pi \left( H_1 \right) w&= \lambda _{1} w, \end{aligned}$$
(13)
$$\begin{aligned} \pi \left( H_2 \right) w&= \lambda _{2} w, \end{aligned}$$
(14)

where \(\lambda _1,\lambda _2 \in \frac{1}{2}\mathbb {Z}\). By (5) and (12) and since \(E_{v-1}^*V\cap W = E_{v-1}^* W=0\), we have

$$\begin{aligned} \pi \left( X_1 \right) w&= 0, \end{aligned}$$
(15)
$$\begin{aligned} \pi \left( X_2 \right) w&= 0. \end{aligned}$$
(16)

Define the vectors

$$\begin{aligned} w_{ij}&= \displaystyle \frac{\pi \left( Y_1 \right) ^i \pi \left( Y_2 \right) ^jw}{i!\ j!} \end{aligned}$$
(17)

for all integers \(i,j \ge 0\). Observe that \(w_{ij} \in E_{v+i+j}^* W\). For convenience, define \(w_{\ell k}=0\) if \(\ell < 0\) or \(k < 0\). Since \(\left[ Y_1,Y_2\right] =0\), \(\pi \left( Y_1 \right) \) and \(\pi \left( Y_2 \right) \) commute and one easily verifies that

$$\begin{aligned} \pi \left( Y_1 \right) w_{ij}&= \left( i+1 \right) w_{i+1, j} , \end{aligned}$$
(18)
$$\begin{aligned} \pi \left( Y_2 \right) w_{ij}&= \left( j+1 \right) w_{i, j+1} , \end{aligned}$$
(19)

for all integers \(i, j \ge 0\). Now, we claim

$$\begin{aligned} \pi \left( H_1 \right) w_{ij}&= \left( \lambda _{1}-i-j \right) w_{ij}, \end{aligned}$$
(20)
$$\begin{aligned} \pi \left( H_2 \right) w_{ij}&= \left( \lambda _{2}-i+j \right) w_{ij}, \end{aligned}$$
(21)

for all integers \(i,j \ge 0\). We prove this by induction on \(i+j\). By (13), (14), and (17), the claim holds for \(i+j=0\). Assume the claim holds for \(i+j=\ell +k-1\) for some integers \(\ell ,k \ge 0\) such that \(\ell +k \ge 1\). By (18) and (19) and by induction hypothesis, we have

$$\begin{aligned} \pi \left( H_1 \right) w_{\ell k}= & {} \frac{1}{\ell }\ \pi \left( H_1 \right) \pi \left( Y_1 \right) w_{\ell -1, k} \\= & {} \frac{1}{\ell } \left[ \pi \left( H_1 \right) , \pi \left( Y_1 \right) \right] w_{\ell -1, k}+\frac{1}{\ell } \pi \left( Y_1 \right) \pi \left( H_1 \right) w_{\ell -1, k} \\= & {} -\frac{1}{\ell } \pi \left( Y_1 \right) w_{\ell -1, k}+\frac{1}{\ell } \left( \lambda _1-\ell +1-k \right) \pi \left( Y_1 \right) w_{\ell -1, k}\\= & {} \left( \lambda _1-\ell -k \right) w_{\ell k} \end{aligned}$$

and

$$\begin{aligned} \pi \left( H_2 \right) w_{\ell k}= & {} \frac{1}{k}\ \pi \left( H_2 \right) \pi \left( Y_2 \right) w_{\ell , k-1} \\= & {} \frac{1}{k} \left[ \pi \left( H_2 \right) , \pi \left( Y_2 \right) \right] w_{\ell , k-1}+\frac{1}{k} \pi \left( Y_2 \right) \pi \left( H_2 \right) w_{\ell , k-1} \\= & {} \frac{1}{k} \pi \left( Y_2 \right) w_{\ell , k-1}+\frac{1}{k} \left( \lambda _2-\ell +k-1 \right) \pi \left( Y_2 \right) w_{\ell , k-1}\\= & {} \left( \lambda _2-\ell +k \right) w_{\ell k}. \end{aligned}$$

Thus, (20) and (21) hold when \(i+j=\ell +k\). Similarly, we have

$$\begin{aligned} \pi \left( X_1 \right) w_{ij}&= \left( \lambda _{1}+\lambda _{2}-i+1 \right) w_{i-1,j}, \end{aligned}$$
(22)
$$\begin{aligned} \pi \left( X_2 \right) w_{ij}&= \left( \lambda _{1}-\lambda _{2}-j+1 \right) w_{i,j-1}, \end{aligned}$$
(23)

for all integers \(i, j \ge 0\) by induction on \(i+j\). Now, we find a basis for W. Since W is finite-dimensional, we may define the scalars

$$\begin{aligned} d= & {} \text {max}\{i: \ w_{i0} \ne 0 \}, \\ p= & {} \text {max}\{j: \ w_{0j} \ne 0 \}. \end{aligned}$$

We claim that \(w_{ij} \ne 0\) for all integers \(i,j\ (0\le i \le d,\ 0 \le j \le p)\). Since \(w_{d+1,0}=0\) and \(w_{0,p+1}=0\), equations (22) and (23) give

$$\begin{aligned} 0 = \pi \left( X_1 \right) w_{d+1,0} = \left( \lambda _1+\lambda _2-d \right) w_{d,0},\\ 0 = \pi \left( X_2 \right) w_{0,p+1} = \left( \lambda _1-\lambda _2-p \right) w_{0,p}. \end{aligned}$$

Since \(w_{d,0} \ne 0\) and \(w_{0,p} \ne 0\), it follows that

$$\begin{aligned} d = \lambda _1+\lambda _2 \text { and }p=\lambda _1 - \lambda _2. \end{aligned}$$
(24)

Consider integers \(\ell , k \ge 0\) such that \(w_{\ell ,k} \ne 0\) but \(w_{\ell +1,k} = 0\) and \(w_{\ell ,k+1}=0\). By (22) and (23),

$$\begin{aligned} 0 = \pi \left( X_1 \right) w_{\ell +1,k} = \left( \lambda _1+\lambda _2-\ell \right) w_{\ell ,k},\\ 0 = \pi \left( X_2\right) w_{\ell ,k+1} = (\lambda _1-\lambda _2-k)w_{\ell ,k}. \end{aligned}$$

Since \(w_{\ell ,k} \ne 0\), we have \(\ell =d\) and \(k = p\). This proves the claim. Now, let

$$\begin{aligned} W^{\prime }={\text {span}}\{w_{ij}: \ 0 \le i \le d, 0 \le j \le p \}. \end{aligned}$$

Note that \(W^{\prime }\) is \(\pi (\mathfrak {so}_4)\)-invariant by (18)–(23). Since W is an irreducible \(\pi (\mathfrak {so}_4)\)-module, \(W=W^{\prime }\). By (20) and (21) and since \(w_{ij}\ne 0\), \(w_{ij}\) is a weight vector corresponding to the weight \((\lambda _1-i-j,\lambda _2-i+j)\). Hence the vectors \(w_{ij}\) belong to distinct weight spaces and are linearly independent. This proves that \(\{w_{ij}\}\) is a basis for W. Finally, we determine the action of A on the \(w_{ij}\). Since \(F+\frac{1}{6}\left[ R, \left[ F, L \right] \right] \) acts as a scalar on W, we may define

$$\begin{aligned} \left( F+\frac{1}{6}\left[ R, \left[ F, L \right] \right] \right) w_{ij}= & {} \left( t+d-p \right) w_{ij} \end{aligned}$$
(25)

for all integers \(i,j\ (0 \le i \le d, 0 \le j \le p)\). By (12), (21), and (24), we have

$$\begin{aligned} \left[ R, \left[ F, L \right] \right] w_{ij}&= 6(d-p-2i+2j)w_{ij} \end{aligned}$$
(26)

for all integers \(i,j\ (0 \le i \le d, 0 \le j \le p)\). By (25)–(26), we have

$$\begin{aligned} Fw_{ij}= \left( t+2 \left( i-j \right) \right) w_{ij}. \end{aligned}$$
(27)

By (12), (22)–(23), and (18)–(19), we have

$$\begin{aligned} Lw_{ij}= & {} 3 \left( d-i+1 \right) w_{i-1,j}+ \left( p-j+1 \right) w_{i,j-1}, \end{aligned}$$
(28)
$$\begin{aligned} Rw_{ij}= & {} 3 \left( j+1 \right) w_{i, j+1}+\left( i+1 \right) w_{i+1, j} . \end{aligned}$$
(29)

for all integers \(i,j\ (0 \le i \le d, 0 \le j \le p)\). We obtain

$$\begin{aligned} Aw_{ij}= & {} 3 \left( d-i+1 \right) w_{i-1, j}+\left( p-j+1 \right) w_{i, j-1}+\left( t+2 \left( i-j \right) \right) w_{ij}\\&+3 \left( j+1 \right) w_{i, j+1}+\left( i+1 \right) w_{i+1, j} \end{aligned}$$

by (3) and (27)–(29). Note that \(\sum _{i,j} w_{ij}\) is an eigenvector for A with eigenvalue \(3d+p+t\), since the coefficient of each \(w_{k\ell }\) in \(A\sum _{i,j} w_{ij}\) equals \(3(d-k)+(p-\ell )+\left( t+2(k-\ell )\right) +3\ell +k=3d+p+t\). Since A has integral eigenvalues, t is an integer. Thus, W is an irreducible T-module by Proposition 2.1. The remaining assertions of the theorem follow from Lemma 5.3 and Lemma 5.4. \(\square \)

Corollary 5.6

With reference to Assumption 3.1, let \(\pi \) denote the Lie algebra homomorphism in Theorem 5.1. If \(W=W(n,m;v,d,p,t)\) is an irreducible T-module, then W is an irreducible \(\pi (\mathfrak {so}_4)\)-module with highest weight \(\left( \frac{1}{2} (d+p), \frac{1}{2} (d-p)\right) \).

Proof

Let \(\{w_{ij}\}\) denote the basis of W described in Proposition 2.1. By (12) and Lemma 3.3, we see that

$$\left\{ \left( \frac{d+p}{2}-i-j,\frac{d-p}{2}-i+j \right) \ :\ 0 \le i \le d,\ 0 \le j \le p \right\} $$

is the set of all weights on W. Observe that \(\left( \frac{1}{2}(d+p), \frac{1}{2}(d-p) \right) \) is the highest weight in the set. \(\square \)

Corollary 5.7

With reference to Assumption 3.1, let \(\pi \) denote the Lie algebra homomorphism in Theorem 5.1. Let \(W=W(n,m;v,d,p,t)\) and \(W^\prime =W(n,m;v^\prime ,d^\prime ,p^\prime ,t^\prime )\) denote irreducible T-modules. Then, W and \(W^\prime \) are isomorphic Q-modules if and only if \((d,p,t)=(d^\prime ,p^\prime ,t^\prime )\).

Proof

Let W and \(W^\prime \) denote irreducible T-modules. By Lemma 3.7, \(F+\frac{1}{6}[R,[F,L]]\) acts on W (resp. \(W^\prime \)) as the scalar \(t-p+d\) (resp. \(t^\prime -p^\prime +d^\prime \)). Moreover, W and \(W^\prime \) are irreducible Q-modules and irreducible \(\pi (\mathfrak {so}_4)\)-modules by Theorem 5.5. By Corollary 5.6, the highest weight of W (resp. \(W^\prime \)) is \(\left( \frac{1}{2}(d+p),\frac{1}{2}(d-p)\right) \) (resp. \(\left( \frac{1}{2}(d^\prime +p^\prime ), \frac{1}{2}(d^\prime -p^\prime )\right) \)).

By Theorem 4.9, it follows that \((d,p,t)=(d^\prime ,p^\prime ,t^\prime )\) if and only if W and \(W^\prime \) are isomorphic \(\pi (\mathfrak {so}_4)\)-modules and \(F+\frac{1}{6}[R,[F,L]]\) has the same action on both. Since Q is generated by \(\pi (\mathfrak {so}_4)\) and \(F+\frac{1}{6}[R,[F,L]]\) by Corollary 5.2, the statement follows. \(\square \)

Remark 5.8

In [24, Theorem 9.1], Terwilliger and Žitnik gave equivalent conditions for \(T \ne Q\) that work for general distance-regular graphs. Among these conditions is the existence of a pair of quasi-isomorphic irreducible T-modules with unequal endpoints. In the Doob graph D(n, m), one can prove that \(W(n,m; v, d, p, t)\) and \(W(n,m; v^\prime , d^\prime , p^\prime , t^\prime )\) form a pair of quasi-isomorphic irreducible T-modules with unequal endpoints if and only if \(v \ne v^\prime \) and \((d,p,t)=(d^\prime , p^\prime , t^\prime )\). This recovers Corollary 5.7 in the context of quasi-isomorphism.

Remark 5.9

The complex Lie algebra \(\mathfrak {sl}_2\) is a simple Lie algebra with basis \(\{e,f,h\}\) satisfying the relations

$$\begin{aligned}{}[h,e]=2e, \quad [h,f]=-2f, \text { and } [e,f]=h. \end{aligned}$$

Let \(\mathfrak {h}_1\) (resp. \(\mathfrak {h}_2\)) denote the subalgebra of \(\mathfrak {so}_4\) spanned by \(X_1\), \(Y_1\), and \(H_1+H_2\) (resp. \(X_2\), \(Y_2\), and \(H_1-H_2\)). One checks that \(\mathfrak {sl}_2\), \(\mathfrak {h}_1\), and \(\mathfrak {h}_2\) are pairwise isomorphic and in particular, \(\mathfrak {so}_4\) is a direct sum of \(\mathfrak {h}_1\) and \(\mathfrak {h}_2\). With reference to Assumption 3.1, let \(\pi \) denote the homomorphism in Theorem 5.1. Let \(W=W(n,m; v,d,p,t)\) denote an irreducible T-module with basis \(\{w_{ij}\}\) as described in Proposition 2.1. Now, consider the spaces

$$\begin{aligned} {\text {span}}\left\{ w_{ij}\ |\ 0 \le i \le d \right\} \text { for a fixed }j\ (0 \le j \le p), \end{aligned}$$
(30)
$$\begin{aligned} {\text {span}}\left\{ w_{ij}\ |\ 0 \le j \le p \right\} \text { for a fixed }i\ (0 \le i \le d). \end{aligned}$$
(31)

By (18), (20)–(21), and (22), the spaces (30) are irreducible \(\pi (\mathfrak {h}_1)\)-modules. On the other hand, the spaces (31) are irreducible \(\pi (\mathfrak {h}_2)\)-modules by (19), (20)–(21), and (23). Therefore, the spaces above are irreducible \(\mathfrak {sl}_2\)-modules.