1 Introduction

The link of the Toda lattice to three-term recurrence relations via the Lax pair after the Flaschka coordinate transform is well understood, see e.g. [2, 27]. We consider a Lax pair in a specific Lie algebra, such that in irreducible \(*\)-representations the Lax operator is a Jacobi operator. A Lax pair is a pair of time-dependent matrices or operators L(t) and M(t) satisfying the Lax equation

$$\begin{aligned} \dot{L}(t) = [M(t),L(t)], \end{aligned}$$

where \([\, , \, ]\) is the commutator and the dot represents differentiation with respect to time. The Lax operator L is isospectral, i.e. the spectrum of L is independent of time. A famous example is the Lax pair for the Toda chain in which L is a self-adjoint Jacobi operator,

$$\begin{aligned} L(t) e_n = a_n(t) e_{n+1} + b_n(t) e_{n} + a_{n-1}(t) e_{n-1}, \end{aligned}$$

where \(\{e_n\}\) is an orthonormal basis for the Hilbert space, and M is the skew-adjoint operator given by

$$\begin{aligned} M(t) e_n = a_n(t) e_{n+1} - a_{n-1}(t) e_{n-1}. \end{aligned}$$

In this case the Lax equation describes the equations of motion (after a change of variables) of a chain of interacting particles with nearest neighbour interactions. Since L is isospectral, its eigenvalues constitute integrals of motion.
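As a quick numerical illustration (not part of the original argument), the following sketch integrates the Lax equation for a finite Jacobi matrix of this form and checks that its eigenvalues stay fixed; the chain size, initial data and tolerances are arbitrary choices of ours.

```python
# Numerical sketch (ours): integrate dL/dt = [M, L] for a finite Jacobi matrix
# and check that the spectrum of L(t) does not move.
import numpy as np
from scipy.integrate import solve_ivp

def toda_rhs(t, y, n):
    L = y.reshape(n, n)
    a = np.diag(L, 1)                      # off-diagonal entries a_n(t)
    M = np.diag(a, 1) - np.diag(a, -1)     # the skew-adjoint operator M(t)
    return (M @ L - L @ M).ravel()         # [M, L]

n = 6
rng = np.random.default_rng(0)
a0 = rng.uniform(0.5, 1.5, n - 1)
b0 = rng.uniform(-1.0, 1.0, n)
L0 = np.diag(b0) + np.diag(a0, 1) + np.diag(a0, -1)

sol = solve_ivp(toda_rhs, (0.0, 5.0), L0.ravel(), args=(n,),
                rtol=1e-10, atol=1e-12)
LT = sol.y[:, -1].reshape(n, n)
print(np.allclose(np.sort(np.linalg.eigvalsh(LT)),
                  np.sort(np.linalg.eigvalsh(L0))))   # True: L(t) is isospectral
```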

In this paper we define a Lax pair in a 2-parameter Lie algebra. In the special case of \(\mathfrak {sl}(2,\mathbb {C})\) we recover the Lax pair for the \(\mathfrak {sl}(2,\mathbb {C})\) Kostant Toda lattice, see [2, Sect. 4.6] and references given there. We give a slight generalization by allowing for a more general M(t). We discuss the solutions of the corresponding differential equations in various representations of the Lie algebra. In particular, one obtains the classical relation to the Hermite, Krawtchouk, Charlier, Meixner, Laguerre and Meixner–Pollaczek polynomials from the Askey scheme of hypergeometric orthogonal polynomials [16], for which the Toda modification, see [13, Sect. 2.8], remains in the same class of orthogonal polynomials. This corresponds to the results established by Zhedanov [29], who investigated the situation where L, M and \(\dot{L}\) act as three-term recurrence operators and close up to a Lie algebra of dimension 3 or 4. In the current paper Zhedanov’s result is explained, starting from the other end. In Zhedanov’s approach the condition of forming a low-dimensional Lie algebra forces a factorization of the functions as a function of time t and place n, which is immediate when the Lax pair is realized as the image of a Lie algebra element in a representation. The solutions of the Toda lattice arising in this way, i.e. those which factorize as functions of n and t, have also been obtained by Kametaka [15], stressing the hypergeometric nature of the solutions. The link to Lie algebras and Lie groups in Kametaka [15] is implicit, see especially [15, Part I]. The results and methods of the short paper by Kametaka [15] have been explained and extended later by Okamoto [23]. In particular, Okamoto [23] gives the relation to the \(\tau \)-function formulation and the Bäcklund transformations.

Moreover, we extend to non-polynomial settings by considering representations of the corresponding Lie algebras in \(\ell ^2(\mathbb {Z})\) corresponding to the principal unitary series of \(\mathfrak {su}(1,1)\) and the representations of \(\mathfrak {e}(2)\), the Lie algebra of the group of motions of the plane. In this way we find solutions to the Toda lattice equations labelled by \(\mathbb {Z}\). There is a (non-canonical) way to associate to recurrences on \(\ell ^2(\mathbb {Z})\) three-term recurrences for \(2\times 2\)-matrix valued polynomials, see e.g. [3, 18]. However, this does not lead to explicit \(2\times 2\)-matrix valued solutions of the non-abelian Toda lattice as introduced and studied in [4, 7] in relation to matrix valued orthogonal polynomials, see also [14] for an explicit example and the relation to the modification of the matrix weight. The general Lax pair for the Toda lattice in finite dimensions, as studied by Moser [22], can also be considered and slightly extended in the same way as an element of the Lie algebra \(\mathfrak {sl}(d+1,\mathbb {C})\). This involves t-dependent finite discrete orthogonal polynomials, and these polynomials occur in describing the action of L(t) in highest weight representations. We restrict to the symmetric powers of the fundamental representation; the eigenfunctions can then be described in terms of multivariable Krawtchouk polynomials following Iliev [12], who establishes them as overlap coefficients between natural bases for two different Cartan subalgebras. Similar group theoretic interpretations of these multivariable Krawtchouk polynomials have been established by Crampé et al. [5] and Genest et al. [8]. We discuss briefly the t-dependence of the corresponding eigenvectors of L(t).

In brief, in Sect. 2 we recall the 2-parameter Lie algebra as in [20] and the Lax pair. In Sect. 3 we discuss \(\mathfrak {su}(2)\) and its finite-dimensional representations, and in Sect. 4 we discuss the case of \(\mathfrak {su}(1,1)\), where we discuss both discrete series representations and principal unitary series representations. The latter leads to new solutions of the Toda equations and the generalization in terms of orthogonal functions. The corresponding orthogonal functions are the overlap coefficients between the standard basis in the representations and the t-dependent eigenfunctions of the operator L. In Sect. 5 we look at the oscillator algebra as a specialization, and in Sect. 6 we consider the Lie algebra for the group of plane motions, leading to a solution in connection with Bessel functions. In Sect. 7 we indicate how the measures for the orthogonal functions involved have to be modified in order to give solutions of the coupled differential equations. For the Toda case related to orthogonal polynomials, this coincides with the Toda modification [13, Sect. 2.8]. Finally, in Sect. 8 we consider such a Lax pair for a higher rank Lie algebra in specific finite-dimensional representations for which all weight spaces are 1-dimensional.

A question following up on Sect. 7 is whether the modification of the weight is of general interest, cf. [13, Sect. 2.8]. A natural question following up on Sect. 8 is what happens in other finite-dimensional representations, and what happens in infinite-dimensional representations corresponding to non-compact real forms of \(\mathfrak {sl}(d+1,\mathbb {C})\), as is done in Sect. 4 for the case \(d=1\). We could also ask if it is possible to associate Racah polynomials, as the most general finite discrete orthogonal polynomials in the Askey scheme, to the construction of Sect. 8. Moreover, the relation to the interpretation as in [19] suggests that it might be possible to extend to the quantum algebra setting, but this is quite open.

This paper is dedicated to Richard A. Askey (1933–2019) who has done an incredible amount of fascinating work in the area of special functions, and who always had an open mind, in particular concerning relations with other areas. We hope this spirit is reflected in this paper. Moreover, through his efforts for mathematics education, Askey’s legacy will be long-lived.

2 The Lie algebra \(\varvec{\mathfrak g}(a,b)\)

Let \(a,b \in \mathbb {C}\). The Lie algebra \(\mathfrak g(a,b)\) is the 4-dimensional complex Lie algebra with basis H, E, F, N satisfying

$$\begin{aligned} \begin{aligned}&[E,F]=aH+bN, \quad [H,E]=2E, \quad [H,F]=-2F, \\&[H,N]=[E,N]=[F,N]=0. \end{aligned} \end{aligned}$$
(2.1)

For \(a,b \in \mathbb {R}\) there are two inequivalent \(*\)-structures on \(\mathfrak g(a,b)\) defined by

$$\begin{aligned} E^*=\epsilon F, \quad H^*=H, \quad N^*=N, \end{aligned}$$

where \(\epsilon \in \{+,-\}\).

We define the following Lax pair in \(\mathfrak g(a,b)\).

Definition 2.1

Let \(r,s \in C^1 [0,\infty )\) and \(u \in C[0,\infty )\) be real-valued functions and let \(c \in \mathbb {R}\). The Lax pair \(L,M \in \mathfrak g(a,b)\) is given by

$$\begin{aligned} \begin{aligned} L(t)&= cH+ s(t)(aH+bN)+r(t) \big (E+E^*\big ), \\ M(t)&= u(t)\big (E-E^*\big ). \end{aligned} \end{aligned}$$
(2.2)

Note that \(L^*=L\) and \(M^*=-M\). Being a Lax pair means that \(\dot{L} = [M,L]\), which leads to the following differential equations.

Proposition 2.2

The functions r, s and u satisfy

$$\begin{aligned} \begin{aligned} \dot{s}(t) = 2\epsilon r(t)u(t), \quad \dot{r}(t) = -2 (as(t)+c)u(t). \end{aligned} \end{aligned}$$

Proof

From the commutation relations (2.1) it follows that

$$\begin{aligned}{}[M,L]= 2\epsilon r(t)u(t)(aH+bN) - 2 (as(t)+c)u(t)(E+E^*). \end{aligned}$$

Since \([M,L]=\dot{L} = \dot{s}(t)(aH+bN) + \dot{r}(t) (E+E^*)\), the result follows. \(\square \)

Corollary 2.3

The function \(I(r,s)=\epsilon r^2+ (as+2c)s\) is an invariant.

Proof

Differentiating gives

$$\begin{aligned} \begin{aligned} \frac{d}{dt} (\epsilon r(t)^2+as(t)^2+2cs(t))&= 2\epsilon r(t) \dot{r}(t)+ 2 (as(t)+c) \dot{s}(t), \end{aligned} \end{aligned}$$

which equals zero by Proposition 2.2. \(\square \)
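As a small numerical aside (ours, not from the original text), one can integrate the equations of Proposition 2.2 for an arbitrary choice of u and check the conservation of the invariant of Corollary 2.3; the parameter values below are illustrative.

```python
# Numerical sketch (ours) for Proposition 2.2 and Corollary 2.3.
import numpy as np
from scipy.integrate import solve_ivp

a, c, eps = 1.0, 0.3, 1.0                    # illustrative parameters
u = lambda t: 0.5 + 0.2 * np.sin(t)          # any continuous u(t)

def rhs(t, y):
    s, r = y
    return [2.0 * eps * r * u(t), -2.0 * (a * s + c) * u(t)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.7], dense_output=True, rtol=1e-10)
s, r = sol.sol(np.linspace(0.0, 10.0, 200))
I = eps * r**2 + a * s**2 + 2.0 * c * s      # the invariant of Corollary 2.3
print(np.ptp(I))                             # essentially zero along the flow
```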

In the following sections we consider the Lax operator L in an irreducible \(*\)-representation of \(\mathfrak g(a,b)\), and we determine its explicit eigenfunctions and its spectrum. We restrict to the following special cases of the Lie algebra \(\mathfrak g(a,b)\):

  • \(\mathfrak g(1,0) \cong \mathfrak {sl}(2,\mathbb {C})\oplus \mathbb {C},\)

  • \(\mathfrak g(0,1) \cong \mathfrak b(1)\) is the generalized oscillator algebra,

  • \(\mathfrak g(0,0) \cong \mathfrak e(2) \oplus \mathbb {C}\), with \(\mathfrak e(2)\) the Lie algebra of the group of plane motions.

These are the only essential cases as \(\mathfrak g(a,b)\) is isomorphic as a Lie algebra to one of these cases, see [20, Sect. 2-5].

3 The Lie algebra \(\varvec{\mathfrak {su}}(2)\)

In this section we consider the Lie algebra \(\mathfrak g(a,b)\) from Sect. 2 with \((a,b)=(1,0)\) and \(\epsilon = +\), i.e. the Lie algebra \(\mathfrak {su}(2) \oplus \mathbb {C}\). The basis element N plays no role in this case, therefore we omit it. So we consider the Lie algebra with basis H, E, F satisfying commutation relations

$$\begin{aligned}{}[H,E]=2E, \qquad [H,F]=-2F, \qquad [E,F]=H, \end{aligned}$$

and the \(*\)-structure is defined by \(H^*=H, E^*=F\).

The Lax pair (2.2) is given by

$$\begin{aligned} L(t) = s(t)H + r(t)(E+F), \qquad M(t)=u(t)(E-F), \end{aligned}$$

where (without loss of generality) we set \(c=0\). The differential equations for r and s from Proposition 2.2 read in this case

$$\begin{aligned} \begin{aligned} \dot{s}(t) = 2u(t)r(t), \quad \dot{r}(t) = -2u(t)s(t) \end{aligned} \end{aligned}$$
(3.1)

and the invariant in Corollary 2.3 is given by \(I(r,s)=r^2+s^2\).

Lemma 3.1

Assume \({{\,\mathrm{sgn}\,}}(u(t))={{\,\mathrm{sgn}\,}}(r(t))\) for all \(t>0\), \(s(0)>0\) and \(r(0)>0\). Then \({{\,\mathrm{sgn}\,}}(s(t))>0\) and \({{\,\mathrm{sgn}\,}}(r(t))>0\) for all \(t>0\).

Proof

From \(\dot{s} = 2 ur\) it follows that s is increasing. Since (r(t), s(t)) in phase space is a point on the invariant \(I(r,s)=I(r(0),s(0))\), which describes a circle around the origin, it follows that r(t) and s(t) remain positive. \(\square \)

Throughout this section we assume that the conditions of Lemma 3.1 are satisfied, so that r(t) and s(t) are positive. Note that if we change the condition on r(0) to \(r(0)<0\), then \(r(t)<0\) for all \(t>0\).

For \(j \in \frac{1}{2}\mathbb {N}\) let \(\ell ^2_j\) be the \(2j+1\) dimensional complex Hilbert space with standard orthonormal basis \(\{e_n \mid n=0,\ldots ,2j\}\). An irreducible \(*\)-representation \(\pi _j\) of \(\mathfrak {su}(2)\) on \(\ell ^2_j\) is given by

$$\begin{aligned} \begin{aligned} \pi _j(H)e_n&= 2(n-j)\, e_n, \\ \pi _j(E) e_n&= \sqrt{(n+1)(2j-n)}\, e_{n+1}, \\ \pi _j(F) e_n&= \sqrt{n(2j-n+1)}\, e_{n-1}, \end{aligned} \end{aligned}$$

where we use the notation \(e_{-1}=e_{2j+1}=0\). In this representation the Lax operator \(\pi _j(L)\) is the Jacobi operator

$$\begin{aligned} \pi _j(L(t)) e_n&= r(t) \sqrt{(n+1)(2j-n)} \, e_{n+1}\nonumber \\&\quad + 2s(t)(n-j) \, e_n + r(t) \sqrt{n(2j-n+1)}\, e_{n-1}. \end{aligned}$$
(3.2)

We can diagonalize the Lax operator \(\pi _j(L)\) using orthonormal Krawtchouk polynomials [16, Sect. 9.11], which are defined by

$$\begin{aligned} K_n(x) = K_n(x;p,N) = \left( \frac{p}{1-p}\right) ^\frac{n}{2} \sqrt{\left( {\begin{array}{c}N\\ n\end{array}}\right) } \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n,-x}{-N} \ ;\frac{1}{p} \right) , \end{aligned}$$

where \(N\in \mathbb {N}\), \(0<p<1\) and \(n,x \in \{0,1,\ldots ,N\}\). The three-term recurrence relation is

$$\begin{aligned} \begin{aligned} \frac{\frac{1}{2} N- x}{\sqrt{p(1-p)}}K_n(x)&=\sqrt{(n+1)(N-n)}\, K_{n+1}(x) \\&\quad + \frac{p-\frac{1}{2}}{\sqrt{p(1-p)}}(2n-N)K_n(x) + \sqrt{n(N-n+1)}\, K_{n-1}(x), \end{aligned} \end{aligned}$$

with the convention \(K_{-1}(x) = K_{N+1}(x)=0\). The orthogonality relations read

$$\begin{aligned} \sum _{x=0}^N \left( {\begin{array}{c}N\\ x\end{array}}\right) p^x (1-p)^{N-x} K_{n}(x) K_{n'}(x) = \delta _{n,n'}. \end{aligned}$$

Theorem 3.2

Define for \(x \in \{0,\ldots ,2j\}\)

$$\begin{aligned} W_t(x) = \left( {\begin{array}{c}2j\\ x\end{array}}\right) p(t)^x(1-p(t))^{2j-x}, \end{aligned}$$

where \(p(t) = \frac{1}{2} + \frac{s(t)}{2C}\) and \(C = \sqrt{s^2 + r^2}\). For \(t>0\) let \(U_t: \ell ^2_j \rightarrow \ell ^2(\{0,\ldots ,2j\}, W_t)\) be defined by

$$\begin{aligned}{}[U_te_n](x) = K_n(x;p(t),2j), \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _j(L(t)) \circ U_t^* = M(2C(j-x))\).

Here M denotes the multiplication operator given by \([M(f)g](x) = f(x)g(x)\).

Proof

From (3.2) and the recurrence relation of the Krawtchouk polynomials we obtain

$$\begin{aligned}{}[U_t\, r^{-1}\pi _j(L)\, U_t^*\, K_n](x) = \frac{j-x}{\sqrt{p(1-p)}} K_n(x), \end{aligned}$$

where

$$\begin{aligned} \frac{s}{r} = \frac{p-\frac{1}{2}}{\sqrt{p(1-p)}}. \end{aligned}$$

The last identity implies

$$\begin{aligned} p= \frac{1}{2}+ \frac{1}{2}\sqrt{\frac{s^2}{s^2+r^2}} \end{aligned}$$

so that

$$\begin{aligned} p(1-p)= \frac{r^2}{4(s^2+r^2)}. \end{aligned}$$

Then we find that the eigenvalue is

$$\begin{aligned} \frac{j-x}{\sqrt{p(1-p)}}= \frac{\sqrt{s^2+r^2}}{r} 2(j-x). \end{aligned}$$

Since \(s^2+r^2\) is constant, the result follows. \(\square \)
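Theorem 3.2 is a statement about finite matrices and can be checked directly; the following sketch (the values of j, s and r are our illustrative choices) builds the Jacobi matrix (3.2) and compares its spectrum with \(\{2C(j-x)\}\).

```python
# Numerical sketch (ours) of Theorem 3.2.
import numpy as np

j, s, r = 7 / 2, 0.6, 1.1                    # illustrative values, s, r > 0
dim = int(2 * j) + 1
n = np.arange(dim)
off = r * np.sqrt((n[:-1] + 1) * (2 * j - n[:-1]))
L = np.diag(2 * s * (n - j)) + np.diag(off, 1) + np.diag(off, -1)

C = np.hypot(s, r)                           # C = sqrt(s^2 + r^2)
print(np.allclose(np.sort(np.linalg.eigvalsh(L)),
                  np.sort(2 * C * (j - n))))          # True
```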

4 The Lie algebra \(\varvec{\mathfrak {su}}(1,1)\)

In this section we consider representations of \(\mathfrak g(a,b)\) with \((a,b)=(1,0)\) and \(\epsilon =-\), i.e. the Lie algebra \(\mathfrak {su}(1,1) \oplus \mathbb {C}\). We omit the basis element N again. The commutation relations are the same as in the previous section. The \(*\)-structure in this case is defined by \(H^*=H\) and \(E^*=-F\).

The Lax pair (2.2) is given by

$$\begin{aligned} L(t) = s(t)H + r(t)(E-F), \qquad M(t)=u(t)(E+F), \end{aligned}$$

where we set \(c=0\) again. The functions r and s satisfy

$$\begin{aligned} \begin{aligned} \dot{s}(t) = -2u(t)r(t), \quad \dot{r}(t) = -2u(t)s(t) \end{aligned} \end{aligned}$$

and the invariant is given by \(I(r,s)=s^2-r^2\).

Lemma 4.1

Assume \({{\,\mathrm{sgn}\,}}(u(t))=-{{\,\mathrm{sgn}\,}}(r(t))\) for all \(t>0\), \(s(0)>0\) and \(r(0)>0\). Then \({{\,\mathrm{sgn}\,}}(s(t))>0\) and \({{\,\mathrm{sgn}\,}}(r(t))>0\) for all \(t>0\).

Proof

The proof is similar to the proof of Lemma 3.1, where in this case \(I(r,s)=I(r(0),s(0))\) describes a hyperbola or a straight line. \(\square \)

Throughout this section we assume that the assumptions of Lemma 4.1 are satisfied.

We consider two families of irreducible \(*\)-representations of \(\mathfrak {su}(1,1)\). The first family is the positive discrete series representations \(\pi _k\), \(k>0\), on \(\ell ^2(\mathbb {N})\). The actions of the basis elements on the standard orthonormal basis \(\{e_n \mid n \in \mathbb {N}\}\) are given by

$$\begin{aligned} \begin{aligned} \pi _k(H)e_n&= 2(k+n)\, e_n, \\ \pi _k(E)e_n&= \sqrt{(n+1)(2k+n)}\, e_{n+1}, \\ \pi _k(F)e_n&= -\sqrt{n(2k+n-1)}\, e_{n-1}. \end{aligned} \end{aligned}$$

We use the convention \(e_{-1}=0\).

The second family of representations we consider is the principal unitary series representation \(\pi _{\lambda ,\varepsilon }\), \(\lambda \in -\frac{1}{2}+i\mathbb {R}_+\), \(\varepsilon \in [0,1)\) with \((\lambda ,\varepsilon ) \ne (-\frac{1}{2},\frac{1}{2})\), on \(\ell ^2(\mathbb {Z})\). The actions of the basis elements on the standard orthonormal basis \(\{ e_n \mid n \in \mathbb {Z}\}\) are given by

$$\begin{aligned} \begin{aligned} \pi _{\lambda ,\varepsilon }(H)e_n&= 2(\varepsilon +n)\, e_n, \\ \pi _{\lambda ,\varepsilon }(E)e_n&= \sqrt{(n+\varepsilon -\lambda )(n+\varepsilon +\lambda +1)}\, e_{n+1}, \\ \pi _{\lambda ,\varepsilon }(F)e_n&= -\sqrt{(n+\varepsilon -\lambda -1)(n+\varepsilon +\lambda )}\, e_{n-1}. \end{aligned} \end{aligned}$$

Note that both representations \(\pi _k\) and \(\pi _{\lambda ,\varepsilon }\) as given above define unbounded representations. The operators \(\pi (X)\), \(X \in \mathfrak {su}(1,1)\), are densely defined operators on their representation space, where as a dense domain we take the set of finite linear combinations of the standard orthonormal basis \(\{e_n\}\).

Remark 4.2

The Lie algebra \(\mathfrak {su}(1,1)\) has two more families of irreducible \(*\)-representations: the negative discrete series and the complementary series. The negative discrete series representation \(\pi _k^-\), \(k>0\), can be obtained from the positive discrete series representation \(\pi _k\) by setting

$$\begin{aligned} \pi _k^-(X) = \pi _k(\vartheta (X)), \qquad X \in \mathfrak {su}(1,1), \end{aligned}$$

where \(\vartheta \) is the Lie algebra isomorphism defined by \(\vartheta (H)=-H\), \(\vartheta (E)=F\), \(\vartheta (F)=E\).

The complementary series are defined in the same way as the principal unitary series, but the labels \(\lambda ,\varepsilon \) satisfy \(\varepsilon \in [0,\frac{1}{2})\), \(\lambda \in (-\frac{1}{2},-\varepsilon )\) or \(\varepsilon \in (\frac{1}{2},1)\), \(\lambda \in (-\frac{1}{2}, \varepsilon -1)\).

The results obtained in this section about the Lax operator in the positive discrete series and principal unitary series representations can easily be extended to these two families of representations.

4.1 The Lax operator in the positive discrete series

The Lax operator L acts in the positive discrete series representation as a Jacobi operator on \(\ell ^2(\mathbb {N})\) by

$$\begin{aligned} \pi _k(L(t)) e_n= & {} r(t)\sqrt{(n+1)(n+2k)}\, e_{n+1} + s(t)(2k+2n) e_n\\&+ r(t)\sqrt{n(n+2k-1)}\, e_{n-1}. \end{aligned}$$

\(\pi _k(L)\) can be diagonalized using explicit families of orthogonal polynomials. We need to distinguish between three cases corresponding to the invariant \(s^2-r^2\) being positive, zero or negative. This corresponds to hyperbolic, parabolic and elliptic elements, and the eigenvalues and eigenfunctions have different behaviour per class, cf. [19].

4.1.1 Case 1: \(s^2-r^2>0\)

In this case eigenfunctions of \(\pi _k(L)\) can be given in terms of Meixner polynomials. The orthonormal Meixner polynomials [16, Sect. 9.10] are defined by

$$\begin{aligned} M_n(x)= M_n(x;\beta ,c) = (-1)^n \sqrt{ \frac{(\beta )_n }{n!}c^n} \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n,-x}{\beta } \ ;1-\frac{1}{c} \right) , \end{aligned}$$

where \(\beta >0\) and \(0<c<1\). They satisfy the three-term recurrence relation

$$\begin{aligned} \begin{aligned} \frac{(1-c)(x+\frac{1}{2}\beta )}{\sqrt{c}} M_n(x)&= \sqrt{(n+1)(n+\beta )} M_{n+1}(x)\\&\quad + \frac{ (c+1)(n+\frac{1}{2}\beta ) }{\sqrt{c}} M_n(x) + \sqrt{n(n-1+\beta )} M_{n-1}(x). \end{aligned} \end{aligned}$$

Their orthogonality relations are given by

$$\begin{aligned} \sum _{x\in \mathbb {N}} \frac{ (\beta )_x }{x!}c^x (1-c)^{\beta } M_n(x) M_{n'}(x) = \delta _{n,n'}. \end{aligned}$$

Theorem 4.3

Let

$$\begin{aligned} W_t(x) = \frac{ (2k)_x }{x!}c(t)^x (1-c(t))^{2k}, \qquad x \in \mathbb {N},\, t>0, \end{aligned}$$

where \(c(t) \in (0,1)\) is determined by \(\frac{s}{r}=\frac{1+c}{2\sqrt{c}}\), or equivalently \(c(t) = e^{-2 {{\,\mathrm{arccosh}\,}}(\frac{s(t)}{r(t)})}\). Define for \(t>0\) the operator \(U_t:\ell ^2(\mathbb {N}) \rightarrow \ell ^2(\mathbb {N},W_t)\) by

$$\begin{aligned}{}[U_te_n](x) = M_n(x;2k,c(t)), \end{aligned}$$

then \(U_t\) is unitary and \(U_t\circ \pi _k(L(t)) \circ U_t^* = M(2C(x+k))\) where \(C = \sqrt{s^2-r^2}\).

Proof

The proof runs along the same lines as the proof of Theorem 3.2. The condition \(s^2-r^2>0\) implies \(s/r>1\), so there exists a \(c = c(t) \in (0,1)\) such that

$$\begin{aligned} \frac{s}{r} = \frac{ 1+c }{2\sqrt{c}}. \end{aligned}$$

It follows from the three-term recurrence relation for Meixner polynomials that \(r^{-1}L\) has eigenvalues \(\frac{(1-c)(x+k)}{\sqrt{c}}\), \(x \in \mathbb {N}\). Write \(c=e^{-2a}\) with \(a>0\), then \(\frac{ 1+c }{2\sqrt{c}}= \cosh (a)\), so that

$$\begin{aligned} \frac{1-c}{2\sqrt{c}} = \sinh (a)= \sqrt{\cosh ^2(a) -1 } = \sqrt{\frac{s^2}{r^2}-1} = \frac{C}{r}, \end{aligned}$$

where \(C = \sqrt{s^2 -r^2}\). \(\square \)
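Although \(\pi _k(L)\) is an unbounded operator, the eigenvalues \(2C(x+k)\) can be illustrated numerically from a large finite truncation; in the sketch below the truncation size and the values of k, s, r are our choices.

```python
# Numerical sketch (ours) of Theorem 4.3 via a truncated Jacobi matrix.
import numpy as np

k, s, r = 0.7, 2.0, 1.0                      # s > r > 0: the Meixner case
size = 200                                   # truncation; eigenvectors decay fast
n = np.arange(size)
off = r * np.sqrt((n[:-1] + 1) * (n[:-1] + 2 * k))
L = np.diag(2 * s * (n + k)) + np.diag(off, 1) + np.diag(off, -1)

C = np.sqrt(s**2 - r**2)
print(np.allclose(np.sort(np.linalg.eigvalsh(L))[:5],
                  2 * C * (np.arange(5) + k)))        # True
```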

4.1.2 Case 2: \(s^2-r^2=0\)

In this case we need the orthonormal Laguerre polynomials [16, Sect. 9.12], which are defined by

$$\begin{aligned} L_n(x)= L_n(x;\alpha ) = (-1)^n \sqrt{ \frac{(\alpha +1)_n}{n!} } \,_{1}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n}{\alpha +1} \ ;x \right) . \end{aligned}$$

They satisfy the three-term recurrence relation

$$\begin{aligned} x L_n(x)= & {} \sqrt{(n+\alpha +1)(n+1)}\, L_{n+1}(x) + (2n+\alpha +1)L_n(x)\\&+\sqrt{n(n+\alpha )}\, L_{n-1}(x), \end{aligned}$$

and the orthogonality relations are

$$\begin{aligned} \int _0^\infty L_n(x) L_{n'}(x) \, \frac{x^\alpha e^{-x}}{\Gamma (\alpha +1)}\, dx = \delta _{n,n'}. \end{aligned}$$

The set \(\{L_n \mid n \in \mathbb {N}\}\) is an orthonormal basis for the corresponding weighted \(L^2\)-space.

Using the three-term recurrence relation for the Laguerre polynomials we obtain the following result.

Theorem 4.4

Let

$$\begin{aligned} W_t(x) = \frac{x^{2k-1} r(t)^{-2k} e^{-\frac{x}{r(t)}} }{\Gamma (2k)},\qquad x \in [0,\infty ), \end{aligned}$$

and let \(U_t:\ell ^2(\mathbb {N}) \rightarrow L^2([0,\infty ),W_t(x)dx)\) be defined by

$$\begin{aligned}{}[U_te_n](x) = L_n\left( \frac{x}{r(t)};2k-1\right) , \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _k(L(t)) \circ U_t^{*} = M(x)\).

4.1.3 Case 3: \(s^2-r^2<0\)

In this case we need the orthonormal Meixner–Pollaczek polynomials [16, Sect. 9.7] given by

$$\begin{aligned} P_n(x) = P_n(x;\lambda ,\phi ) = e^{in\phi } \sqrt{ \frac{(2\lambda )_n}{n!}} \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n, \lambda +ix}{2\lambda } \ ;1-e^{-2i\phi } \right) , \end{aligned}$$

where \(\lambda >0\) and \(0<\phi <\pi \). The three-term recurrence relation for these polynomials is

$$\begin{aligned} 2x\sin \phi \, P_n(x)= & {} \sqrt{(n+1)(n+2\lambda )}\,P_{n+1}(x) - 2(n+\lambda )\cos \phi \, P_n(x) \\&+ \sqrt{n(n+2\lambda -1)}\, P_{n-1}(x), \end{aligned}$$

and the orthogonality relations read

$$\begin{aligned} \begin{aligned}&\int _{-\infty }^\infty P_n(x) P_{n'}(x)\, w(x;\lambda ,\phi )\, dx = \delta _{n,n'},\\&\quad w(x;\lambda ,\phi ) = \frac{(2\sin \phi )^{2\lambda }}{2\pi \,\Gamma (2\lambda )} e^{(2\phi -\pi )x} |\Gamma (\lambda +ix)|^2. \end{aligned} \end{aligned}$$

The set \(\{P_n \mid n \in \mathbb {N}\}\) is an orthonormal basis for the weighted \(L^2\)-space.

Theorem 4.5

For \(\phi (t) = \arccos (-\frac{s(t)}{r(t)})\) let

$$\begin{aligned} W_t(x) = w(x;k,\phi (t)), \qquad x \in \mathbb {R}, \end{aligned}$$

and let \(U_t : \ell ^2(\mathbb {N}) \rightarrow L^2(\mathbb {R},W_t(x)dx)\) be defined by

$$\begin{aligned}{}[U_t e_n](x) = P_n(x;k,\phi (t)), \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _k(L(t)) \circ U_t^{*} = M(2Cx)\), where \(C = \sqrt{r^2-s^2}\).

Proof

The proof is similar to the previous ones. Using the three-term recurrence relation for the Meixner–Pollaczek polynomials it follows that the generalized eigenvalue of \(r^{-1}\pi _k(L)\) is \(2x \sin (\phi )\), where \(\phi \in (0,\pi )\) is determined by \(-\frac{s}{r} = \cos \phi \). Then

$$\begin{aligned} \sin \phi = \sqrt{1-\frac{s^2}{r^2}} = \frac{C}{r}, \end{aligned}$$

from which the result follows. \(\square \)

4.4 The Lax operator in the principal unitary series

The action of the Lax operator L in the principal unitary series as a Jacobi operator on \(\ell ^2(\mathbb {Z})\) is given by

$$\begin{aligned} \begin{aligned} \pi _{\lambda ,\varepsilon } (L(t)) e_n&= r(t)\sqrt{(n+\varepsilon +\lambda +1)(n+\varepsilon -\lambda )}\, e_{n+1} + s(t)(2\varepsilon +2n) e_n \\&\quad + r(t)\sqrt{ (n+\varepsilon +\lambda )(n+\varepsilon -\lambda -1)}\, e_{n-1}. \end{aligned} \end{aligned}$$

Again we distinguish between the cases where the invariant \(s^2-r^2\) is either positive, negative or zero.

4.4.1 Case 1: \(s^2-r^2>0\)

The Meixner functions [11] are defined by

$$\begin{aligned} \begin{aligned} m_n(x) = m_n(x;\lambda ,\varepsilon ,c)&= \left( \frac{\sqrt{c}}{c-1}\right) ^n \frac{ \sqrt{ \Gamma (n+\varepsilon +\lambda +1) \Gamma (n+\varepsilon -\lambda ) } }{(1-c)^\varepsilon \Gamma (n+1-x)} \\&\quad \times \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{n+\varepsilon +\lambda +1,n+\varepsilon -\lambda }{n+1-x} \ ;\frac{c}{c-1} \right) , \end{aligned} \end{aligned}$$

for \(x,n \in \mathbb {Z}\) and \(c \in (0,1)\). The parameters \(\lambda \) and \(\varepsilon \) are the labels from the principal unitary series. The Meixner functions satisfy the three-term recurrence relation

$$\begin{aligned} \begin{aligned} \frac{(1-c)(x+\varepsilon )}{\sqrt{c}}m_n(x)&= \sqrt{(n+\varepsilon +\lambda +1)(n+\varepsilon -\lambda )}\, m_{n+1}(x) \\&\quad + \frac{(c+1)(n+\varepsilon )}{\sqrt{c}} m_n(x) \\&\quad +\sqrt{(n+\varepsilon +\lambda )(n+\varepsilon -\lambda -1)} \, m_{n-1}(x), \end{aligned} \end{aligned}$$

and the orthogonality relations read

$$\begin{aligned} \sum _{x \in \mathbb {Z}} \frac{ c^{-x} }{ \Gamma (x+ \varepsilon + \lambda +1) \Gamma (x+\varepsilon -\lambda ) } m_n(x) m_{n'}(x) = \delta _{n,n'}. \end{aligned}$$

The set \(\{ m_n \mid n \in \mathbb {Z}\}\) is an orthonormal basis for the weighted \(L^2\)-space.

Theorem 4.6

For \(t>0\) let

$$\begin{aligned} W_t(x) = \frac{ c(t)^{-x} }{ \Gamma (x+ \varepsilon + \lambda +1) \Gamma (x+\varepsilon -\lambda )}, \end{aligned}$$

where \(c(t) \in (0,1)\) is determined by \(\frac{s(t)}{r(t)}=\frac{1+c(t)}{2\sqrt{c(t)}}\), or equivalently \(c(t) = e^{-2 {{\,\mathrm{arccosh}\,}}(\frac{s(t)}{r(t)})}\). Define \(U_t:\ell ^2(\mathbb {Z}) \rightarrow \ell ^2(\mathbb {Z},W_t)\) by

$$\begin{aligned}{}[U_t e_n](x) = m_n(x;\lambda ,\varepsilon ,c(t)), \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _{\lambda ,\varepsilon }(L(t)) \circ U_t^* = M(2C(x+\varepsilon ))\), where \(C = \sqrt{s^2 - r^2}\).

4.4.2 Case 2: \(s^2-r^2=0\)

In this case we need Laguerre functions [10] defined by

$$\begin{aligned} \psi _n(x) = \psi _n(x;\lambda ,\varepsilon ) = {\left\{ \begin{array}{ll} \begin{aligned} (-1)^n &{} \sqrt{\Gamma (n+\varepsilon +\lambda +1) \Gamma (n+\varepsilon -\lambda )} \\ &{}\quad \times U(n+\varepsilon +\lambda +1;2\lambda +2;x) \end{aligned} &{} x>0,\\ \\ \begin{aligned} &{}\sqrt{ \Gamma (-n-\varepsilon -\lambda ) \Gamma (1-n-\varepsilon +\lambda )} \\ &{} \quad \times U(-n-\varepsilon -\lambda ;-2\lambda ;-x) \end{aligned} &{} x<0, \end{array}\right. } \end{aligned}$$

where \(x\in \mathbb {R}\), \(n \in \mathbb {Z}\), and U(abz) is Tricomi’s confluent hypergeometric function, see e.g.  [25, (1.3.1)], for which we use its principal branch with branch cut along the negative real axis. The Laguerre functions \(\{\psi _n \mid n \in \mathbb {Z}\}\) form an orthonormal basis for \(L^2(\mathbb {R},w(x)dx)\) where

$$\begin{aligned} w(x)=w(x;\lambda ,\varepsilon ) = \frac{1}{\pi ^2} \sin \left( \pi (\varepsilon +\lambda +1) \right) \sin \left( \pi ( \varepsilon -\lambda )\right) e^{-|x|}. \end{aligned}$$

The three-term recurrence relation reads

$$\begin{aligned} \begin{aligned} -x \psi _n(x)&= \sqrt{(n+\varepsilon +\lambda +1)(n+\varepsilon -\lambda )} \,\psi _{n+1}(x) \\&\quad + 2(n+\varepsilon )\, \psi _n(x) + \sqrt{(n+\varepsilon +\lambda )(n+\varepsilon -\lambda -1)}\, \psi _{n-1}(x). \end{aligned} \end{aligned}$$

Theorem 4.7

Let

$$\begin{aligned} W_t(x) = \frac{1}{r(t)} w\left( \frac{x}{r(t)};\lambda ,\varepsilon \right) , \qquad x \in \mathbb {R}, \end{aligned}$$

and let \(U_t : \ell ^2(\mathbb {Z}) \rightarrow L^2(\mathbb {R},W_t(x)dx)\) be defined by

$$\begin{aligned}{}[U_t e_n](x) = \psi _n\left( \frac{x}{r(t)};\lambda ,\varepsilon \right) , \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _{\lambda ,\varepsilon }(L(t)) \circ U_t^{*} = M(-x)\).

4.4.3 Case 3: \(s^2-r^2<0\)

The Meixner–Pollaczek functions [17, Sect. 4.4] are defined by

$$\begin{aligned} \begin{aligned} u_n(x) = u_n(x;\lambda ,\varepsilon ,\phi )&= (2i\sin \phi )^{-n} \frac{ \sqrt{\Gamma (n+1+\varepsilon +\lambda )\Gamma (n+\varepsilon -\lambda ) }}{\Gamma (n+1+\varepsilon -ix)} \\&\quad \times \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{n+1+\varepsilon +\lambda ,n+\varepsilon -\lambda }{n+1+\varepsilon -ix} \ ;\frac{1}{1-e^{-2i\phi }} \right) . \end{aligned} \end{aligned}$$

Define

$$\begin{aligned} W(x;\lambda ,\varepsilon ,\phi )= w_0(x)\begin{pmatrix} 1 &{} - w_1(x) \\ -\overline{w_1}(x) &{} 1 \end{pmatrix}, \qquad x \in \mathbb {R}, \end{aligned}$$

where \(\overline{f}(x) = \overline{f(x)}\) and

$$\begin{aligned} \begin{aligned} w_1(x;\lambda ,\varepsilon )&= \frac{ \Gamma (\lambda +1+ix) \Gamma (-\lambda +ix) }{\Gamma (ix-\varepsilon )\Gamma (1+\varepsilon -ix)},\\ w_0(x;\varepsilon ,\phi )&= (2\sin \phi )^{-2\varepsilon } e^{(2\phi -\pi )x}. \end{aligned} \end{aligned}$$

Let \(L^2(\mathbb {R},W(x)dx)\) be the Hilbert space consisting of functions \(\mathbb {R}\rightarrow \mathbb {C}^2\) with inner product

$$\begin{aligned} \langle f,g \rangle = \int _{-\infty }^\infty g^t(x) W(x) f(x)\, dx, \end{aligned}$$

where \(f^t(x)\) denotes the conjugate transpose of \(f(x) \in \mathbb {C}^2\). The set \(\{({\begin{matrix}u_n \\ \overline{u_n} \end{matrix}}) \mid n \in \mathbb {Z}\}\) is an orthonormal basis for \(L^2(\mathbb {R},W(x)dx)\). The three-term recurrence relation for the Meixner–Pollaczek functions is

$$\begin{aligned} \begin{aligned} 2x \sin \phi \, u_n(x)&= \sqrt{(n+\varepsilon +\lambda +1)(n+\varepsilon -\lambda )}\, u_{n+1}(x) \\&\quad + 2(n+\varepsilon )\cos \phi \, u_n(x) + \sqrt{(n+\varepsilon +\lambda )(n+\varepsilon -\lambda -1)}\, u_{n-1}(x). \end{aligned} \end{aligned}$$

The function \(\overline{u_n}\) satisfies the same recurrence relation.

Theorem 4.8

For \(\phi (t) = \arccos (\frac{s(t)}{r(t)})\) let

$$\begin{aligned} W_t(x) = W(x;\lambda ,\varepsilon ,\phi (t)), \end{aligned}$$

and let \(U_t : \ell ^2(\mathbb {Z}) \rightarrow L^2(\mathbb {R},W_t(x;\lambda ,\varepsilon )dx)\) be defined by

$$\begin{aligned}{}[U_t e_n](x) = \begin{pmatrix} u_n(x;\lambda ,\varepsilon ,\phi (t)) \\ \overline{u_n}(x;\lambda ,\varepsilon ,\phi (t)) \end{pmatrix}, \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _{\lambda ,\varepsilon }(L(t)) \circ U_t^{*} = M(2Cx)\), where \(C = \sqrt{r^2-s^2}\).

Note that the spectrum of \(\pi _{\lambda ,\varepsilon }(L(t))\) has multiplicity 2.

Remark 4.9

Transferring a three-term recurrence on \(\mathbb {Z}\) to a three-term recurrence for \(2\times 2\) matrix orthogonal polynomials, see [3, Sect. VII.3] and [18, Sect. 3.2], does not lead to an example of the non-abelian Toda lattice [4, 7, 14].

5 The oscillator algebra \(\varvec{\mathfrak b}(1)\)

\(\mathfrak b(1)\) is the Lie \(*\)-algebra \(\mathfrak g(a,b)\) with \((a,b)=(0,1)\) and \(\epsilon =+\). Then \(\mathfrak b(1)\) has a basis E, F, H, N satisfying

$$\begin{aligned}{}[E,F]=N, \quad [H,E]=2E, \quad [H,F]=-2F, \quad [N,E]=[N,F]=[N,H]=0. \end{aligned}$$

The \(*\)-structure is defined by \(H^*=H\), \(N^*=N\), \(E^*=F\). The Lax pair LM is given by

$$\begin{aligned} L(t) = cH + r(t)(E+F) + s(t) N, \qquad M(t) = u(t) (E- F). \end{aligned}$$

The differential equations for s and r are in this case given by

$$\begin{aligned} \begin{aligned} \dot{s} = 2ru, \quad \dot{r} = -2cu \end{aligned} \end{aligned}$$

and the invariant is \(r^2+2cs\).

Lemma 5.1

Assume \({{\,\mathrm{sgn}\,}}(u(t))={{\,\mathrm{sgn}\,}}(r(t))\) for all \(t>0\), \(s(0)>0\) and \(r(0)>0\). Then \({{\,\mathrm{sgn}\,}}(s(t))>0\) and \({{\,\mathrm{sgn}\,}}(r(t))>0\) for all \(t>0\).

Proof

The proof is similar to the proof of Lemma 4.1, where in this case \(I(r,s)=I(r(0),s(0))\) describes a parabola (\(c \ne 0\)) or a straight line (\(c=0\)). \(\square \)

Throughout this section we assume the conditions of Lemma 5.1 are satisfied.

There is a family of irreducible \(*\)-representations \(\pi _{k,h}\), \(h>0\), \(k\ge 0\), on \(\ell ^2(\mathbb {N})\) defined by

$$\begin{aligned} \begin{aligned} \pi _{k,h}(N) e_n&= -h\, e_n,\\ \pi _{k,h}(H) e_n&= 2(k+n)\, e_n, \\ \pi _{k,h}(E) e_n&= \sqrt{h (n+1)} \, e_{n+1},\\ \pi _{k,h}(F) e_n&= \sqrt{ hn}\, e_{n-1}. \end{aligned} \end{aligned}$$

The action of the Lax operator on the basis of \(\ell ^2(\mathbb {N})\) is given by

$$\begin{aligned} \pi _{k,h}(L(t)) e_n = r(t)\sqrt{h(n+1)}\, e_{n+1}+\left[ 2c(n+k) - h s(t) \right] e_n + r(t)\sqrt{hn}\, e_{n-1}. \end{aligned}$$

For the diagonalization of \(\pi _{k,h}(L)\) we distinguish between the cases \(c \ne 0\) and \(c = 0\).

5.1 Case 1: \(c \ne 0\)

In this case we need the orthonormal Charlier polynomials [16, Sect. 9.14], which are defined by

$$\begin{aligned} C_n(x) = C_n(x;a) = \sqrt{ \frac{ a^n }{n!} } \,_{2}F_{0} \left( \genfrac{}{}{0.0pt}{}{-n,-x}{\text {--}} \ ;-\frac{1}{a} \right) , \end{aligned}$$

where \(a>0\) and \(n,x \in \mathbb {N}\). The orthogonality relations are

$$\begin{aligned} \sum _{x=0}^\infty \frac{a^x e^{-a}}{x!} C_n(x) C_{n'}(x) = \delta _{n,n'}, \end{aligned}$$

and \(\{C_n \mid n \in \mathbb {N}\}\) is an orthonormal basis for the corresponding \(L^2\)-space. The three-term recurrence relation reads

$$\begin{aligned} -xC_n(x) = \sqrt{a(n+1)} \, C_{n+1}(x) - (n+a) C_n(x) + \sqrt{an} \, C_{n-1}(x). \end{aligned}$$

Theorem 5.2

For \(t>0\) define

$$\begin{aligned} W_t(x) = \frac{1}{x!}\left( \frac{h r^2(t) }{4c^2} \right) ^x e^{-\frac{hr^2(t)}{4c^2}} \end{aligned}$$

and let \(U_t:\ell ^2(\mathbb {N}) \rightarrow \ell ^2(\mathbb {N}, W_t)\) be defined by

$$\begin{aligned} U_t e_n (x) = \left( -{{\,\mathrm{sgn}\,}}(r/c) \right) ^n C_n\left( x ;\frac{hr^2(t)}{4c^2}\right) , \qquad x \in \mathbb {N}. \end{aligned}$$

Then \(U_t\) is unitary and \(U_t \circ \pi _{k,h}(L(t)) \circ U_t^{*} = M(2c(x+k) - Ch)\), where \(C=\frac{1}{2c}r^2+s\).

Proof

The action of L can be written in the following form:

$$\begin{aligned} \begin{aligned}&\pi _{k,h}\Big (\frac{1}{2c}L + \frac{hr^2}{4c^2} + \frac{hs}{2c}-k \Big ) e_n \\&\quad = {{\,\mathrm{sgn}\,}}(r/c) \sqrt{\frac{hr^2(n+1)}{4c^2}}\, e_{n+1} +\left( n + \frac{hr^2}{4c^2}\right) e_n + {{\,\mathrm{sgn}\,}}(r/c) \sqrt{\frac{hr^2 n}{4c^2} } e_{n-1} \end{aligned} \end{aligned}$$

and recall that \(\frac{1}{2c}r^2+ s\) is constant. The result then follows from comparing with the three-term recurrence relation for the Charlier polynomials. \(\square \)
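The same kind of truncation check as in the Meixner case illustrates Theorem 5.2 numerically; the parameter values in the sketch below are our illustrative choices.

```python
# Numerical sketch (ours) of Theorem 5.2 via a truncated Jacobi matrix.
import numpy as np

k, h, c, r, s = 0.5, 1.0, 1.0, 0.8, 0.3      # illustrative values
size = 80
n = np.arange(size)
off = r * np.sqrt(h * (n[:-1] + 1))
L = np.diag(2 * c * (n + k) - h * s) + np.diag(off, 1) + np.diag(off, -1)

C = r**2 / (2 * c) + s                        # the invariant r^2/(2c) + s
print(np.allclose(np.sort(np.linalg.eigvalsh(L))[:5],
                  2 * c * (np.arange(5) + k) - C * h))    # True
```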

5.2 Case 2: \(c=0\)

In this case \(\dot{r}=0\), so r is a constant function. We use the orthonormal Hermite polynomials [16, Sect. 9.15], which are given by

$$\begin{aligned} H_n(x) = \frac{(\sqrt{2} \, x)^n }{\sqrt{n!}} \,_{2}F_{0} \left( \genfrac{}{}{0.0pt}{}{-\frac{n}{2}, - \frac{n-1}{2} }{\text {--}} \ ;-\frac{1}{x^2} \right) . \end{aligned}$$

They satisfy the orthogonality relations

$$\begin{aligned} \frac{1}{\sqrt{\pi }} \int _\mathbb {R}H_n(x) H_{n'}(x) e^{-x^2}\, dx = \delta _{n,n'}, \end{aligned}$$

and \(\{H_n \mid n \in \mathbb {N}\}\) is an orthonormal basis for \(L^2(\mathbb {R}, e^{-x^2}dx/\sqrt{\pi })\). The three-term recurrence relation is given by

$$\begin{aligned} \sqrt{2}\,x H_n(x) = \sqrt{n+1}\, H_{n+1}(x) + \sqrt{n} H_{n-1}(x). \end{aligned}$$

Theorem 5.3

For \(t>0\) define

$$\begin{aligned} W_t(x) = \frac{1}{r\sqrt{2h\pi }} e^{-\frac{(x+h s(t))^2}{2hr^2}}, \end{aligned}$$

and let \(U_t:\ell ^2(\mathbb {N}) \rightarrow L^2(\mathbb {R},W_t(x)\,dx)\) be defined by

$$\begin{aligned} U_t e_n(x) = H_n\left( \frac{x+h s(t)}{r\sqrt{2h}}\right) , \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _{k,h}(L(t)) \circ U_t^{*} = M(x)\).

Proof

We have

$$\begin{aligned} \pi _{k,h}\left( \frac{1}{r \sqrt{h}}(L+sh)\right) e_n = \sqrt{n+1}\, e_{n+1} + \sqrt{n}\, e_{n-1}, \end{aligned}$$

which corresponds to the three-term recurrence relation for the Hermite polynomials. \(\square \)

6 The Lie algebra \(\varvec{\mathfrak {e}}(2)\)

We consider the Lie algebra \(\mathfrak g(a,b)\) with \(a=b=0\) and \(\epsilon =+\). As in the case of \(\mathfrak {sl}(2,\mathbb {C})\), we omit the basis element N again. The remaining Lie algebra is \(\mathfrak e(2)\) with basis H, E, F satisfying

$$\begin{aligned}{}[E,F]=0, \quad [H,E]=2E, \quad [H,F]=-2F, \end{aligned}$$

and the \(*\)-structure is determined by \(E^*=F, \quad H^*=H\).

The Lax pair is given by

$$\begin{aligned} L(t) = cH + r(t)(E+F), \qquad M(t) = u(t) (E-F), \end{aligned}$$

with \(\dot{r} = -2cu\).

\(\mathfrak e(2)\) has a family of irreducible \(*\)-representations \(\pi _k\), \(k>0\), on \(\ell ^2(\mathbb {Z})\) given by

$$\begin{aligned} \begin{aligned} \pi _k(H) e_n&= 2n\, e_n, \\ \pi _k(E) e_n&= k e_{n+1}, \\ \pi _k(F)e_n&= k e_{n-1}. \end{aligned} \end{aligned}$$

This defines an unbounded representation. As a dense domain we use the set of finite linear combinations of the basis elements.

Assume \(c \ne 0\). The Lax operator \(\pi _k(L(t))\) is a Jacobi operator on \(\ell ^2(\mathbb {Z})\) given by

$$\begin{aligned} \pi _k(L(t))e_n = kr(t) e_{n+1} + 2cn e_n + kr(t) e_{n-1}. \end{aligned}$$

For the diagonalization of \(\pi _k(L)\) we use the Bessel functions \(J_n\) [1, 28] given by

$$\begin{aligned} J_n(z) = \frac{ z^n }{2^n \Gamma (n+1) } \,_{0}F_{1} \left( \genfrac{}{}{0.0pt}{}{\text {--}}{n+1} \ ;-\frac{z^2}{4} \right) , \end{aligned}$$

with \(z \in \mathbb {R}\) and \(n \in \mathbb {Z}\). They satisfy the Hansen–Lommel type orthogonality relations, which follow from [1, (4.9.15), (4.9.16)],

$$\begin{aligned} \sum _{m \in \mathbb {Z}} J_{m-n}(z) J_{m-n'}(z) = \delta _{n,n'}, \end{aligned}$$

and the set \(\{J_{\cdot -n}(z) \mid n \in \mathbb {Z}\}\) is an orthonormal basis for \(\ell ^2(\mathbb {Z})\). A well-known recurrence relation for \(J_n\) is

$$\begin{aligned} J_{n-1}(z) + J_{n+1}(z) = \frac{2n}{z}J_n(z), \end{aligned}$$

which is equivalent to

$$\begin{aligned} zJ_{m-n-1}(z)+2nJ_{m-n}(z) + z J_{m-n+1}(z) = 2m J_{m-n}(z). \end{aligned}$$

Theorem 6.1

For \(t>0\) define \(U_t: \ell ^2(\mathbb {Z}) \rightarrow \ell ^2(\mathbb {Z})\) by

$$\begin{aligned} U_t e_n(m) = J_{m-n}\left( \frac{kr(t)}{c}\right) , \end{aligned}$$

then \(U_t\) is unitary and \(U_t \circ \pi _k(L(t)) \circ U_t^{*} = M(2cm)\).
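Both the Hansen–Lommel relation and the recurrence underlying Theorem 6.1 are easy to test numerically with SciPy's Bessel functions; the values of z, m and n below are arbitrary choices of ours.

```python
# Numerical sketch (ours) for Section 6: Bessel recurrence and orthogonality.
import numpy as np
from scipy.special import jv

z, m, n = 2.3, 5, -2                          # z plays the role of k r(t)/c
lhs = z * jv(m - n - 1, z) + 2 * n * jv(m - n, z) + z * jv(m - n + 1, z)
print(np.isclose(lhs, 2 * m * jv(m - n, z)))  # True: the relation behind Thm 6.1

ms = np.arange(-80, 81)                       # J_k(z) decays rapidly in |k|
vals = [np.sum(jv(ms - n1, z) * jv(ms - n2, z))
        for n1, n2 in [(0, 0), (2, 2), (0, 3)]]
print(np.allclose(vals, [1.0, 1.0, 0.0]))     # True: Hansen-Lommel orthogonality
```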

Finally, let us consider the completely degenerate case \(c=0\). In this case r is also a constant function, so there are no differential equations to solve. We can still diagonalize the (degenerate) Lax operator, which is now independent of time.

Theorem 6.2

Define \(U:\ell ^2(\mathbb {Z}) \rightarrow L^2[0,2\pi ]\) by

$$\begin{aligned}{}[Ue_n](x) = \frac{e^{inx}}{\sqrt{2\pi }}, \end{aligned}$$

then U is unitary and \(U \circ \pi _k(L)\circ U^* = M(2kr \cos (x))\).

7 Modification of orthogonality measures

In this section we briefly investigate the orthogonality measures from the previous sections in case the Lax operator L(t) acts as a finite or semi-infinite Jacobi matrix. In these cases the functions \(U_te_n\) are t-dependent orthogonal polynomials and we see that the weight function \(W_t\) of the orthogonality measure for \(U_t e_n\) is a modification of the weight function \(W_0\) in the sense that

$$\begin{aligned} W_t(x) = K_t W_0(x) m(t)^x, \end{aligned}$$

where \(K_t\) is independent of x. The modification function m(t) depends on the functions s or r, which (implicitly) depend on the function u. We show how the choice of u affects the modification function m.

Theorem 7.1

There exists a constant K such that

$$\begin{aligned} m(t) = \exp \left( K\int _0^t \frac{ u(\tau ) }{r(\tau )}\, d\tau \right) , \qquad t \ge 0. \end{aligned}$$

Remark 7.2

In the Toda-lattice case, \(u(t) = r(t)\), this gives back the well-known modification function \(m(t) = e^{Kt}\), see e.g. [13, Theorem 2.8.1].

Theorem 7.1 can be checked for each case by a straightforward calculation: we express m as a function of s and r,

$$\begin{aligned} m(t) = A_0 F(s(t),r(t)), \end{aligned}$$

where \(A_0\) is a normalizing constant such that \(m(0)=1\). Then differentiating and using the differential equations for r and s we can express \(\dot{m}/ m\) in terms of u and r.

7.1 \(\varvec{\mathfrak {su}}(2)\)

From Theorem 3.2 we see that

$$\begin{aligned} m(t) = A_0 \frac{p(t)}{1-p(t)} = A_0 \frac{C+s(t)}{C-s(t)} \end{aligned}$$

with \(C = \sqrt{s^2+r^2}\). Differentiating to t and using the relation \(\dot{s}(t) = 2 u(t) r(t)\) then gives

$$\begin{aligned} \frac{\dot{m}(t) }{m(t) } = \frac{ 4C u(t)r(t) }{C^2-s(t)^2} = 4C \frac{ u(t)}{r(t)}. \end{aligned}$$
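This computation, and hence Theorem 7.1 in the \(\mathfrak {su}(2)\) case, can be checked numerically; in the sketch below the choice of u and of the initial data is ours and only has to respect the assumptions of Lemma 3.1.

```python
# Numerical sketch (ours): m(t) from Theorem 3.2 equals exp(4C * int_0^t u/r).
import numpy as np
from scipy.integrate import solve_ivp

u = lambda t: 0.2 + 0.05 * np.cos(3 * t)     # positive, so sgn(u) = sgn(r)

def rhs(t, y):
    s, r, I = y                              # I(t) = int_0^t u(tau)/r(tau) dtau
    return [2 * u(t) * r, -2 * u(t) * s, u(t) / r]

s0, r0 = 0.5, 1.0
sol = solve_ivp(rhs, (0.0, 2.0), [s0, r0, 0.0], dense_output=True, rtol=1e-11)
s, r, I = sol.sol(np.linspace(0.0, 2.0, 50))

C = np.hypot(s0, r0)
m = (C + s) / (C - s) * (C - s0) / (C + s0)  # A_0 p(t)/(1 - p(t)), so m(0) = 1
print(np.allclose(m, np.exp(4 * C * I)))     # True
```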

7.2 \(\varvec{\mathfrak {su}}(1,1)\)

For \(s^2-r^2>0\) Theorem 4.3 shows that

$$\begin{aligned} m(t) = A_0e^{-2{{\,\mathrm{arccosh}\,}}\left( \frac{s(t)}{r(t)} \right) }. \end{aligned}$$

Then from \(\dot{s}(t) = -2u(t)r(t)\) and \(\dot{r}(t) = -2u(t)s(t)\) it follows that

$$\begin{aligned} \frac{\dot{m}(t) }{m(t) } = \frac{-2}{\sqrt{\frac{s(t)^2}{r(t)^2}-1}} \frac{ r(t) \dot{s}(t) - s(t) \dot{r}(t) }{r(t)^2} = -4C \frac{u(t) }{r(t)}, \end{aligned}$$

where \(C = \sqrt{s^2-r^2}\).

For \(s^2-r^2=0\) Theorem 4.4 shows that

$$\begin{aligned} m(t)= A_0 e^{-\frac{1}{r(t)}}. \end{aligned}$$

Then using \(\dot{r}(t) = -2u(t)r(t)\) (recall \(s=r\) in this case) it follows that

$$\begin{aligned} \frac{\dot{m}(t) }{m(t)} = \frac{\dot{r}(t)}{r(t)^2} = - 2\,\frac{u(t)}{r(t)} . \end{aligned}$$

For \(s^2-r^2<0\) it follows from Theorem 4.5 that

$$\begin{aligned} m(t) = A_0 e^{2\arccos \left( -\frac{s(t) }{r(t)} \right) }. \end{aligned}$$

Then from \(\dot{s}(t) = -2u(t)r(t)\) and \(\dot{r}(t) = -2u(t)s(t)\) it follows that

$$\begin{aligned} \frac{\dot{m}(t) }{m(t) } = \frac{2}{\sqrt{1-\frac{s(t)^2}{r(t)^2}}} \frac{ r(t) \dot{s}(t) - s(t) \dot{r}(t) }{r(t)^2} = -4C \frac{u(t) }{r(t)}, \end{aligned}$$

where \(C = \sqrt{r^2-s^2}\).

7.3 \(\varvec{\mathfrak b}(1)\)

For \(c \ne 0\) we see from Theorem 5.2 that

$$\begin{aligned} m(t) = A_0 r(t)^2. \end{aligned}$$

The relation \(\dot{r}(t) = -2c u(t)\) then leads to

$$\begin{aligned} \frac{ \dot{m}(t) }{m(t) } = -4c\frac{u(t)}{r(t)}. \end{aligned}$$

For \(c=0\) Theorem 5.3 shows that

$$\begin{aligned} m(t) = A_0 e^{-\frac{s(t)}{r^2}}. \end{aligned}$$

Note that \(r=r(t)\) is constant in this case. Then \(\dot{s}(t) = 2ru(t)\) leads to

$$\begin{aligned} \frac{\dot{m}(t)}{m(t) }= -\frac{\dot{s}(t)}{r^2} = -2\, \frac{ u(t) }{r}. \end{aligned}$$

Remark 7.3

The result from Theorem 7.1 is also valid for the orthogonal functions from Theorems 4.6 and 4.8, i.e. for L(t) acting as a Jacobi operator on \(\ell ^2(\mathbb {Z})\) in the principal unitary series for \(\mathfrak {su}(1,1)\) in the cases \(r^2-s^2 \ne 0\). However, there is no similar modification function in the other cases where L(t) acts as a Jacobi operator on \(\ell ^2(\mathbb {Z})\). Furthermore, the corresponding recurrence relations for the functions on \(\mathbb {Z}\) can be rewritten as recurrence relations for \(2\times 2\) matrix orthogonal polynomials, but in none of these cases is the modification of the weight function as in Theorem 7.1.

8 The case of \(\varvec{\mathfrak {sl}}(d+1,\varvec{\mathbb {C}}\,)\)

We generalize the situation of the Lax pair for the finite-dimensional representation of \(\mathfrak {sl}(2,\mathbb {C})\) to the higher rank case of \(\mathfrak {sl}(d+1,\mathbb {C})\). Let \(E_{i,j}\) be the matrix entries forming a basis for \(\mathfrak {gl}(d+1,\mathbb {C})\). We label \(i,j\in \{0,1,\ldots , d\}\). We put \(H_i= E_{i-1,i-1}-E_{i,i}\), \(i\in \{1,\ldots , d\}\), for the elements spanning the Cartan subalgebra of \(\mathfrak {sl}(d+1,\mathbb {C})\).

8.1 The Lax pair

Proposition 8.1

Let

$$\begin{aligned} L(t)&= \sum _{i=1}^d s_i(t) H_i + \sum _{i=1}^d r_i(t) \bigl ( E_{i-1,i}+ E_{i,i-1}\bigr ), \\ M(t)&= \sum _{i=1}^d u_i(t) \bigl ( E_{i-1,i} - E_{i,i-1} \bigr ) \end{aligned}$$

and assume that the functions \(u_i\) and \(r_i\) are non-zero for all i and

$$\begin{aligned} \frac{r_{i-1}(t)}{r_i(t)} =\frac{u_{i-1}(t)}{u_i(t)}, \qquad i\in \{2,\ldots , d\}, \end{aligned}$$

then the Lax pair condition \(\dot{L}(t)=[M(t),L(t)]\) is equivalent to

$$\begin{aligned} \dot{s}_i(t)&= 2r_i(t) u_i(t), \qquad i\in \{1,\ldots , d\}, \\ \dot{r}_i(t)&= u_i(t) \bigl ( s_{i-1}(t) - 2s_i(t) + s_{i+1}(t) \bigr ), \qquad i\in \{2,\ldots , d-1\}, \\ \dot{r}_1(t)&= u_1(t) \bigl ( s_{2}(t) - 2s_1(t) \bigr ), \\ \dot{r}_d(t)&= u_d(t) \bigl ( s_{d-1}(t) -2s_d(t) \bigr ). \end{aligned}$$

Note that we can write this uniformly as

$$\begin{aligned} \dot{r}_i(t) = u_i(t) \bigl ( s_{i-1}(t) - 2s_i(t) + s_{i+1}(t)\bigr ), \qquad i\in \{1,\ldots , d\}, \\ \end{aligned}$$

assuming the convention that \(s_0(t)=s_{d+1}(t)=0\), which we adopt for the remainder of this section. The Toda case follows by taking \(u_i=r_i\) for all i, see [2, 22].

Proof

The proof essentially follows as in [2, Sect. 4.6], but since the situation is slightly more general we present the proof, see also [22, Sect. 5]. A calculation in \(\mathfrak {sl}(d+1,\mathbb {C})\) gives

$$\begin{aligned}{}[M(t),L(t)]&= \sum _{i=1}^d 2r_i(t)u_i(t) H_i + \sum _{i=1}^d u_i(t) \bigl ( s_{i-1}(t) - 2s_i(t) + s_{i+1}(t) \bigr )\\&\quad \times (E_{i-1,i}+E_{i,i-1}) \\&\quad + \sum _{i=1}^{d-1} \bigl (r_{i+1}(t) u_i(t) - r_{i}(t) u_{i+1}(t) \bigr ) (E_{i-1,i+1}+E_{i+1,i-1}) \end{aligned}$$

and the last term needs to vanish, since it does not occur in L(t) nor in its derivative \(\dot{L}(t)\). Now the stated coupled differential equations correspond to \(\dot{L}=[M,L]\). \(\square \)

Remark 8.2

Taking the representation of the Lax pair for the \(\mathfrak {su}(2)\) case in the \((d+1)\)-dimensional representation as in Sect. 3, we get, with \(d=2j\), as an example

$$\begin{aligned} s_i(t) = s(t)i(i-1-d), \quad r_i(t) = r(t)\sqrt{i(d+1-i)}, \quad u_i(t) = -u(t)\sqrt{i(d+1-i)}. \end{aligned}$$

Then the coupled differential equations of Proposition 8.1 are equivalent to (3.1).
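The coupled system of Proposition 8.1 can also be checked numerically; the sketch below (our data) takes the Toda choice \(u_i=r_i\), which satisfies the ratio condition trivially, and confirms that the spectrum of L(t) in the natural representation is preserved.

```python
# Numerical sketch (ours) of Proposition 8.1 with u_i = r_i (Toda case).
import numpy as np
from scipy.integrate import solve_ivp

d = 4
rng = np.random.default_rng(3)
s0, r0 = rng.uniform(-1, 1, d), rng.uniform(0.5, 1.5, d)

def tridiag(s, r):
    # L(t) in the natural representation, with s_0 = s_{d+1} = 0
    diag = np.diff(np.concatenate(([0.0], s, [0.0])))
    return np.diag(diag) + np.diag(r, 1) + np.diag(r, -1)

def rhs(t, y):
    s, r = y[:d], y[d:]
    spad = np.concatenate(([0.0], s, [0.0]))
    ds = 2 * r**2                                          # ds_i/dt = 2 r_i u_i
    dr = r * (spad[:-2] - 2 * spad[1:-1] + spad[2:])       # dr_i/dt
    return np.concatenate((ds, dr))

sol = solve_ivp(rhs, (0.0, 3.0), np.concatenate((s0, r0)),
                rtol=1e-10, atol=1e-12)
sT, rT = sol.y[:d, -1], sol.y[d:, -1]
print(np.allclose(np.sort(np.linalg.eigvalsh(tridiag(s0, r0))),
                  np.sort(np.linalg.eigvalsh(tridiag(sT, rT)))))   # True
```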

Let \(\{e_n\}_{n=0}^d\) be the standard orthonormal basis for \(\mathbb {C}^{d+1}\), the natural representation of \(\mathfrak {sl}(d+1,\mathbb {C})\). Then L(t) is a t-dependent tridiagonal matrix. Moreover, we assume that \(r_i\) and \(s_i\) are real-valued functions for all i, so that L(t) is self-adjoint in the natural representation.

Lemma 8.3

Assume that the conditions of Proposition 8.1 hold. Let the polynomials \(p_n(\cdot ;t)\) of degree \(n \in \{0,1,\ldots , d\}\) in \(\lambda \) be generated by the initial value \(p_0(\lambda ;t)=1\) and the recursion

$$\begin{aligned} \lambda p_n(\lambda ;t) = {\left\{ \begin{array}{ll} r_1(t) p_1(\lambda ;t) + s_1(t) p_0(\lambda ;t), &{} n=0 \\ r_{n+1}(t) p_{n+1}(\lambda ;t) + (s_{n+1}(t)-s_n(t)) p_n(\lambda ;t)\\ \quad + r_n(t) p_{n-1}(\lambda ;t), &{} 1\le n < d. \end{array}\right. } \end{aligned}$$

Let the set \(\{ \lambda _0, \ldots , \lambda _d\}\) consist of the solutions of

$$\begin{aligned} \lambda p_d(\lambda ;t) = -s_d(t) p_d(\lambda ;t) + r_d(t) p_{d-1}(\lambda ;t). \end{aligned}$$

In the natural representation L(t) has simple spectrum \(\sigma (L(t))= \{ \lambda _0, \ldots , \lambda _d\}\), which is independent of t, with \(\sum _{r=0}^d \lambda _r=0\), and

$$\begin{aligned} L(t) \sum _{n=0}^d p_n(\lambda _r;t)e_n = \lambda _r \, \sum _{n=0}^d p_n(\lambda _r;t)e_n, \quad r\in \{0,1\ldots , d\}. \end{aligned}$$

Note that with the choice of Remark 8.2, the polynomials in Lemma 8.3 are Krawtchouk polynomials, see Theorem 3.2. Explicitly,

$$\begin{aligned} p_n(C(d-2r);t) = \left( \frac{p(t)}{1-p(t)}\right) ^{\frac{1}{2} n} \left( {\begin{array}{c}d\\ n\end{array}}\right) ^{1/2} \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n, -r}{-d} \ ;\frac{1}{p(t)} \right) = K_n(r;p(t),d), \end{aligned}$$
(8.1)

where \(C=\sqrt{r^2(t)+s^2(t)}\) is invariant, see Theorem 3.2 and its proof.

Proof

In the natural representation we have

$$\begin{aligned} L(t) e_n = {\left\{ \begin{array}{ll} r_{1}(t) e_1 + s_1(t) e_0, &{} n=0, \\ r_{n+1}(t)e_{n+1} + (s_{n+1}(t)-s_n(t))e_n +r_{n}(t) e_{n-1}, &{} 1\le n <d, \\ -s_d(t) e_d + r_d(t) e_{d-1}, &{} n= d \end{array}\right. } \end{aligned}$$

as a Jacobi operator. So the spectrum of L(t) is simple, and the spectrum is time independent, since (L(t), M(t)) is a Lax pair. We can generate the corresponding eigenvectors as \(\sum _{n=0}^d p_n(\lambda ;t) e_n\), where the recursion follows from the expression of the Lemma. The eigenvalues are then determined by the final equation, and since \(\mathrm {Tr}(L(t))=0\) we have \(\sum _{i=0}^d \lambda _i=0\). \(\square \)

Let \(P(t) = \bigl (p_i(\lambda _j;t)\bigr )_{i,j=0}^d\) be the corresponding matrix of eigenvectors, so that

$$\begin{aligned} L(t) P(t) = P(t) \Lambda , \qquad \Lambda = \mathrm {diag}(\lambda _0, \lambda _1,\ldots , \lambda _d). \end{aligned}$$

Since L(t) is self-adjoint in the natural representation, we find

$$\begin{aligned} \sum _{n=0}^d p_n(\lambda _r;t) \overline{p_n(\lambda _s;t)} = \frac{\delta _{r,s}}{w_r(t)}, \qquad w_r(t)>0, \end{aligned}$$
(8.2)

and the matrix \(Q(t) = \bigl (p_i(\lambda _j;t)\sqrt{w_j(t)} \bigr )_{i,j=0}^d\) is unitary. As \(r_i\) and \(s_i\) are real-valued, we have \(\overline{p_n(\lambda _s;t)} = p_n(\lambda _s;t)\), so that Q(t) is a real matrix, hence orthogonal. We will assume moreover that the \(r_i\) are positive functions. The dual orthogonality relations to (8.2) then read

$$\begin{aligned} \sum _{r=0}^d p_n(\lambda _r;t) p_m(\lambda _r;t) w_r(t) = \delta _{n,m}. \end{aligned}$$
(8.3)

Note that the \(w_r(t)\) are essentially time-dependent Christoffel numbers [26, Sect. 3.4]. By [22, Sect. 2], see also [6, Thm. 2], the eigenvalues and the \(w_r(t)\)’s determine the operator L(t), and in case of the Toda lattice, i.e. \(u_i(t) = r_i(t)\), the time evolution corresponds to linear first order differential equations for the Christoffel numbers [22, §3].

Since the spectrum is time-independent, the invariants for the system of Proposition 8.1 are given by the coefficients of the characteristic polynomial of L(t) in the natural representation. Since the characteristic polynomial is obtained by switching to the three-term recurrence for the corresponding monic polynomials, see [13, Sect. 2.2] and [22, §2], this gives the same computation. For a Lax pair, \(\mathrm {Tr}(L(t)^k)\) are invariants, and in this case the invariant for \(k=1\) is trivial since L(t) is traceless. In this way we have d invariants, \(\mathrm {Tr}(L(t)^k)\), \(k\in \{2,\ldots , d+1\}\).

Lemma 8.4

With the convention that \(r_n\) and \(s_n\) are zero for \(n\notin \{1,\ldots ,d\}\) we have the invariants

$$\begin{aligned} \mathrm {Tr}(L(t)^2)&= \sum _{n=0}^d (s_{n+1}(t)-s_n(t))^2 + 2\sum _{n=1}^d r_n(t)^2, \\ \mathrm {Tr}(L(t)^3)&= \sum _{n=0}^d (s_{n+1}(t)-s_n(t))^3 + 3\sum _{n=0}^d (s_{n+1}(t)-s_n(t)) r_n^2(t) \\&\qquad + 3\sum _{n=0}^d (s_{n}(t)-s_{n-1}(t)) r_n^2(t). \end{aligned}$$

Proof

Write \(L(t) = DS + D_0 + S^*D\) with \(D=\mathrm {diag}(r_0(t), r_1(t),\ldots , r_d(t))\), \(S:e_n\mapsto e_{n+1}\) the shift operator and \(S^*:e_n\mapsto e_{n-1}\) its adjoint (with the convention \(e_{-1}=e_{d+1}=0\) and \(r_0(t)=0\)). Here \(D_0\) is the diagonal part of L(t). Then

$$\begin{aligned} \mathrm {Tr}(L(t)^k) = \mathrm {Tr}((DS + D_0 + S^*D)^k) \end{aligned}$$

and we need to collect the terms that have the same number of S and \(S^*\) in the expansion. The trace property then allows us to collect terms, and we get

$$\begin{aligned} \mathrm {Tr}(L(t)^2)&= \mathrm {Tr}(D_0^2) + 2\mathrm {Tr}(D^2), \\ \mathrm {Tr}(L(t)^3)&= \mathrm {Tr}(D_0^3) + 3\mathrm {Tr}(D_0D^2) + 3\mathrm {Tr}(SD_0S^*D^2) \end{aligned}$$

and this gives the result, since \((SD_0S^*)_{n,n}= (D_0)_{n-1,n-1}\). \(\square \)

We do not use Lemma 8.4, and we have included it to indicate the analog of Corollary 2.3.

We can continue this and find e.g.

$$\begin{aligned} \mathrm {Tr}(L(t)^4)&= \mathrm {Tr}(D_0^4) + 2\mathrm {Tr}(D^4) + 4\mathrm {Tr}(D_0^2D^2) + 4\mathrm {Tr}(SD_0S^*D_0D^2) \\&\qquad + 4\mathrm {Tr}(SD_0^2S^*D^2) + 4\mathrm {Tr}(SD^2S^*D^2). \end{aligned}$$
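The formulas in Lemma 8.4 are straightforward to test numerically; the sketch below uses random real data with the conventions \(r_0=0\) and \(s_0=s_{d+1}=0\) stated above.

```python
# Numerical verification (ours) of the trace formulas in Lemma 8.4.
import numpy as np

d = 5
rng = np.random.default_rng(7)
s = np.concatenate(([0.0], rng.uniform(-1, 1, d), [0.0]))   # s_0, ..., s_{d+1}
r = np.concatenate(([0.0], rng.uniform(0.5, 1.5, d)))       # r_0, ..., r_d

diag = s[1:] - s[:-1]                     # (s_{n+1} - s_n), n = 0, ..., d
L = np.diag(diag) + np.diag(r[1:], 1) + np.diag(r[1:], -1)

tr2 = np.sum(diag**2) + 2 * np.sum(r**2)
tr3 = (np.sum(diag**3)
       + 3 * np.sum((s[2:] - s[1:-1]) * r[1:]**2)
       + 3 * np.sum((s[1:-1] - s[:-2]) * r[1:]**2))
print(np.isclose(tr2, np.trace(L @ L)),
      np.isclose(tr3, np.trace(L @ L @ L)))   # True True
```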

8.2 Action of L(t) in representations

We relate the eigenvectors of L(t) in some explicit representations of \(\mathfrak {sl}(d+1)\) to multivariable Krawtchouk polynomials, and we follow Iliev’s paper [12].

Let \(N\in \mathbb {N}\), and let \(\mathbb {C}_N[x]=\mathbb {C}_N[x_0,\ldots , x_d]\) be the space of homogeneous polynomials of degree N in \(d+1\) variables. Then \(\mathbb {C}_N[x]\) is an irreducible representation of \(\mathfrak {sl}(d+1)\) and \(\mathfrak {gl}(d+1)\) given by \(E_{i,j} \mapsto x_i \frac{\partial }{\partial x_j}\). \(\mathbb {C}_N[x]\) is a highest weight representation corresponding to \(N\omega _1\), \(\omega _1\) being the first fundamental weight for type \(A_d\). Then \(x^\rho = x_0^{\rho _0}\cdots x_d^{\rho _d}\), \(|\rho |=\sum _{i=0}^d\rho _i=N\), is an eigenvector of \(H_i\); \(H_i\cdot x^\rho = (\rho _{i-1}-\rho _i)x^\rho \). So the monomials \(x^\rho \) form a basis of joint eigenvectors for the Cartan subalgebra spanned by \(H_1,\ldots , H_d\), and each joint eigenspace, i.e. each weight space, is 1-dimensional. It is a unitary representation for the inner product

$$\begin{aligned} \langle x^\rho , x^\sigma \rangle = \delta _{\rho ,\sigma } \left( {\begin{array}{c}N\\ \rho \end{array}}\right) ^{-1}= \delta _{\rho ,\sigma } \frac{\rho _0! \cdots \rho _d!}{N!} \end{aligned}$$

and it gives a unitary representation of \(SU(d+1)\) as well.

Then the eigenfunctions of L(t) in \(\mathbb {C}_N[x]\) are \(\tilde{x}^\rho \), where

$$\begin{aligned} (\tilde{x}_0, \ldots , \tilde{x}_d) = (x_0, \ldots , x_d) Q(t) \end{aligned}$$

since Q(t) maps the eigenvectors for the Cartan subalgebra to eigenvectors for the operator L(t), cf. [12, Sect. 3]. This corresponds to the action of \(SU(d+1)\) (and of \(U(d+1)\)) on \(\mathbb {C}_N[x]\). Since Q(t) is unitary, we have

$$\begin{aligned} \langle \tilde{x}^\rho , \tilde{x}^\sigma \rangle = \langle x^\rho , x^\sigma \rangle = \delta _{\rho ,\sigma } \left( {\begin{array}{c}N\\ \rho \end{array}}\right) ^{-1}. \end{aligned}$$
(8.4)

We recall the generating function for the multivariable Krawtchouk polynomials as introduced by Griffiths [9], see [12, §1]:

$$\begin{aligned} \prod _{i=0}^d \Bigl (z_0 + \sum _{j=1}^d u_{i,j} z_j\Bigr )^{\rho _i} = \sum _{|\sigma |=N} \left( {\begin{array}{c}N\\ \sigma \end{array}}\right) P(\sigma ',\rho ') z_0^{\sigma _0}\cdots z_d^{\sigma _d}, \end{aligned}$$
(8.5)

where \(\rho '= (\rho _1,\ldots , \rho _d) \in \mathbb {N}^d\), and similarly for \(\sigma '\). We consider \(P(\rho ',\sigma ')\) as polynomials in \(\sigma '\in \mathbb {N}^d\) of degree \(\rho '\) depending on \(U=(u_{i,j})_{i,j=1}^d\), see [12, §1].

Lemma 8.5

The eigenvectors of L(t) in \(\mathbb {C}_N[x]\) are

$$\begin{aligned} \tilde{x}^\rho = \prod _{i=0}^d \bigl ( w_i(t)\bigr )^{\frac{1}{2} \rho _i} \sum _{|\sigma |=N} \left( {\begin{array}{c}N\\ \sigma \end{array}}\right) P(\sigma ',\rho ') x^\sigma \end{aligned}$$

for \(u_{i,j} = \frac{Q(t)_{j,i}}{Q(t)_{0,i}}= p_j(\lambda _i;t)\), \(1 \le i,j\le d\) in (8.5), and \(L(t) \tilde{x}^\rho = (\sum _{i=0}^d \lambda _i \rho _i )\tilde{x}^\rho \). The eigenvalue follows from the conjugation with the diagonal element \(\Lambda \).

From now on we assume these values for \(u_{i,j}\), \(1 \le i,j \le d\). Explicit expressions for \(P(\sigma ',\rho ')\) in terms of Gelfand hypergeometric series are due to Mizukawa and Tanaka [21], see [12, (1.3)]. See also Iliev [12] for an overview of special and related cases of these multivariable polynomials.

Proof

Observe that

$$\begin{aligned} \tilde{x}_i = \sum _{j=0}^d x_j Q(t)_{j,i} = Q(t)_{0,i} \Bigl ( x_0 + \sum _{j=1}^d \frac{Q(t)_{j,i}}{Q(t)_{0,i}} x_j\Bigr ) \end{aligned}$$

and \(Q(t)_{0,i}= \sqrt{w_i(t)}\) is non-zero. Now expanding \(\tilde{x}^\rho \) using (8.5) and \(Q(t)_{i,j} = p_i(\lambda _j;t) \sqrt{w_j(t)}\) gives the result. \(\square \)

By the orthogonality (8.4) of the eigenvectors of L(t) we find

$$\begin{aligned} \sum _{|\sigma |=N}&\left( {\begin{array}{c}N\\ \sigma \end{array}}\right) P(\sigma ',\rho ')P(\sigma ',\eta ') = \frac{\delta _{\rho ,\eta }}{\left( {\begin{array}{c}N\\ \rho \end{array}}\right) \prod _{i=0}^d w_i(t)^{\rho _i}}, \\ \sum _{|\rho |=N}&\left( {\begin{array}{c}N\\ \rho \end{array}}\right) \Bigl ( \prod _{i=0}^d w_i(t)^{\rho _i}\Bigr ) P(\sigma ',\rho ')P(\tau ',\rho ') = \frac{\delta _{\sigma ,\tau }}{\left( {\begin{array}{c}N\\ \sigma \end{array}}\right) }, \end{aligned}$$

where we use that all entries of Q(t) are real. The second orthogonality relation follows by duality, and these orthogonality relations correspond to [12, Cor. 5.3].
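As a small numerical consistency check (not part of the argument), one can read off \(P(\sigma ',\rho ')\) from Lemma 8.5 for a randomly chosen real orthogonal matrix playing the role of Q(t), with the column signs chosen so that \(Q(t)_{0,i}=\sqrt{w_i(t)}>0\), and verify both orthogonality relations; a minimal Python sketch using numpy and sympy:

import numpy as np
from sympy import symbols, expand, Poly
from math import comb, isclose

d, N = 2, 2
rng = np.random.default_rng(seed=3)

# Generic real orthogonal matrix in the role of Q(t); column signs are flipped
# so that the first row is positive, matching Q(t)_{0,i} = sqrt(w_i(t)).
Q, _ = np.linalg.qr(rng.standard_normal((d + 1, d + 1)))
Q = Q * np.sign(Q[0, :])
w = Q[0, :] ** 2

x = symbols(f"x0:{d+1}")

def compositions(n, k):
    # all multi-indices in N^k with entries summing to n
    if k == 1:
        yield (n,)
    else:
        for s0 in range(n + 1):
            for rest in compositions(n - s0, k - 1):
                yield (s0,) + rest

def multinom(sigma):
    # multinomial coefficient binom(|sigma|, sigma)
    out, rest = 1, sum(sigma)
    for s in sigma:
        out *= comb(rest, s)
        rest -= s
    return out

indices = list(compositions(N, d + 1))

# P(sigma', rho') via Lemma 8.5: expand tilde{x}^rho = prod_i (sum_j Q_{j,i} x_j)^{rho_i}
# and divide the coefficient of x^sigma by W_rho * binom(N, sigma).
P = {}
for rho in indices:
    xt_rho = 1
    for i in range(d + 1):
        xt_rho *= sum(float(Q[j, i]) * x[j] for j in range(d + 1)) ** rho[i]
    coeffs = Poly(expand(xt_rho), *x).as_dict()
    W_rho = np.prod(Q[0, :] ** np.array(rho))
    for sigma in indices:
        P[sigma, rho] = float(coeffs.get(sigma, 0)) / (W_rho * multinom(sigma))

# first orthogonality relation
for rho in indices:
    for eta in indices:
        lhs = sum(multinom(s) * P[s, rho] * P[s, eta] for s in indices)
        rhs = (rho == eta) / (multinom(rho) * np.prod(w ** np.array(rho)))
        assert isclose(lhs, rhs, rel_tol=1e-8, abs_tol=1e-8)
# second (dual) orthogonality relation
for sigma in indices:
    for tau in indices:
        lhs = sum(multinom(r) * np.prod(w ** np.array(r)) * P[sigma, r] * P[tau, r] for r in indices)
        rhs = (sigma == tau) / multinom(sigma)
        assert isclose(lhs, rhs, rel_tol=1e-8, abs_tol=1e-8)
print("orthogonality relations verified for d = N = 2")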

In case \(N=1\) we find \(P(f_i',f_j')= p_i(\lambda _j;t)\), where \(f_i\in \mathbb {N}^{d+1}\) is given by \((0,\ldots , 0, 1, 0,\ldots , 0)\) with the 1 in the i-th spot.
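Consequently, for \(N=1\), taking \(\rho =f_i\), \(\eta =f_k\) in the first relation and \(\sigma =f_j\), \(\tau =f_l\) in the second, the orthogonality relations above reduce to

$$\begin{aligned} \sum _{j=0}^d p_j(\lambda _i;t)\,p_j(\lambda _k;t) = \frac{\delta _{i,k}}{w_i(t)}, \qquad \sum _{i=0}^d w_i(t)\, p_j(\lambda _i;t)\,p_l(\lambda _i;t) = \delta _{j,l}, \end{aligned}$$

i.e. to the dual orthogonality and the orthogonality relations of the finite family of orthogonal polynomials \(p_n(\,\cdot \,;t)\), \(0\le n\le d\).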

Lemma 8.6

For all \(\rho , \tau \in \mathbb {N}^{d+1}\) with \(|\rho |=|\tau |\), the polynomials P from Lemma 8.5 satisfy the recurrence

$$\begin{aligned} \Bigl ( \sum _{i=0}^d \lambda _i\rho _i\Bigr ) P(\tau ',\rho ') ={}& \Bigl ( \sum _{i=0}^d s_i(t) (\tau _{i-1}-\tau _i)\Bigr ) P(\tau ',\rho ')\\ &+ \sum _{i=0}^d r_i(t) \bigl ( \tau _{i-1} P((\tau -f_{i-1}+f_i)',\rho ') + \tau _{i} P((\tau +f_{i-1}-f_i)',\rho ')\bigr ). \end{aligned}$$

Note that Lemma 8.6 does not follow from [12, Theorem 6.1].

Proof

Apply Lemma 8.5 to expand \(\tilde{x}^\rho \) on both sides of \(L(t)\tilde{x}^\rho = (\sum _{i=0}^d \lambda _i\rho _i) \tilde{x}^\rho \), and use the explicit expression for L(t) and its action on \(\mathbb {C}_N[x]\). Comparing the coefficients of \(x^\tau \) on both sides gives the result. \(\square \)

Remark 8.7

In the context of Remark 8.2 and (8.1) we have that the \(u_{i,j}\) are Krawtchouk polynomials. Then the left hand side in (8.5) is related to the generating function for the Krawtchouk polynomials, see [16, (9.11.11)], i.e. the case \(d=1\) of (8.5). Putting \(z_j = (\frac{p}{1-p})^{-\frac{1}{2} j} \left( {\begin{array}{c}d\\ j\end{array}}\right) ^{\frac{1}{2}}w^j\), we see that in this situation \(\sum _{j=0}^d u_{i,j} z_j\) corresponds to \((1+w)^{d-i} (1- \frac{1-p(t)}{p(t)}w)^i\). Using this in the generating function, the left hand side of (8.5) gives a generating function for Krawtchouk polynomials. Comparing the coefficients of \(w^k\) on both sides gives

$$\begin{aligned}&\left( \frac{p}{1-p} \right) ^{\frac{1}{2} k} \left( {\begin{array}{c}dN\\ k\end{array}}\right) \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-\sum _{i=0}^di\rho _i, -k}{-dN} \ ;\frac{1}{p} \right) \\&\quad = \sum _{|\sigma |=N, \sum _{j=0}^d j\sigma _j=k} \left( \prod _{j=0}^d \left( {\begin{array}{c}d\\ j\end{array}}\right) ^{\frac{1}{2} \sigma _j}\right) \left( {\begin{array}{c}N\\ \sigma \end{array}}\right) P(\sigma ',\rho '). \end{aligned}$$

The left hand side is, up to a normalization, the overlap coefficient of L(t) in the \(\mathfrak {sl}(2,\mathbb {C})\) case for the representation of dimension \(Nd+1\), see Sect. 3. Indeed, the composition \(\mathfrak {sl}(2,\mathbb {C}) \rightarrow \mathfrak {sl}(d+1,\mathbb {C}) \rightarrow \text {End}(\mathbb {C}_N[x])\) yields a reducible representation of \(\mathfrak {sl}(2,\mathbb {C})\), and the vector \(x^{(0,\ldots , 0,N)}\) is a highest weight vector of \(\mathfrak {sl}(2,\mathbb {C})\) for the highest weight dN. Restricting to the corresponding irreducible subrepresentation then gives the above connection.

8.3 t-Dependence of multivariable Krawtchouk polynomials

Let \(L(t) v(t) = \lambda v(t)\). Taking the t-derivative gives \(\dot{L}(t)v(t) + L(t)\dot{v}(t) =\lambda \dot{v}(t)\), since \(\lambda \) is independent of t, and using the Lax pair \(\dot{L}=[M,L]\) gives

$$\begin{aligned} (\lambda - L(t)) (M(t) v(t) -\dot{v}(t))=0. \end{aligned}$$

Since L(t) has simple spectrum, the \(\lambda \)-eigenspace of L(t) is spanned by v(t), and \(M(t)v(t)-\dot{v}(t)\) lies in this eigenspace, so we conclude that

$$\begin{aligned} M(t) v(t) = \dot{v}(t) + c(t,\lambda ) v(t) \end{aligned}$$

for some scalar \(c(t,\lambda )\) depending on the eigenvalue \(\lambda \) and on t. Note that this differs from [24, Lemma 2].

For the case \(N=1\) we get

$$\begin{aligned} M(t)v_{\lambda _r}(t) = \sum _{n=0}^d \bigl ( p_{n-1}(\lambda _r;t) u_n(t) - p_{n+1}(\lambda _r;t) u_{n+1}(t)\bigr ) x_n \end{aligned}$$

with the convention that \(u_0(t)=u_{d+1}(t)=0\), \(p_{-1}(\lambda _r;t)=0\). So

$$\begin{aligned} (M(t)-c(t,\lambda _r))v_{\lambda _r}(t) = \dot{v}_{\lambda _r}(t) = \sum _{n=0}^d \dot{p}_n(\lambda _r;t) \, x_n \end{aligned}$$

and comparing the coefficients of \(x_0\), using \(p_0\equiv 1\) so that \(\dot{p}_0(\lambda _r;t)=0\), we find \(c(t,\lambda _r) = - p_1(\lambda _r;t)u_1(t)\). So we have obtained the following proposition.

Proposition 8.8

The polynomials satisfy

$$\begin{aligned} \begin{aligned} \dot{p}_n(\lambda _r;t)&= u_n(t) p_{n-1}(\lambda _r;t) - u_{n+1}(t) p_{n+1}(\lambda _r;t) + u_1(t) p_1(\lambda _r;t) p_n(\lambda _r;t),\\&\quad 1\le n<d, \\ \dot{p}_d(\lambda _r;t)&= u_d(t) p_{d-1}(\lambda _r;t) + u_1(t) p_1(\lambda _r;t) p_d(\lambda _r;t) \end{aligned} \end{aligned}$$

for all eigenvalues \(\lambda _r\) of L(t), \(r\in \{0,\ldots , d\}\).

Note that for \(0 \le n<d\) we have

$$\begin{aligned} \dot{p}_n(\lambda ;t) = u_n(t) p_{n-1}(\lambda ;t) - u_{n+1}(t) p_{n+1}(\lambda ;t) + u_1(t) p_1(\lambda ;t) p_n(\lambda ;t) \end{aligned}$$
(8.6)

as a polynomial identity in \(\lambda \). Indeed, for \(n=0\) both sides vanish: the left hand side since \(p_0\equiv 1\), and the right hand side since \(u_0=0\) and \(p_{-1}=0\), so that it equals \(-u_1 p_1 + u_1 p_1 p_0 = 0\). For \(1\le n<d\) both sides are polynomials in \(\lambda \) of degree at most n; in particular the right hand side is of degree n, and not of degree \(n+1\), since the coefficient of \(\lambda ^{n+1}\) is zero because of the relation on the \(u_i\) and \(r_i\) in Proposition 8.1. Since by Proposition 8.8 the identity holds at the \(d+1\) distinct eigenvalues \(\lambda _r\) of the Jacobi matrix L(t), it holds as a polynomial identity.

Writing out the identity for the Krawtchouk polynomials we obtain after simplifying

$$\begin{aligned}&n \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n,-r}{-d} \ ;\frac{1}{p(t)} \right) + \frac{2nr(1-p(t))}{dp(t)}\,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{1-n,1-r}{1-d} \ ;\frac{1}{p(t)} \right) \\&\quad = n(1-p(t)) \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{1-n,-r}{-d} \ ;\frac{1}{p(t)} \right) - p(t) (d-n) \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-1-n,-r}{-d} \ ;\frac{1}{p(t)} \right) \\&\qquad + (dp(t)-r) \,_{2}F_{1} \left( \genfrac{}{}{0.0pt}{}{-n,-r}{-d} \ ;\frac{1}{p(t)} \right) , \end{aligned}$$

where the left hand side is related to the derivative \(\dot{p}_n(\lambda _r;t)\). Note that the derivative of p(t) cancels against the factors u, see Theorem 3.2 and its proof and Sect. 7.
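Since all \({}_2F_{1}\)-series involved terminate, this identity can be checked symbolically for small parameter values; the following Python sketch (a consistency check only, not part of the argument) verifies it in sympy for \(2\le d\le 5\), \(1\le n<d\) and \(0\le r\le d\), with the terminating series implemented directly:

from sympy import rf, factorial, symbols, simplify

p = symbols("p", positive=True)

def F(a, b, c, x):
    # terminating 2F1(a, b; c; x); a or b is assumed to be a nonpositive integer
    kmax = min(-m for m in (a, b) if m <= 0)
    return sum(rf(a, k) * rf(b, k) / (rf(c, k) * factorial(k)) * x**k
               for k in range(kmax + 1))

for d in range(2, 6):
    for n in range(1, d):          # the identity for 1 <= n < d; the case n = 0 is trivial
        for r in range(0, d + 1):  # r labels the d+1 eigenvalues
            lhs = (n * F(-n, -r, -d, 1/p)
                   + 2*n*r*(1 - p) / (d*p) * F(1 - n, 1 - r, 1 - d, 1/p))
            rhs = (n*(1 - p) * F(1 - n, -r, -d, 1/p)
                   - p*(d - n) * F(-1 - n, -r, -d, 1/p)
                   + (d*p - r) * F(-n, -r, -d, 1/p))
            assert simplify(lhs - rhs) == 0
print("identity verified for 2 <= d <= 5")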

In order to obtain a similar expression for the multivariable t-dependent Krawtchouk polynomials we need to assume that the spectrum of L(t) is simple, i.e. we assume that for \(\rho ,\tilde{\rho } \in \mathbb {N}^{d+1}\) with \(|\rho |=|\tilde{\rho }|\) we have that \(\sum _{i=0}^d \lambda _i(\rho _i-\tilde{\rho }_i)=0\) implies \(\rho =\tilde{\rho }\). Assuming this we calculate, using Proposition 8.1,

$$\begin{aligned} M(t) \tilde{x}^\rho = W_\rho (t) \sum _{|\sigma |=N} \left( {\begin{array}{c}N\\ \sigma \end{array}}\right) P(\sigma ',\rho ') \sum _{r=1}^d u_r(t) (\sigma _r x^{\sigma +f_{r-1}-f_r} - \sigma _{r-1} x^{\sigma -f_{r-1}+f_r}) \end{aligned}$$

using the notation \(W_\rho (t) = \prod _{i=0}^d w_i(t)^{\frac{1}{2} \rho _i}\) and \(f_i=(0,\ldots ,0,1, 0,\ldots , 0)\in \mathbb {N}^{d+1}\), with the 1 at the i-th spot. Now the t-derivative of \(\tilde{x}^\rho \) is

$$\begin{aligned} \dot{W}_\rho (t) \sum _{|\sigma |=N} \left( {\begin{array}{c}N\\ \sigma \end{array}}\right) P(\sigma ',\rho ') x^\sigma + W_\rho (t) \sum _{|\sigma |=N} \left( {\begin{array}{c}N\\ \sigma \end{array}}\right) \dot{P}(\sigma ',\rho ') x^\sigma \end{aligned}$$

and it remains to determine the constant C in \(M(t) \tilde{x}^\rho - C\tilde{x}^\rho = \frac{\partial }{\partial t}\tilde{x}^\rho \). We determine C by looking at the coefficient of \(x_0^N\) using \(P(0, \rho ')= P((N,0,\ldots ,0)',\rho ')=1\). This gives \(C= N u_1(t)W_\rho (t)^{-1} - \frac{\partial }{\partial t} \ln W_\rho (t)\). Comparing the coefficients of \(x^\tau \) on both sides gives the following result.

Theorem 8.9

Assume that L(t) acting in \(\mathbb {C}_N[x]\) has simple spectrum. The t-derivative of the multivariable Krawtchouk polynomials satisfies

$$\begin{aligned}&\dot{W}_\rho (t) P(\tau ',\rho ') + W_\rho (t) \dot{P}(\tau ',\rho ')\\&\quad = \bigl ( \dot{W}_\rho (t) -N u_1(t)\bigr ) P(\tau ',\rho ') + W_\rho (t) \sum _{r=1}^d u_r(t) \bigl ( \tau _{r-1} P((\tau -f_{r-1}+f_r)',\rho ')\\&\qquad - \tau _{r} P((\tau +f_{r-1}-f_r)',\rho ')\bigr ) \end{aligned}$$

for all \(\rho ,\tau \in \mathbb {N}^{d+1}\), \(|\tau |=|\rho |=N\).