1 Introduction

First we explain a general problem setting. Assume that \(D_i\) are bounded symmetric domains for \(i=1\), 2 such that \(D_2\subset D_1\). For \(i=1\), 2, we denote by \(Aut(D_i)\) the group of biholomorphic automorphisms of \(D_i\) and fix subgroups \(G_i\subset Aut(D_i)\). We assume that there is an embedding \(\iota : G_2\rightarrow G_1\) which is equivariant with respect to the embedding \(D_2 \subset D_1\). Let \(V_i\) (\(i=1,2\)) be finite dimensional vector spaces over \({\mathbb C}\). We consider two automorphy factors \(J_{D_i}(g_i,Z):G_i \times D_i\rightarrow GL(V_i)\) for \(i=1\), 2. We denote by \(\mathrm{Hol}(D_i,V_i)\) the space of \(V_i\)-valued holomorphic functions on \(D_i\). For \(g_i\in G_i\) and \(F_i\in \mathrm{Hol}(D_i,V_i)\), we write \((F_i|_{J_{D_i}}[g_i])(Z_i)=J_{D_i}(g_i,Z_i)^{-1}F_i(g_iZ_i)\) (\(Z_i\in D_i\), \(g_i \in G_i\)). Our problem is to describe explicitly all linear holomorphic \(V_2\)-valued differential operators \({\mathbb D}\) with constant coefficients on \(\mathrm{Hol}(D_1,V_1)\) such that the following diagram is commutative for any \(g_2\in G_2\).

(1)

Here \(Res_{D_2}\) is the restriction of functions from \(D_1\) to \(D_2\). Roughly speaking, this condition means that if \(F\in \mathrm{Hol}(D_1,V_1)\) is an automorphic form of weight \(J_{D_1}\), then \(Res_{D_2}({\mathbb D}(F))\) is also an automorphic form of weight \(J_{D_2}\). So we sometimes call Condition (1) the automorphic property, though this is a condition on real Lie groups and has nothing to do with discrete groups. This is a realization of intertwining operators of holomorphic discrete series corresponding to \(J_{D_i}\). Such differential operators are given by polynomials in partial derivatives, so this is also a problem on a certain kind of special polynomials. This opens a new area of special functions and there are a lot of open problems in this general setting.

In this paper, we consider the case when \(D_1\) is the Siegel upper half space \(H_n\) of degree n. We denote by \(Sp(n,{\mathbb R})\subset SL(2n,{\mathbb R})\) the real symplectic group of real rank n. For positive integers n and \(r\ge 2\), we fix an ordered partition \({\pmb n}=(n_1,\ldots ,n_r)\) of n with \(n=n_1+\cdots +n_r\), where the \(n_p\) (\(1\le p\le r\)) are positive integers. Then the domain \(H_{{\pmb n}}=H_{n_1}\times \cdots \times H_{n_r}\) is embedded diagonally into \(H_n\) and we regard \(D_2=H_{{\pmb n}}\) in the diagram (1). Also the group \(Sp({\pmb n},{\mathbb R})=\prod _{p=1}^{r}Sp(n_p,{\mathbb R})\) is naturally embedded into \(Sp(n,{\mathbb R})\), acting equivariantly with respect to the embedding \(H_{{\pmb n}}\rightarrow H_n\). Roughly speaking, our aim is to obtain all the linear holomorphic vector valued partial differential operators \({\mathbb D}\) with constant coefficients such that for scalar valued Siegel modular forms F of weight k, the restriction \(Res_{H_{{\pmb n}}}({\mathbb D}F)\) of \({\mathbb D}F\) to \(H_{{\pmb n}}\) is a Siegel modular form on \(H_{{\pmb n}}\) of a certain weight \(det^k\otimes \rho \). Here we denote by \((\rho ,V)\) a polynomial representation of \(GL({\pmb n},{\mathbb C})=\prod _{p=1}^{r}GL(n_p,{\mathbb C})\). So we have \(V_1={\mathbb C}\) and \(V_2=V\) in the condition (1). When \(\rho \) is irreducible, we have a decomposition \(\rho =\rho _1\otimes \cdots \otimes \rho _r\), where each \(\rho _p\) is an irreducible representation of \(GL(n_p,{\mathbb C})\), and \(Res_{H_{{\pmb n}}}({\mathbb D}F)\) is of weight \(det^k \rho _p\) as a function on \(H_{n_p}\) for each p. Since we assumed that our differential operators \({\mathbb D}\) are linear and have constant coefficients, there is a certain V-valued polynomial P(T) over \({\mathbb C}\) in the components of an \(n\times n\) symmetric matrix T of variables such that

$$\begin{aligned} {\mathbb D}=P\biggl (\frac{{\partial }}{{\partial }Z}\biggr ), \quad \text {where we put } \frac{{\partial }}{{\partial }Z}=\biggl (\frac{1+\delta _{ij}}{2}\frac{{\partial }}{{\partial }z_{ij}}\biggr ), \quad Z=(z_{ij}) \in H_n. \end{aligned}$$

So our aim is to give a description of the V-valued polynomials P which give \({\mathbb D}\) as above. Even if we forget Siegel modular forms, these polynomials of several variables are interesting in themselves and can be regarded as a highly non-trivial generalization of the classical Gegenbauer polynomials. We see by [14] that the components of the vectors P lie in a set of polynomials with several harmonicity conditions depending on \({\pmb n}\). We call such polynomials higher spherical polynomials for the partition. The space of these polynomials has two different canonical bases and we approach the problem of describing P in two different ways through these two bases. One way is to use the monomial basis for the partition, which consists of certain polynomials containing as a main part a monomial in the components of the off-diagonal blocks of T (i.e. components of some \(T_{pq}\) with \(p\ne q\) of \(T=(T_{pq})\), where \(T_{pq}\) is the \(n_p\times n_q\) matrix block of T). By using this, we can reduce an explicit calculation of P to the problem of realizing a representation of \(GL({\pmb n})=\prod _{p=1}^{r}GL(n_p)\) on polynomials in the off-diagonal block variables. Up to this realization, we can give an explicit algorithm producing all the P we want, starting from scratch. The other way is to use the descending basis for the partition, which consists of a set of polynomials mutually related by mixed Laplacians. This method is rather theoretical but somewhat mysterious. We explicitly construct a generating series \(G^{({\pmb n})}\) of the descending basis for the partition \({\pmb n}\) and we claim that every P is obtained by a certain projection of this series. In this sense, we may call this series a generic generating series. Besides, this series has a unified expression for any n and any partition \({\pmb n}\) with a recursive structure by which we can calculate it explicitly, starting from a series in one variable and proceeding to general n. If we stop the calculation at \(n=2\), then we obtain the generating series of the Gegenbauer polynomials. Now, replacing T by \(\frac{{\partial }}{{\partial }Z}\) in the generating series, we obtain a differential operator which we call a generic differential operator \({\mathbb D}_U\). Naturally, all the differential operators we want in this paper are obtained by projections of \({\mathbb D}_U\).

We will also give the following direct applications of our differential operators. We prove that the Taylor coefficients of Siegel modular forms with respect to the off-diagonal block variables, or of Jacobi forms of general degree of any matrix index with respect to the vector part of the arguments, are linear combinations of certain derivatives of vector valued Siegel modular forms of lower degrees obtained as images of our differential operators. We also show that the original forms are recovered from these Siegel modular forms of lower degrees. The proof of this part is not trivial at all and we need a precise argument for the existence of certain good operators. This result is an ultimate generalization of Eichler–Zagier’s results on Jacobi forms of degree one. Several more explicit results and practical constructions are also given.

Historically, the differential operators described above are important for various reasons. They are indispensable for explicit calculations of the critical values of the standard L functions of Siegel modular forms (for example, see [1,2,3, 5, 23, 24]), and they are also used to give a construction of liftings (see [16, 17]). But the essential points of the theory are independent of Siegel modular forms, and it can be regarded as an interesting theory of new special functions of several variables defined by a system of differential equations, sometimes holonomic, including the Gegenbauer polynomials as a prototype. The case \(n_1=n_2=\cdots =n_r=1\) is treated in [22, 26], and the case \(n=m+m\) in [25]. This paper is a natural generalization of those papers, and in particular depends on many previous results in [22]. Also some announcements have been given in [19, 20]. Another explicit one-line formula for \({\mathbb D}\) for the case \(n=m+m\) based on a different idea is written separately in [21]. Some basic related theories have been written in [2] and [14]. In a different context, such differential operators are also studied in [29, 30] and other papers by the same authors. They call such operators symmetry breaking operators. Although the operators themselves seem to be the same as our operators, their motivation and results are mostly very different from ours and there seems to be no essential overlap with our theory. As we explained, our differential operators give intertwining operators from holomorphic discrete series of \(Sp(n,{\mathbb R})\) of scalar type to holomorphic discrete series of \(Sp({\pmb n},{\mathbb R})\) of vector type. R. Nakahama wrote in [32] some explicit general theory on this sort of problem for several symmetric pairs of \(G_1\) and \(G_2\) in the setting of (1). In our symplectic case, this means the case \(r=2\) for the partition of n. He gave an intertwining operator from \(G_2\) to \(G_1\) (an embedding case in his terminology) by differential operators of infinite order, including the symplectic case, but the projection case from \(G_1\) to \(G_2\) is treated only for scalar type, and the case from \(Sp(n,{\mathbb R})\) to \(Sp({\pmb n},{\mathbb R})\), which is our subject here, is not treated in that paper (see also [25]). Since his treatment is fairly general, such work would give us a hint for a conceptual explanation of our generic differential operator, which came out mysteriously from scratch by experience. By the way, there are orthogonal polynomials in several variables associated to root systems called Heckman–Opdam polynomials [9,10,11]. I understand that, roughly speaking, they generalized the differential equations for the radial part of the classical Riemannian symmetric pairs to general parameters, and described polynomial solutions by generalized hypergeometric series. As we explained in [25, Section 7], in the case when \(r=2\) and, in addition, the target weight is scalar, the radial part of our polynomials can be written by similar hypergeometric functions. But the natural differential equations for our theory are different from theirs, and the expression of our polynomials by hypergeometric series and the relation of the radial part to the original homogeneous polynomials that we need are very complicated. Besides, there is no such known theory for \(r\ge 3\) as far as the author knows. So we believe that our polynomials are quite new in various senses.

The paper is organized as follows. In Sect. 2, after reviewing some part of [14] and [22], we give a characterization of the differential operators with the automorphic property (1) for \((D_1,D_2)=(H_n,H_{{\pmb n}})\) by higher spherical polynomials for a partition with a certain representation theoretic behaviour. We also give fundamental results on such polynomials. In Sect. 3, we explicitly define a generic generating series \(G^{({\pmb n})}\) of a descending basis of higher spherical polynomials for a partition, and by using this, we define the generic differential operator \({\mathbb D}_U\). We show that our differential operators and the related vectors of polynomials are obtained by a certain projection from these (Theorems 3.1, 3.2). Based on those theorems, we explain how to calculate the dimension of the space of our differential operators for a fixed initial weight and target weight. We also give more explicit examples of generating series in some special cases. In Sect. 4, we first review from [22] the monomial basis for \({\pmb n}=(1,\ldots ,1)\) and an inner metric of the higher spherical polynomials, and we add a few remarks not written in [22]. Applying these, we give a monomial basis for a partition in Theorem 4.1, which is a generalization of the one in [22], but not a part of that. We also give in Sect. 4.2 and in Theorem 4.6 a practical algorithm for constructing our differential operators using the monomial basis for the partition defined here. In most cases, this method is more practical than the one in Theorem 3.1. We also give a simple example by using this method. In Sect. 5, by using the results in Sect. 4, we prove Theorem 5.1 on relations between Taylor coefficients with respect to components in the off-diagonal blocks and vector valued Siegel modular forms of lower degrees. In Sect. 6, we explain how to apply our operators to the Taylor expansion of Jacobi forms and give an open question. In Section 1 of the Appendix, we give an explicit irreducible space decomposition of the action of \(GL(2)\times GL(2)\) on \(2\times 2\) matrices. This can be regarded as a necessary supplement for giving a more explicit method to obtain our differential operators for \(n=4\), \(r=2\) and \(n_1=n_2=2\) using our theorems, both by the monomial basis and by the descending basis for the partition. We also give a concrete example.

2 Higher spherical polynomials

We state our problems more concretely now. For any holomorphic function F on \(H_n\), any integer k, and any \(g=\left( {\begin{matrix} A &{} B \\ C &{} D \end{matrix}}\right) \in Sp(n,{\mathbb R})\), we define

$$\begin{aligned}F|_{k}[g]=\det (CZ+D)^{-k}F(gZ). \end{aligned}$$

We fix a partition \({\pmb n}=(n_1,\ldots ,n_r)\), a positive integer k, and a polynomial representation \((\rho ,V)\) of \(GL({\pmb n},{\mathbb C})\). For any \(g=(g_1,\ldots ,g_r) \in Sp({\pmb n},{\mathbb R})\) with \(g_p=\left( {\begin{matrix} A_p &{} B_p \\ C_p &{} D_p \end{matrix}}\right) \in Sp(n_p,{\mathbb R})\), and any V-valued function \(f(Z_{11},Z_{22},\ldots ,Z_{rr})\) on \(H_{{\pmb n}}=H_{n_1}\times \cdots \times H_{n_r}\) with \(Z_{pp}\in H_{n_p}\), we write

$$\begin{aligned}&f(Z_{11},Z_{22},\ldots ,Z_{rr})|_{k,\rho }[g]=\prod _{p=1}^{r}\det (C_pZ_{pp}+D_p)^{-k} \\&\quad \times \rho (C_1Z_{11}+D_1,\ \ldots ,\ C_{r}Z_{rr}+D_{r})^{-1} f(g_1Z_{11},g_2Z_{22},\ldots ,g_rZ_{rr}). \end{aligned}$$

We consider linear V-valued holomorphic differential operators \({\mathbb D}\) with constant coefficients on holomorphic functions on \(H_n\) which satisfy the following condition.

Condition 2.1

For any holomorphic function F on \(H_n\) and any element \(g=(g_1,\ldots ,g_r) \in Sp({\pmb n},{\mathbb R})=\prod _{p=1}^{r}Sp(n_p,{\mathbb R}) \subset Sp(n,{\mathbb R})\), we have

$$\begin{aligned} Res_{H_{{\pmb n}}}({\mathbb D}(F|_{k}[g]))=(Res_{H_{{\pmb n}}}({\mathbb D}F))|_{k,\rho }[g] \end{aligned}$$

where we denote by \(Res_{H_{{\pmb n}}}\) the restriction of functions of \(H_n\) to \(H_{{\pmb n}}\).

For the sake of simplicity, we will say that k is the initial weight and \(det^k\otimes \rho \) is the target weight of \({\mathbb D}\) in this condition. Here by abuse of language, we denote by \(det^k\) the representation of \(GL({\pmb n},{\mathbb C})\) defined by \(\prod _{p=1}^{r}\det (h_p)^k\) for \(h=(h_p)\in GL({\pmb n},{\mathbb C})\). For any irreducible polynomial representation \((\rho ,V)\) of \(GL({\pmb n},{\mathbb C})=\prod _{p=1}^{r}GL(n_p,{\mathbb C})\), we denote by \({\mathbb D}(k,det^k\rho )\) the linear space over \({\mathbb C}\) of V-valued holomorphic linear differential operators \({\mathbb D}\) with constant coefficients which satisfy Condition 2.1 for the initial weight k and the target weight \(det^k\otimes \rho \).

It is obvious that for any linear holomorphic V-valued differential operator \({\mathbb D}\) with constant coefficients as above, there exists a V-valued polynomial \(P_V(T)\) in the components of an \(n\times n\) symmetric matrix T of variables such that \({\mathbb D}=P_V\left( \frac{{\partial }}{{\partial }Z}\right) \), where

$$\begin{aligned} \frac{{\partial }}{{\partial }Z}=\left( \frac{1+\delta _{ij}}{2}\frac{{\partial }}{{\partial }z_{ij}}\right) _{1\le i,j\le n}, \quad \begin{array}{l} Z=(z_{ij})\in H_n, \\ \delta _{ij}\text { is the Kronecker delta.} \end{array} \end{aligned}$$

If \({\mathbb D}\) satisfies Condition 2.1, then such polynomials \(P_V\) have been characterized by invariant pluri-harmonic polynomials in [14]. We quote this result here. Let d be a positive integer and \(Y=(y_{i\nu })\) an \(n \times d\) matrix of variable components. We say that a polynomial \(\widetilde{P}(Y)\) in the \(y_{i\nu }\) is pluri-harmonic when

$$\begin{aligned} \Delta _{ij}(Y)\widetilde{P}=0 \text { for any }i, j\text { with } 1\le i,j\le n \end{aligned}$$

where we put \(\Delta _{ij}(Y)=\sum _{\nu =1}^{d}\dfrac{{\partial }^{2}}{{\partial }y_{i\nu }{\partial }y_{j\nu }}\).

Let \(\rho \) be a finite dimensional irreducible polynomial representation of \(GL({\pmb n},{\mathbb C})\) with a representation space V. For \(n\times d\) matrix Y, we denote by \(Y_p\) the block matrix of size \(n_p \times d\) such that

$$\begin{aligned} Y=\begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_r \end{pmatrix}. \end{aligned}$$

We identify \(GL({\pmb n},{\mathbb C})=\prod _{p=1}^{r}GL(n_p,{\mathbb C})\) with the group of matrices

$$\begin{aligned} \left\{ A=\begin{pmatrix} A_1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} A_2 &{}\ddots &{} \vdots \\ \vdots &{} \ddots &{} \ddots &{} 0 \\ 0 &{} \cdots &{} 0 &{} A_r \end{pmatrix};\ A_{p}\in GL(n_p,{\mathbb C})\right\} \subset GL(n,{\mathbb C}). \end{aligned}$$
(2)

We will often write such matrices by \(A=(A_1,\ldots ,A_r)\) for short. We denote by T an \(n\times n\) symmetric matrix of variable components as before.

Theorem 2.2

[14] We assume that \(d\ge n\). For any V-valued polynomial \(P_V(T)\), the differential operator \({\mathbb D}=P_V(\frac{{\partial }}{{\partial }Z})\) satisfies Condition 2.1 for the initial weight d/2 and the target weight \(det^{d/2}\otimes \rho \) if and only if \(P_V\) satisfies the following two conditions.

  1. (i)

    If we define a V-valued polynomial \(\widetilde{P}\) by \(\widetilde{P}(Y)=P_V(Y\,^{t}Y)\) for an \(n \times d\) matrix Y, then all the components of \(\widetilde{P}\) are pluri-harmonic with respect to each \(Y_p\).

  2. (ii)

    For any matrix \(A \in GL({\pmb n},{\mathbb C})\subset GL(n,{\mathbb C})\), we have

    $$\begin{aligned} \widetilde{P}(AY) = \rho (A)\widetilde{P}(Y). \end{aligned}$$

Note that this theorem does not assert anything on the existence of such \(P_V\). In principle, the question of how many such \(P_V\) exist can be answered by using Kashiwara–Vergne [27] and the branching rule for the restriction of the representation of \(O(d)^{r}\) to the diagonal subgroup isomorphic to O(d), but generally such calculations are hard. We will see in the next section that we have a better solution for this. By the way, the decomposition of pluri-harmonic polynomials of matrix argument under the action of \(GL(n)\times O(d)\) is a part of Howe’s theory of dual reductive pairs, which also describes holomorphic discrete series corresponding to representations of GL(n) (see [12, 27]). But our point here is to take multiple tensor products of these and the subspace invariant under the diagonal action of O(d). If we put \(n=2\) and \(n_1=n_2=1\) in our formulation, this is the usual setting of the classical Gegenbauer polynomials and the tensor is the product of two spaces. We do not gain much by emphasizing this conceptual side, so we stick mostly to a concrete description.
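
To make conditions (i) and (ii) of Theorem 2.2 concrete in this classical setting, the following minimal SymPy sketch (ours, not taken from [14] or [22]) takes \(n=2\), \(n_1=n_2=1\), \(d=3\) and the degree two polynomial \(P_V(T)=d\,t_{12}^2-t_{11}t_{22}\); it checks that \(\widetilde{P}(Y)=P_V(Y\,^{t}Y)\) is annihilated by \(\Delta _{11}(Y)\) and \(\Delta _{22}(Y)\), and that it transforms by \(\rho (A)=(a_1a_2)^2\) for diagonal A.

```python
# Minimal SymPy check of Theorem 2.2 for n = 2, n_1 = n_2 = 1 and d = 3 (our illustration).
# P_V(T) = d*t12**2 - t11*t22 is the degree-two Gegenbauer-type polynomial.
import sympy as sp

n, d = 2, 3
Y = sp.Matrix(n, d, lambda i, nu: sp.Symbol(f'y{i + 1}{nu + 1}'))
T = Y * Y.T                                   # substitution T = Y tY
t11, t12, t22 = T[0, 0], T[0, 1], T[1, 1]
Ptilde = d * t12**2 - t11 * t22               # \tilde P(Y) = P_V(Y tY)

def laplacian(expr, i, j):
    """Delta_{ij}(Y) = sum_nu d^2/(dy_{i,nu} dy_{j,nu})."""
    return sum(sp.diff(expr, Y[i, nu], Y[j, nu]) for nu in range(d))

# condition (i): pluri-harmonicity for each block Y_p (here the blocks are the two rows)
assert sp.expand(laplacian(Ptilde, 0, 0)) == 0
assert sp.expand(laplacian(Ptilde, 1, 1)) == 0

# condition (ii): \tilde P(AY) = rho(A) \tilde P(Y) with rho(A) = (a1*a2)**2 for A = diag(a1, a2)
a1, a2 = sp.symbols('a1 a2')
AY = sp.diag(a1, a2) * Y
Ptilde_AY = Ptilde.xreplace({Y[i, nu]: AY[i, nu] for i in range(n) for nu in range(d)})
assert sp.expand(Ptilde_AY - (a1 * a2)**2 * Ptilde) == 0
print("Theorem 2.2 (i) and (ii) hold for the Gegenbauer-type example")
```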

In order to describe such polynomials more concretely, we review a part of the results of [22]. First of all, it is not convenient to write the condition of pluri-harmonicity in terms of the coordinates of Y, and we can replace \(\Delta _{ij}(Y)\) by the differential operator \(D_{ij}\) acting on functions in the components \(t_{ij}\) of T, determined by the condition \((D_{ij}P_V)(Y\,^{t}Y)=\Delta _{ij}(Y)\widetilde{P}(Y)\). As in [22], we have

$$\begin{aligned} D_{ij}=D_{ij}^{(d)}=d{\partial }_{ij}+\sum _{k,l=1}^{n}t_{kl}{\partial }_{ik}{\partial }_{jl}, \end{aligned}$$

where we put \({\partial }_{ij}=(1+\delta _{ij})\dfrac{{\partial }}{{\partial }t_{ij}}\). Of course we regard here \(t_{ji}=t_{ij}\), \({\partial }_{ji}={\partial }_{ij}\), and it is obvious that the operators \(D_{ij}\) commute with each other. By rewriting things in this way, we are freed from the original meaning of d and we can assume that d is an arbitrary complex number. Then the pluri-harmonicity of the polynomial \(Q(Y)=P(Y\,^{t}Y)\) in condition (i) is expressed as the following condition on P(T):

$$\begin{aligned} D_{ij}P=0 \quad \hbox {for all}\ (i,j) \in {I({\pmb n})}, \end{aligned}$$

where we put

$$\begin{aligned} I(p)&=\{(i,j)\in {\mathbb Z}^{2}; n_1+\cdots +n_{p-1}+1\le i,j \le n_1+\cdots +n_p\}\nonumber \\&\quad \text { for }p\text { with }1\le p \le r,\end{aligned}$$
(3)
$$\begin{aligned} {I({\pmb n})}&=\bigcup _{p=1}^{r}I(p). \end{aligned}$$
(4)

In short, \({I({\pmb n})}\) is the set of pairs of the row and the column numbers of elements contained in the diagonal block matrices corresponding to the partition \({\pmb n}\).
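
The operators \(D_{ij}^{(d)}\) are easy to implement symbolically. The following minimal sketch (ours, not code from [22]) takes \(n=2\) with symbolic d and the Gegenbauer-type polynomial from above; it checks that \(D_{11}P=D_{22}P=0\), which is the condition for the partition \((1,1)\), while \(D_{12}P\ne 0\).

```python
# The operators D_ij^{(d)} for n = 2 with symbolic d (our sketch).
import sympy as sp

n = 2
d = sp.Symbol('d')
t = {(i, j): sp.Symbol(f't{min(i, j) + 1}{max(i, j) + 1}') for i in range(n) for j in range(n)}

def partial(expr, i, j):
    """partial_{ij} = (1 + delta_ij) d/dt_{ij}, regarding t_{ji} = t_{ij}."""
    return (2 if i == j else 1) * sp.diff(expr, t[(i, j)])

def D(expr, i, j):
    """D_ij^{(d)} = d*partial_{ij} + sum_{k,l} t_{kl} partial_{ik} partial_{jl}."""
    out = d * partial(expr, i, j)
    for k in range(n):
        for l in range(n):
            out += t[(k, l)] * partial(partial(expr, j, l), i, k)
    return sp.expand(out)

P = d * t[(0, 1)]**2 - t[(0, 0)] * t[(1, 1)]     # the Gegenbauer-type polynomial again

assert D(P, 0, 0) == 0 and D(P, 1, 1) == 0        # D_ij P = 0 for (i,j) in I((1,1))
print("D_12 P =", sp.factor(D(P, 0, 1)))          # nonzero: no condition is imposed off the blocks
```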

We denote by \({\mathbb C}[T]\) the space of polynomials in \(t_{ij}\) for \(T=(t_{ij})\). To describe our polynomials in Theorem 2.2 in T variable, we introduce new notation. For any complex number d, we put

$$\begin{aligned} {\mathcal P}(d)&=\{ P(T)\in {\mathbb C}[T];\ D_{ii}^{(d)}P=0 \text { for all }i\text { with } 1\le i \le n\}, \\ {\mathcal P}^{{\pmb n}}(d)&={{\mathcal P}^{(n_1,\ldots ,n_r)}(d)}=\{P(T)\in {\mathbb C}[T]; D_{ij}^{(d)}P=0 \text { for all }(i,j) \in {I({\pmb n})}\}. \end{aligned}$$

The space \({\mathcal P}(d)\) has already been introduced in [22] and elements of \({\mathcal P}(d)\) are called higher spherical polynomials there. By definition we have \({\mathcal P}(d)={\mathcal P}^{(1,\ldots ,1)}(d)\) and \({\mathcal P}^{{\pmb n}}(d)\subset {\mathcal P}(d)\) since \((i,i)\in {I({\pmb n})}\) for all i and \({\pmb n}\). We will call elements of \({\mathcal P}^{{\pmb n}}(d)\) higher spherical polynomials for the partition \({\pmb n}\). When d is an integer, all components of \(P_V(T)\) in Theorem 2.2 are elements of \({\mathcal P}^{{\pmb n}}(d)\). So for any complex number d, it is natural to consider the following definition.

Definition 2.3

We denote by \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) the space of V-valued polynomials \(P_V(T)\) which satisfy the following two conditions.

  1. (i)

    All components of \(P_V(T)\) are elements of \({\mathcal P}^{{\pmb n}}(d)\).

  2. (ii)

    For any \(A\in GL({\pmb n},{\mathbb C})\), we have \(P_V(AT\,^{t}A)=\rho (A)P_V(T)\), where \(GL({\pmb n},{\mathbb C})\) is identified with a subgroup of \(GL(n,{\mathbb C})\) by (2).

The condition (ii) of Theorem 2.2 is equivalent to the above (ii) when d is an integer. If we fix a representation matrix R(A) of \(\rho (A)\) for some basis, then \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) is isomorphic to the space of vectors \(P(T)=\,^{t}(P^{(1)}(T),\ldots ,P^{(l)}(T))\) with \(P^{(i)}(T)\in {\mathcal P}^{{\pmb n}}(d)\) such that \(P(AT{\,^{t}A})=R(A)P(T)\), where \(l=\dim \rho \).

Since any component of an element of \({\mathcal P}^{{\pmb n}}_{\rho }(d)\) is an element of \({\mathcal P}^{{\pmb n}}(d)\), it is natural to ask if \({\mathcal P}^{{\pmb n}}(d)\) is stable by \(GL({\pmb n},{\mathbb C})\). The answer is yes as shown below, so it is natural to study \({\mathcal P}^{{\pmb n}}(d)\) first and then apply results for \({\mathcal P}^{{\pmb n}}(d)\) to the irreducible decomposition of \({\mathcal P}^{{\pmb n}}(d)\) to obtain \({\mathcal P}_{\rho }^{{\pmb n}}(d)\).

Proposition 2.4

The space \({\mathcal P}^{{\pmb n}}(d)\) is stable by the action of \(GL({\pmb n},{\mathbb C})\).

Before proving Proposition 2.4, we give a lemma. Let d be any complex number. We denote by \({\mathcal D}\) the \(n\times n\) matrix of operators defined by \({\mathcal D}=(D_{ij})_{1\le i,j \le n}\). We fix \(A=(a_{ij}) \in GL(n,{\mathbb C})\). We denote by \((A{\mathcal D}{\,^{t}A})_{ij}\) the (i,j) component of \(A{\mathcal D}{\,^{t}A}\), that is,

$$\begin{aligned} (A{\mathcal D}{\,^{t}A})_{ij}=\sum _{p,r=1}^{n}a_{ir}D_{rp}a_{jp}. \end{aligned}$$

Lemma 2.5

Notation being as above,

  1. (i)

    For any differentiable function Q(T) of \(t_{ij}\), we have

    $$\begin{aligned} D_{ij}(Q({\,^{t}A}TA)) = ((A{\mathcal D}{\,^{t}A})_{ij}Q)({\,^{t}A}TA), \end{aligned}$$
    (5)
  2. (ii)

    For any polynomial \(P(T)\in {\mathbb C}[T]\) and any differentiable function Q(T) of \(t_{ij}\), we have

    $$\begin{aligned} P({\mathcal D})(Q({\,^{t}A}TA))=(P(A{\mathcal D}\,^{t}A)Q)({\,^{t}A}TA). \end{aligned}$$
    (6)

Here we note that, by definition, \(D_{rp}\) contains the variables \(t_{kl}\) as coefficients, so on the right-hand side of (5), these variables in \(D_{rp}\) should also be replaced by \(({\,^{t}A}TA)_{kl}\).

Proof

We prove (i). The (rm) component of \({\,^{t}A}TA\) is \(\sum _{i,k=1}^{n}a_{ir}t_{ik}a_{km}\), so noting \(t_{ik}=t_{ki}\), we have

$$\begin{aligned} {\partial }_{ik}(Q({\,^{t}A}TA))&=\sum _{1\le r\le m\le n}(a_{ir}a_{km}+a_{kr}a_{im}) \frac{{\partial }Q}{{\partial }t_{rm}}({\,^{t}A}TA)\\&=\sum _{r,m=1}^{n}a_{ir}a_{km}({\partial }_{rm}Q)({\,^{t}A}TA), \end{aligned}$$

where \({\partial }_{ik}=(1+\delta _{ik})\frac{{\partial }}{{\partial }t_{ik}}\). So we have

$$\begin{aligned} \sum _{k,l=1}^{n}t_{kl}{\partial }_{ik}{\partial }_{jl}(Q({\,^{t}A}TA))&= \sum _{k,l,r,m,p,q=1}^{n} t_{kl}a_{ir}a_{km}a_{jp}a_{lq}(({\partial }_{rm}{\partial }_{pq}Q)({\,^{t}A}TA)) \\&= \sum _{p,q,r,m=1}^{n}a_{ir}a_{jp}\bigl ( \sum _{k,l}t_{kl}a_{km}a_{lq}\bigr )({\partial }_{rm}{\partial }_{pq}Q)({\,^{t}A}TA) \\&= \sum _{p,q,r,m=1}^{n}a_{ir}a_{jp}({\,^{t}A}TA)_{mq}({\partial }_{rm}{\partial }_{pq}Q)({\,^{t}A}TA). \end{aligned}$$

So adding \(d{\partial }_{ij}(Q({\,^{t}A}TA))\) to this, we have

$$\begin{aligned} D_{ij}(Q({\,^{t}A}TA))&=\sum _{p,r=1}^{n}a_{ir}a_{jp}d({\partial }_{rp}Q)({\,^{t}A}TA) \\&\quad +\sum _{p,r=1}^{n} a_{ir}a_{jp}\sum _{q,m=1}^{n}({\,^{t}A}TA)_{mq}\left( ({\partial }_{rm}{\partial }_{pq}Q)({\,^{t}A}TA)\right) \end{aligned}$$

Here by definition we have

$$\begin{aligned} ({\,^{t}A}TA)_{mq}({\partial }_{rm}{\partial }_{pq}Q)({\,^{t}A}TA)= (t_{mq}{\partial }_{rm}{\partial }_{pq}Q)({\,^{t}A}TA), \end{aligned}$$

so we have (5), which proves (i). By using the relation (5) repeatedly, we have (ii). \(\square \)
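
The identity (5) can also be checked by a direct symbolic computation. The following sketch (ours; \(n=2\), a sample polynomial Q and a fixed numerical A, with d kept symbolic) verifies (5) for all (i,j).

```python
# A direct symbolic check of identity (5) in Lemma 2.5 for n = 2 (our sketch; Q and A are sample data).
import sympy as sp

n = 2
d = sp.Symbol('d')
t = {(i, j): sp.Symbol(f't{min(i, j) + 1}{max(i, j) + 1}') for i in range(n) for j in range(n)}
T = sp.Matrix(n, n, lambda i, j: t[(i, j)])

def partial(expr, i, j):
    return (2 if i == j else 1) * sp.diff(expr, t[(i, j)])

def D(expr, i, j):                                # D_ij^{(d)}
    out = d * partial(expr, i, j)
    for k in range(n):
        for l in range(n):
            out += t[(k, l)] * partial(partial(expr, j, l), i, k)
    return out

A = sp.Matrix([[1, 2], [3, 5]])                   # a fixed element of GL(2)
tATA = A.T * T * A                                # the symmetric matrix  tA T A
sub = {t[(i, j)]: sp.expand(tATA[i, j]) for i in range(n) for j in range(n)}
Q = t[(0, 0)] * t[(1, 1)] + t[(0, 1)]**3          # a sample function Q(T)

for i in range(n):
    for j in range(n):
        lhs = D(Q.xreplace(sub), i, j)            # D_ij(Q(tA T A))
        rhs = sum(A[i, r] * A[j, p] * D(Q, r, p) for r in range(n) for p in range(n))
        rhs = sp.expand(rhs).xreplace(sub)        # ((A D tA)_ij Q)(tA T A)
        assert sp.expand(lhs - rhs) == 0
print("identity (5) holds for the sample Q and A")
```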

Proof of Proposition 2.4

We define I(q) and \({I({\pmb n})}\) as before by (3) and (4). For \(A \in GL({\pmb n},{\mathbb C})=\prod _{p=1}^{r}GL(n_p,{\mathbb C})\) and \(P(T)\in {\mathcal P}^{(n_1,\ldots ,n_r)}(d)={\mathcal P}^{{\pmb n}}(d)\), we must show that \(D_{ij}(P({\,^{t}A}TA))=0\) for any \((i,j)\in {I({\pmb n})}\). By (5), we have

$$\begin{aligned} D_{ij}(P({\,^{t}A}TA))=\sum _{r,p=1}^{n}a_{ir}a_{jp}(D_{rp}P)({\,^{t}A}TA). \end{aligned}$$

Here since \(A \in GL({\pmb n},{\mathbb C})\), we have \(a_{ir}=0\) or \(a_{jp}=0\) unless \((i,r) \in {I({\pmb n})}\) and \((j,p)\in {I({\pmb n})}\). Since we have \((i,j) \in {I({\pmb n})}\), there exists some q such that \((i,j) \in I(q)\), and if \(a_{ir}a_{jp}\ne 0\) for some (r,p), then we must have \((i,r)\), \((j,p) \in I(q)\) for the same q. Hence we have \((r,p)\in I(q)\subset {I({\pmb n})}\) and by our assumption we have \(D_{rp}P=0\), hence we also have \(D_{ij}(P({\,^{t}A}TA))=0\). \(\square \)

We fix a vector \(\mathbf{a}=(a_1,\ldots ,a_n)\in ({\mathbb Z}_{\ge 0})^n\). We say that a polynomial \(P(T)\in {\mathbb C}[{T}]\) is homogeneous of multidegree \(\mathbf{a}\) (as in [22]) if it satisfies \(P((c_ic_jt_{ij}))=(\prod _{i=1}^{n}c_i^{a_i})P(T)\) for variables \(c_i\). The space of polynomials of multidegree \(\mathbf{a}\) is denoted by \({\mathbb C}[{T}]_{\mathbf{a}}\). This is of course finite dimensional. We write

$$\begin{aligned} {\mathcal P}_{\mathbf{a}}(d)={\mathcal P}(d)\cap {\mathbb C}[{T}]_{\mathbf{a}}, \quad {\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)={\mathcal P}^{{\pmb n}}(d)\cap {\mathbb C}[{T}]_{\mathbf{a}}. \end{aligned}$$

It is obvious that

$$\begin{aligned} {\mathcal P}^{{\pmb n}}(d)=\oplus _{\mathbf{a}}{\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d). \end{aligned}$$

To parametrize elements in \({\mathcal P}(d)\) and \({\mathcal P}^{{\pmb n}}(d)\) and give dimension formulas, we introduce the following notation

$$\begin{aligned} {\mathcal N}&=\{{\pmb \nu }=\,^{t}{\pmb \nu }=(\nu _{ij})\in M_n({\mathbb Z});\ \nu _{ij}\ge 0,\ \nu _{ii}\in 2{\mathbb Z}\text { for all }i, j\},\\ {\mathcal N}_0&= \{{\pmb \nu }=\,^{t}{\pmb \nu }=(\nu _{ij})\in {\mathcal N};\ \nu _{ii} =0 \text { for all }i\}, \\ {\mathcal N}_0^{{\pmb n}}&={{\mathcal N}_0^{(n_1,\ldots ,n_r)}}=\{{\pmb \nu }=(\nu _{ij}) \in {\mathcal N}_0; \ \nu _{ij}=0 \text { for all } (i,j)\in {I({\pmb n})}\}. \end{aligned}$$

We call an element of \({\mathcal N}\) an index. We denote by \({\pmb 1}\) the column vector in \({\mathbb C}^{n}\) such that all the components are 1, and write

$$\begin{aligned} {\mathcal N}^{{\pmb n}}_0(\mathbf{a}) =\{{\pmb \nu }\in {\mathcal N}^{{\pmb n}}_0;{\pmb \nu }\cdot {\pmb 1}=\mathbf{a}\}. \end{aligned}$$

We denote the cardinality of \({\mathcal N}^{{\pmb n}}_0(\mathbf{a})\) by \(N_{0}^{{\pmb n}}(\mathbf{a})=\#({\mathcal N}_0^{{\pmb n}}(\mathbf{a}))\). We denote by \({\pmb 0}\) the \(n \times n\) zero matrix. In order to apply this to a dimension formula for \({\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)\), we review one of the canonical bases of \({\mathcal P}(d)\) defined in [22].

Theorem 2.6

[22] Unless d is an integer such that \(d<n\), the space \({\mathcal P}(d)\) has a basis (called descending basis) consisting of the polynomials \(P_{{\pmb \nu }}^{D}(T)\) indexed by \({\pmb \nu }\in {\mathcal N}_0\) which are uniquely determined by the following conditions (1) and (2).

  1. (1)

    \(P_{{\pmb 0}}^{D}(T)=1\).

  2. (2)

    For any i, j with \(1\le i\ne j \le n\), we have

    $$\begin{aligned} D_{ij}P_{{\pmb \nu }}^{D}(T)=P_{{\pmb \nu }-{\pmb e}_{ij}}^{D}(T), \end{aligned}$$

    where \({\pmb e}_{ij}\) is the \(n \times n\) symmetric matrix whose components are 1 at the (i,j) and (j,i) places and 0 at all other places. Here we put \(P_{{\pmb \nu }-{\pmb e}_{ij}}^{D}(T)=0\) if any component of \({\pmb \nu }-{\pmb e}_{ij}\) is negative.

As an easy corollary of this theorem, we have the following theorem.

Theorem 2.7

Unless d is an integer with \(d<n\), the set of polynomials \(P_{{\pmb \nu }}^{D}(T)\) with \(\nu \in {\mathcal N}_0^{{\pmb n}}\) gives a basis of \({\mathcal P}^{{\pmb n}}(d)\). We have \(P_{{\pmb \nu }}^{D}(T)\in {\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)\) if and only if \({\pmb \nu }\in {\mathcal N}^{{\pmb n}}_0(\mathbf{a})\), and we have \(\dim {\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)=N_0^{{\pmb n}}(\mathbf{a})\).

Proof

If we take any element P(T) of \({\mathcal P}(d)\) and write it by descending basis as

$$\begin{aligned} P(T)=\sum _{{\pmb \nu }\in {\mathcal N}_0}c_{{\pmb \nu }}P_{{\pmb \nu }}^{D}(T), \end{aligned}$$

then we have

$$\begin{aligned} D_{ij}P(T)=\sum _{{\pmb \nu }\in {\mathcal N}_0}c_{{\pmb \nu }}P_{{\pmb \nu }-{\pmb e}_{ij}}^{D}(T). \end{aligned}$$
(7)

We have \(P_{{\pmb \nu }-{\pmb e}_{ij}}^{D}(T)=0\) if and only if \({\pmb \nu }-{\pmb e}_{ij}\) contains a negative component, which is equivalent to \(\nu _{ij}=0\). If we assume that \(P(T)\in {\mathcal P}^{{\pmb n}}(d)\) and \((i,j)\in I({\pmb n})\), then (7) vanishes by definition. The polynomials \(P_{{\pmb \nu }-{\pmb e}_{ij}}^{D}(T)\) with \(\nu _{ij}\ge 1\) are linearly independent since they form a part of a basis. So we have \(c_{{\pmb \nu }}=0\) unless \(\nu _{ij}=0\). Since we have \(D_{ij}P=0\) for all \((i,j) \in {I({\pmb n})}\), this means that \(c_{{\pmb \nu }}=0\) unless \({\pmb \nu }\in {\mathcal N}^{{\pmb n}}_0\). The polynomials \(P_{{\pmb \nu }}^{D}(T)\) for \({\pmb \nu }\in {\mathcal N}^{{\pmb n}}_0\) are linearly independent by definition. So this gives a basis of \({\mathcal P}^{{\pmb n}}(d)\). The assertion for the dimension is obvious from this. \(\square \)
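
For concrete \({\pmb n}\) and \(\mathbf{a}\), the number \(N_0^{{\pmb n}}(\mathbf{a})\), and hence by Theorem 2.7 the dimension \(\dim {\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)\) unless d is an integer with \(d<n\), can be obtained by a finite enumeration. The following short Python sketch (ours; the function names are of course not from [22]) lists \({\mathcal N}_0^{{\pmb n}}(\mathbf{a})\) by brute force.

```python
# Brute-force enumeration of N_0^{pmb n}(a) (our sketch; by Theorem 2.7 its cardinality equals
# dim P^{pmb n}_a(d) unless d is an integer with d < n).
from itertools import product

def diagonal_block_pairs(blocks):
    """The index set I(pmb n) in 0-based indices."""
    pairs, start = set(), 0
    for size in blocks:
        for i in range(start, start + size):
            for j in range(start, start + size):
                pairs.add((i, j))
        start += size
    return pairs

def N0(blocks, a):
    """All symmetric nu >= 0 vanishing on the diagonal blocks with row sums a."""
    n = sum(blocks)
    inside = diagonal_block_pairs(blocks)
    free = [(i, j) for i in range(n) for j in range(i + 1, n) if (i, j) not in inside]
    found = []
    for vals in product(*(range(min(a[i], a[j]) + 1) for (i, j) in free)):
        nu = [[0] * n for _ in range(n)]
        for (i, j), v in zip(free, vals):
            nu[i][j] = nu[j][i] = v
        if all(sum(row) == ai for row, ai in zip(nu, a)):
            found.append(nu)
    return found

print(len(N0((2, 2), (1, 1, 1, 1))))   # prints 2
print(len(N0((2, 1), (2, 2, 0))))      # prints 0, cf. Remark 2.8 (1) below
```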

Remark 2.8

  1. (1)

    In [22] we have shown that \(\dim {\mathcal P}_{\mathbf{a}}(d)=N_0(\mathbf{a})\) unless d is an even non-positive integer. But Theorem 2.6 is not valid in general for an integer d such that \(0\le d \le n-1\). For example, when \(n=3\), \({\pmb n}=(2,1)\) and \(\mathbf{a}=(2,2,0)\), we have \({\mathcal N}_0^{{\pmb n}}(\mathbf{a})=\emptyset \), but if \(d=1\) (which is less than \(n=3\)), then we see that \({\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)={\mathbb C}(t_{11}t_{22}-t_{12}^2)\), so \(N_0^{{\pmb n}}(\mathbf{a})=0<1=\dim {\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\) (see also the computational sketch after this remark).

  2. (2)

    Although we have shown that \({\mathcal P}^{{\pmb n}}(d)\) is stable by the action of \(GL({\pmb n},{\mathbb C})\), the space \({\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\) is not stable by the action of \(GL({\pmb n},{\mathbb C})\) for general \({\pmb n}\).
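
The example in (1) of Remark 2.8 can be verified symbolically; the following sketch (ours) checks that for \(n=3\), \({\pmb n}=(2,1)\) and \(d=1\) the polynomial \(t_{11}t_{22}-t_{12}^2\) is annihilated by \(D_{ij}^{(1)}\) for all \((i,j)\in {I({\pmb n})}\), while for generic d it is not.

```python
# Symbolic check of the example in Remark 2.8 (1): n = 3, pmb n = (2,1), d = 1 (our sketch).
import sympy as sp

n = 3
d = sp.Symbol('d')
t = {(i, j): sp.Symbol(f't{min(i, j) + 1}{max(i, j) + 1}') for i in range(n) for j in range(n)}

def partial(expr, i, j):
    return (2 if i == j else 1) * sp.diff(expr, t[(i, j)])

def D(expr, i, j, dval):
    out = dval * partial(expr, i, j)
    for k in range(n):
        for l in range(n):
            out += t[(k, l)] * partial(partial(expr, j, l), i, k)
    return sp.expand(out)

P = t[(0, 0)] * t[(1, 1)] - t[(0, 1)]**2
I_n = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]             # I((2,1)) in 0-based indices

assert all(D(P, i, j, 1) == 0 for (i, j) in I_n)            # d = 1: P lies in P^{(2,1)}_a(1)
print([sp.factor(D(P, i, j, d)) for (i, j) in I_n])         # generic d: nonzero multiples of (d - 1)
```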

3 Generic differential operators with the automorphic property

As explained in the introduction, there are two different ways to give \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) concretely. In this section, we give one of them. We construct a certain generic generating series \(G^{({\pmb n})}\) of the descending basis \(P_{{\pmb \nu }}^{D}(T)\) (\({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\)) of \({\mathcal P}^{{\pmb n}}(d)\) and then, by using this, we describe \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) uniformly by a certain universality of \(G^{({\pmb n})}\). We also define a generic differential operator \({\mathfrak D}_{U}\) with the automorphic property (1) based on \(G^{({\pmb n})}\), which is the source of all the differential operators in question.

3.1 Universality

We denote by X an \(n\times n\) symmetric matrix of variables and write this by matrix blocks as \(X=(X_{pq})\) where \(X_{pq}\) is an \(n_p \times n_q\) matrix for each (pq) with \(1\le p,q\le r\). We also assume that \(X_{pp}=0\) for all \(p=1\), ..., r. For any \(\nu \in {\mathcal N}_0^{{\pmb n}}\), we write \(X^{{\pmb \nu }}=\prod _{1\le i<j\le n}x_{ij}^{\nu _{ij}}\) for \(X=(x_{ij})_{1\le i,j\le n}\). Note that by definition of \({\mathcal N}_0^{{\pmb n}}\), there is no \(x_{ij}\) term in this product such that \((i,j)\in {I({\pmb n})}\). We denote by \({{\mathbb C}[[X]]}\) the vector space of formal power series in the components of X, that is, we put

$$\begin{aligned} {{\mathbb C}[[X]]}=\left\{ \sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}c_{{\pmb \nu }}X^{{\pmb \nu }}; c_{{\pmb \nu }}\in {\mathbb C}\right\} . \end{aligned}$$

Identifying elements \(A=(A_1,\ldots ,A_r) \in GL({\pmb n},{\mathbb C})\) with an element in \(GL(n,{\mathbb C})\) as before, we define a left action \(\rho _{U}\) of \(GL({\pmb n},{\mathbb C})\) on \({{\mathbb C}[[X]]}\) by

$$\begin{aligned} \rho _{U}(A)\left( \sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}} c_{{\pmb \nu }}X^{{\pmb \nu }}\right) = \sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}c_{{\pmb \nu }}(^{t}AXA)^{{\pmb \nu }}. \end{aligned}$$

Of course \(\rho _{U}\) is not irreducible at all and is a direct sum of infinitely many finite dimensional irreducible polynomial representations of \(GL({\pmb n},{\mathbb C})\). We write this decomposition as

$$\begin{aligned} \rho _{U}=\oplus _{\lambda }m_{\lambda }\rho _{\lambda }, \end{aligned}$$

where \(\rho _{\lambda }\) are irreducible representations of \(GL({\pmb n},{\mathbb C})\) and \(m_{\lambda }\) are their multiplicities. The irreducible decomposition of \(\rho _U\) is in principle obtained by applying Peter-Weyl Theorem, which we will explain later. Take an \(n \times n\) symmetric matrix \(T=\,^{t}T=(t_{ij})\) of variables \(t_{ij}\). For integers i with \(0\le i\le n\), we define polynomials \(\sigma _i(TX)\) in \(t_{ij}\) and \(x_{ij}\) by the relation

$$\begin{aligned} \det (x 1_n-TX)=\sum _{i=0}^{n}(-1)^{i}\sigma _{n-i}(TX)x^{n-i}, \end{aligned}$$

where x is a variable. Here \(\sigma _0=1\) and we regard \(\sigma _i\) for \(1\le i\le n\) as independent variables. For a complex number \(\nu \) such that \(\nu \) is not a negative integer, we define a formal power series \({\mathbb J}_{\nu }(x)\) in a variable x by

$$\begin{aligned} {\mathbb J}_{\nu }(x)=\sum _{i=0}^{\infty }\frac{x^i}{i!\,(\nu +1)_i}= 1+\frac{x}{\nu +1}+\frac{x^2}{2(\nu +1)(\nu +2)}+\cdots \end{aligned}$$

where \((\nu +1)_i=\prod _{j=1}^{i}(\nu +j)\) is the ascending Pochhammer symbol. We define operators \({\mathcal M}_{i}\) (\(1\le i \le n\)) on \({\mathbb C}[\sigma _1,\ldots ,\sigma _n]\) by

It is obvious that \(\sigma _i{\mathcal M}_if(\sigma _1,\ldots ,\sigma _n) ={\mathcal M}_i(\sigma _if(\sigma _1,\ldots ,\sigma _n))\) for any \(f(\sigma _1,\ldots ,\sigma _n)\in {\mathbb C}[\sigma _1,\ldots ,\sigma _n]\), since \({\mathcal M}_i\) does not contain the derivative with respect to \(\sigma _i\). Finally, we assume that d is any complex number such that \(d \not \in {\mathbb Z}_{\le n-1}\), where \({\mathbb Z}_{\le n-1}\) is the set of integers not greater than \(n-1\). Then we define a formal power series \(G^{({\pmb n})}(T,X)\) in \(\sigma _1\), \(\sigma _2\), ..., \(\sigma _n\) by

$$\begin{aligned} G^{({\pmb n})}(T,X) {=} {\mathbb J}_{\frac{d-n{-}1}{2}}(\sigma _n{\mathcal M}_n)\, {\mathbb J}_{\frac{d-n}{2}}(\sigma _{n-1}{\mathcal M}_{n-1}) \,\cdots \,{\mathbb J}_{\frac{d-3}{2}}(\sigma _2{\mathcal M}_2) \biggl (\frac{1}{(1-\sigma _1/2)^{d-2}}\biggr ). \end{aligned}$$
(8)

For each \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\), we define polynomials \(P_{{\pmb \nu }}(T)\) in \(t_{ij}\) by

$$\begin{aligned} G^{({\pmb n})}(T,X)=\sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}P_{{\pmb \nu }}(T)X^{{\pmb \nu }}. \end{aligned}$$
(9)

For any \(A \in GL({\pmb n},{\mathbb C})\), we have

$$\begin{aligned} G^{({\pmb n})}(AT\,^{t}A,X)=G^{({\pmb n})}(T,\ ^{t}AXA), \end{aligned}$$
(10)

since we have \(\sigma _i(AT{}\,^{t}AX)=\sigma _i(T{}\,^{t}AXA)\).

Theorem 3.1

  1. (i)

    The set of all polynomials \(P_{{\pmb \nu }}(T)\) for \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\) defined above is a basis of \({\mathcal P}^{{\pmb n}}(d)\), and each \(P_{{\pmb \nu }}(T)\) is proportional to \(P_{{\pmb \nu }}^{D}(T)\).

  2. (ii)

    The space \({\mathcal P}^{{\pmb n}}(d)\) is stable under the action of \(GL({\pmb n},{\mathbb C})\).

  3. (iii)

    We have an isomorphism

    $$\begin{aligned} \mathrm{Hom}_{GL({\pmb n},{\mathbb C})}({\mathbb C}[[X]],V)\ni c \mapsto c(G^{({\pmb n})}(T,X))\in {\mathcal P}_{\rho }^{{\pmb n}}(d). \end{aligned}$$

Proof

When \(n_1=\cdots = n_r=1\), we proved in [22] that for each \({\pmb \nu }\in {\mathcal N}_0\), the coefficient \(P_{{\pmb \nu }}(T)\) of \(X^{{\pmb \nu }}\) in \(G^{(1,\ldots ,1)}(T,X)\) is equal to \(P_{{\pmb \nu }}^{D}(T)\) up to multiplication by a non-zero constant. There X is a matrix whose diagonal entries are zero. Since our new series \(G^{({\pmb n})}(T,X)\) is obtained from \(G^{(1,\ldots ,1)}(T,X)\) just by setting \(X_{pp}=0\) for all \(p=1\), ..., r, we see that for \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\), the coefficients \(P_{{\pmb \nu }}(T)\) are proportional to the elements of the descending basis in \({\mathcal P}^{{\pmb n}}(d)\). We have already seen in Theorem 2.7 that this is a basis. The claim (ii) was proved in Proposition 2.4 but is also obvious by (10). The claim (iii) is almost obvious, but since this is an important point of this theorem, we try to give a down-to-earth explanation. By definition of \(\sigma _i\), we see that \(\sigma _i(AT{\,^{t}A}X)=\sigma _i(T{\,^{t}A}XA)\) for any \(A\in GL({\pmb n},{\mathbb C})\), so for \(A\in GL({\pmb n},{\mathbb C})\), we have

$$\begin{aligned} \sum _{{\pmb \nu }\in {\mathcal N}_{0}^{{\pmb n}}}P_{{\pmb \nu }}(AT{\,^{t}A})X^{{\pmb \nu }} = \sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}P_{{\pmb \nu }}(T)({\,^{t}A}XA)^{{\pmb \nu }}. \end{aligned}$$
(11)

Here by definition of \(A \in GL({\pmb n},{\mathbb C})\) and X, the diagonal blocks of the matrix \({\,^{t}A}XA\) are again all zero. Since \(({\,^{t}A}XA)^{{\pmb \nu }}\) is a linear combination of \(X^{{\pmb \mu }}\) for \({\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}\) and this expression is nothing but the representation matrix of \(\rho _{U}\), we see that \(G^{({\pmb n})}(AT{\,^{t}A},X)=\rho _{U}(A)G^{({\pmb n})}(T,X)\). We fix an irreducible representation \(\rho \) which is equivalent to a subrepresentation of \(\rho _{U}\) and denote by \(W(\rho )\) the sum in \({{\mathbb C}[[X]]}\) of all the irreducible subspaces of \({{\mathbb C}[[X]]}\) equivalent to \(\rho \). This is called the \(\rho \)-isobaric component and the decomposition of \({{\mathbb C}[[X]]}\) into isobaric components is uniquely determined. We consider the scalar extension \({\mathbb C}[T][[X]]={{\mathbb C}[[X]]}\otimes _{{\mathbb C}} {\mathbb C}[T]\) of \({{\mathbb C}[[X]]}\), taking polynomials in T as coefficients of the formal power series in the components of X. Then \(G^{({\pmb n})}(T,X)\) is regarded as an element in the vector space \({\mathbb C}[T][[X]]\) over \({\mathbb C}\), and the projection of \(G^{({\pmb n})}(T,X)\) to the \(\rho \)-isobaric component is well defined. In other words, if we choose a basis \(e_i(X)\) (polynomials in X) of \({{\mathbb C}[[X]]}\) over \({\mathbb C}\) consisting of bases of \(W(\rho )\) for all \(\rho \), then rewriting \(X^{{\pmb \nu }}\) by linear combinations of the \(e_i(X)\), we may write

$$\begin{aligned} G^{({\pmb n})}(T,X)=\sum _{i}f_i(T)e_i(X) \end{aligned}$$

where \(f_i(T)\) are certain (finite) linear combinations of \(P_{\nu }(T)\). The partial sum \(P_{\rho }(T,X)\) of \(G^{({\pmb n})}(T,X)\) obtained by the linear combination only over the basis of \(W(\rho )\) is the image of the projection of \(G^{({\pmb n})}(T,X)\) to the \(\rho \)-isobaric component. If we put \(s_{\rho }=\dim \rho \), we have \(s_{\rho }m_{\rho }=\dim W(\rho )\). There is no canonical decomposition of \(W(\rho )\) into irreducible components, but we fix one decomposition \(W(\rho )=\oplus _{l=1}^{m_{\rho }}V_l\)

where \(V_l\) are some irreducible subspaces of \({{\mathbb C}[[X]]}\) equivalent to \(\rho \). For a fixed representation matrix R(A) of \(\rho \) which does not depend on l, we may choose a basis \(\{e_1^{(l)}(X),\ldots ,e_{s_{\rho }}^{(l)}(X)\}\) of \(V_l\) so that

$$\begin{aligned} (e_1^{(l)}(^{t}AXA),\ldots ,e_{s_{\rho }}^{(l)}(^{t}AXA)) = (e_1^{(l)}(X),\ldots ,e_{s_{\rho }}^{(l)}(X)) R(A) \end{aligned}$$

where \(e_i^{(l)}=e_i^{(l)}(X)\) are polynomials in components of X. We write

$$\begin{aligned} P_{\rho }(T,X)=\sum _{l=1}^{m_{\rho }}\sum _{i=1}^{s_{\rho }}f^{(l)}_{i}(T)e_{i}^{(l)}(X). \end{aligned}$$

Here \(f^{(l)}_{i}(T)\in {\mathcal P}^{{\pmb n}}(d)\). We have \(P_{\rho }(AT\,^{t}A,X)=P_{\rho }(T,\,^{t}AXA)\) and

$$\begin{aligned} \begin{pmatrix} f_1^{(l)}(AT\,^{t}A)\\ f_2^{(l)}(AT\,^{t}A) \\ \vdots \\ f_{s_{\rho }}^{(l)}(AT\,^{t}A) \end{pmatrix} = R(A) \begin{pmatrix} f_1^{(l)}(T)\\ f_2^{(l)}(T) \\ \vdots \\ f_{s_{\rho }}^{(l)}(T) \end{pmatrix}. \end{aligned}$$

Now let \(\mathfrak {W}\) be the vector space spanned over \({\mathbb C}\) by all \(f_i^{(l)}(T)\) with \(1\le i \le s_{\rho }\) and \(1\le l \le m_{\rho }\). Then polynomials \(F_1(T)\), ..., \(F_{s_{\rho }}(T)\in \mathfrak {W}\) satisfy the relation

$$\begin{aligned} \begin{pmatrix} F_1(AT^{t}A)\\ \vdots \\ F_{s_\rho }(AT^{t}A) \end{pmatrix} = R(A) \begin{pmatrix} F_1(T) \\ \vdots \\ F_{s_\rho }(T) \end{pmatrix} \end{aligned}$$

if and only if there are constants \(c_l\) \((1\le l \le m_{\rho })\) depending only on l such that

$$\begin{aligned} F_{i}(T)=\sum _{l=1}^{m_{\rho }}c_{l}f^{(l)}_{i}(T). \end{aligned}$$
(12)

This is easily proved by Schur’s lemma. Indeed, denote by f(T) the \(s_{\rho }m_{\rho }\) dimensional column vector such that \(f_i^{(l)}(T)\) (\(1\le i \le s_{\rho }\), \(1\le l\le m_{\rho }\)) is the \((i+(l-1)s_{\rho })\)-th component and by F(T) the \(s_{\rho }\) dimensional column vector whose i-th component is \(F_i(T)\). Then we have \(F(T)=Bf(T)\) for some \(s_{\rho }\times s_{\rho }m_{\rho }\) matrix B and \(F(AT^{t}A)=R(A)F(T)=R(A)Bf(T)\). On the other hand, we have

$$\begin{aligned} F(AT^{t}A)=Bf(AT^{t}A)=B\begin{pmatrix} R(A) &{} 0 &{} \cdots &{} 0 \\ 0 &{} R(A) &{} 0 &{} \vdots \\ \vdots &{} 0 &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} R(A) \end{pmatrix}f(T). \end{aligned}$$

So writing \(B=(B_{l})\) by blocks where \(B_{l}\) are \(s_{\rho }\times s_{\rho }\) matrices, we have \(R(A)B_{l}=B_{l}R(A)\), and this means that \(B_l=c_l1_{s_{\rho }}\) for some constant \(c_l\) by Schur’s lemma, so we have (12). In other words, let V be an abstract representation space of \(\rho \) and assume that the representation matrix of \(\rho \) with respect to a basis \(\{\omega _1, \ldots , \omega _{s_{\rho }}\}\) of V is R(A). Then regarding \({\mathbb C}[T]\) as scalars, by the projection \(c\in Hom_{GL({\pmb n},{\mathbb C})}(W(\rho ),V)\) such that \(c(e_i^{(l)}(X))=c_l\omega _i\), we have

$$\begin{aligned} c(P_{\rho }(T,X))=\sum _{i=1}^{s_{\rho }}F_{i}(T)\omega _i \end{aligned}$$

for some \(F_1(T)\), ..., \(F_{s_{\rho }}(T)\) written as in (12). This proves the assertion (iii). \(\square \)

Now we interpret Theorem 3.1 in terms of differential operators with the automorphic property. We define a \({{\mathbb C}[[X]]}\)-valued differential operator \({\mathfrak D}_{U}(X)\) on the space \(\text {Hol}(H_n,{\mathbb C})\) of holomorphic functions on \(H_n\) by

$$\begin{aligned} {\mathfrak D}_{U}(X)=G^{({\pmb n})}\left( \frac{{\partial }}{{\partial }Z},\ X\right) = \sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}P_{{\pmb \nu }}\left( \frac{{\partial }}{{\partial }Z}\right) X^{{\pmb \nu }}. \end{aligned}$$

We call this the generic differential operator with the automorphic property for the partition \({\pmb n}\). We denote by \(\text {Hol}(H_n,{\mathbb C})\) the vector space of all scalar valued holomorphic functions on \(H_n\) and by \(\text {Hol}(H_n,{{\mathbb C}[[X]]})\) the vector space of all \({{\mathbb C}[[X]]}\)-valued holomorphic functions given by

$$\begin{aligned} \text {Hol}(H_n,{{\mathbb C}[[X]]})= \left\{ \sum _{{\pmb \nu }\in {\mathcal N}_{0}^{{\pmb n}}}f_{{\pmb \nu }}(Z)X^{{\pmb \nu }};\ f_{{\pmb \nu }}(Z)\in \text {Hol}(H_n,{\mathbb C})\right\} . \end{aligned}$$

Then for any \(F \in \text {Hol}(H_n, {\mathbb C})\), we have \({\mathfrak D}_U(X)F(Z)\in \text {Hol}(H_n,{{\mathbb C}[[X]]})\). The following theorem is an immediate corollary of Theorem 3.1. In short, the next theorem claims that \({\mathfrak D}_U\) exhausts all the differential operators in question.

Theorem 3.2

Assume that k is an integer and \(d=2k\ge n\).

  1. (1)

    The operator \({\mathfrak D}_{U}\) satisfies Condition 2.1 for the initial weight k and the target weight \(det^k\otimes \rho _{U}\).

  2. (2)

    Let \((\rho ,V)\) be an irreducible polynomial representation of \(GL({\pmb n},{\mathbb C})\). Then the vector space \({\mathbb D}(k,det^k\rho )\) of all linear holomorphic V-valued differential operators \({\mathbb D}\) of constant coefficients which satisfy Condition 2.1 for the initial weight k and the target weight \(det^k\otimes \rho \) is linearly isomorphic to \(\mathrm{Hom}_{GL({\pmb n},{\mathbb C})}({{\mathbb C}[[X]]},V)\). More precisely, we have

    $$\begin{aligned} {\mathbb D}(k,det^k\rho )= \{c\circ {\mathfrak D}_{U}; c \in \mathrm{Hom}_{GL({\pmb n},{\mathbb C})}({{\mathbb C}[[X]]},V)\} \end{aligned}$$

    and \(\dim {\mathbb D}(k,det^k\rho )=m_{\rho }\), where \(m_{\rho }\) is the multiplicity of \(\rho \) in \(\rho _U\).

Remark 3.3

I was informed by Professor Siddhartha Sahi that the projection operators \(c\in \mathrm{Hom}({{\mathbb C}[[X]]},V)\) can be explicitly described. This is done by using an explicit expression of the center of the enveloping algebra of \(GL(n_p,{\mathbb C})\) obtained by a sort of Capelli identity. When \(r=2\), the irreducible components are their (different) simultaneous eigenspaces, and the eigenvalues are known by Howe and Umeda in [13], so an explicit projection operator is easily given. The actual description of the images by this method is not so simple since the images of the monomials \(X^{{\pmb \nu }}\) are not linearly independent in general and the relations are complicated.

Now we explain how to see which kinds of irreducible representations appear in \(\rho _{U}\) and what their multiplicities are. First, for \(p\ne q\) we consider the space \({\mathbb C}[X_{pq}]\) of polynomials in the components of the \(n_p\times n_q\) block \(X_{pq}\). The group \(GL(n_p,{\mathbb C})\times GL(n_q,{\mathbb C})\) acts on \({\mathbb C}[X_{pq}]\) by \(f(X_{pq})\rightarrow f(^{t}A_pX_{pq}A_{q})\). The irreducible decomposition of this space is well known [7, 35] and described as follows. Let \(\lambda \) be a dominant integral weight (or the Young diagram parameter)

$$\begin{aligned} \lambda =(\lambda _1,\lambda _2, \lambda _3,\ldots ) \end{aligned}$$

where the \(\lambda _i\) are non-negative integers such that \(\lambda _i\ge \lambda _{i+1}\ge 0\) and \(\lambda _i=0\) except for finitely many i. We denote by \(\mathrm{depth}(\lambda )\) the maximum number l such that \(\lambda _l>0\). If we fix a positive integer m, the set of \(\lambda \) with \(\mathrm{depth}(\lambda )\le m\) corresponds bijectively to the set of irreducible polynomial representations of \(GL(m,{\mathbb C})\). Since this representation depends on the choice of m, we denote by \(\rho _{\lambda ,m}\) the representation of \(GL(m,{\mathbb C})\) corresponding to \(\lambda \). Then we have

$$\begin{aligned} {\mathbb C}[X_{pq}]=\sum _{\mathrm{depth}(\lambda )\le min(n_p,n_q)} \rho _{\lambda ,n_p}\otimes \rho _{\lambda ,n_q}, \end{aligned}$$

where \(\rho _{\lambda ,n_p}\otimes \rho _{\lambda ,n_q}\) means the irreducible representation of \(GL(n_p,{\mathbb C})\times GL(n_q,{\mathbb C})\) realized by the tensor product. This is a well known classical result. (See [35] Chapter VII (7.10) or [7] p. 283 Theorem 5.6.7). In particular, if \(n_p=n_q\), then this is the tensor of the same representations \(\rho _{\lambda ,n_p}=\rho _{\lambda ,n_q}\). We denote by \({\mathbb C}[X]\) the space of polynomials in the components of X with \(X_{11}=X_{22}=\cdots =X_{rr}=0\). Then this is regarded as a tensor product

$$\begin{aligned} {\mathbb C}[X]=\otimes _{1\le p<q\le r}{\mathbb C}[X_{pq}]. \end{aligned}$$

So for each (p,q) with \(1\le p<q\le r\), take a dominant integral weight \(\lambda ^{(pq)}\) with \(\text {depth}(\lambda ^{(pq)})\le \min (n_p,n_q)\), and we consider a collection \(\Lambda =(\lambda ^{(pq)})_{1\le p<q \le r}\) of such weights. We denote by \({\mathcal R}\) the set of all such \(\Lambda \).

$$\begin{aligned} {\mathcal R}=\{\Lambda =(\lambda ^{(pq)})_{1\le p<q\le r}; \ \text {depth}(\lambda ^{(pq)})\le min(n_p,n_q)\}. \end{aligned}$$

When \(q<p\), we put \(\lambda ^{(pq)}=\lambda ^{(qp)}\).

Theorem 3.4

The space \({\mathbb C}[X]\) is isomorphic to the direct sum of spaces \(V(\Lambda )\) corresponding to \(\Lambda \in {\mathcal R}\), where we put

$$\begin{aligned} V(\Lambda )=\otimes _{p=1}^{r} \left( \otimes _{q=1, q\ne p}^{r}\rho _{\lambda ^{(pq)},n_p}\right) . \end{aligned}$$

Here each \(\left( \otimes _{q=1, q\ne p}^{r}\rho _{\lambda ^{(pq)},n_p}\right) \) is the tensor representation of \(GL(n_p,{\mathbb C})\) for a fixed \(n_p\) and \(V(\Lambda )\) is the representation of \(GL({\pmb n},{\mathbb C})\) realized by their tensor product.

Here of course the spaces \(\left( \otimes _{q=1,q\ne p}^{r}\rho _{\lambda ^{(pq)},n_p}\right) \) are not irreducible in general as representation spaces of \(GL(n_p,{\mathbb C})\). The decomposition of the tensor product representation into irreducible representations is given by the Littlewood–Richardson rule, and if the corresponding Young diagrams are given explicitly, the Young diagrams corresponding to the irreducible components of the tensor product can be easily calculated. (See [34] for example. We omit the details here.)

We give an example. Assume that \(n=6\), \(r=3\) and \(n_1=n_2=n_3=2\). Irreducible representations of GL(2) are generally given by \(det^{k}Sym(j)\), where Sym(j) is the j-th symmetric tensor representation. If we take a dominant integral weight (2, 0) for example, then this corresponds to the symmetric tensor representation Sym(2) of degree two and we have

$$\begin{aligned} Sym(2)\otimes Sym(2) = Sym(4)+\ det \otimes Sym(2)+\mathrm{det}^{2}. \end{aligned}$$

So if we take \(\lambda ^{(12)}=\lambda ^{(13)}=\lambda ^{(23)}=(2,0)\) for example, then the representation \((\det (A_1A_2A_3))^2\) appears as one of the irreducible components. But if we take \(\lambda ^{(12)}=\lambda ^{(13)}=\lambda ^{(23)}=(1,1)\), then this corresponds to the representation \(\det \) and the tensor is \(\det ^2\). So here also, \(\det (A_1A_2A_3)^2\) appears. We can see that there is no other representation which can produce \(\det ^2\), so the multiplicity of the representation \(\det (A_1A_2A_3)^{2}\) of \(GL(2)^{3}\) in \({\mathbb C}[X]\) is 2. In the same way, when \(n=6\) and \(n_1=n_2=n_3=2\), we can show that the multiplicity of \(\det (A_1A_2A_3)^{k}\) in \({\mathbb C}[X]\) is \([k/2]+1\) where [k/2] is the maximum integer which is not greater than k/2.
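
The decomposition \(Sym(2)\otimes Sym(2)=Sym(4)+\det \otimes Sym(2)+\det ^2\) used above can be checked on the level of characters, evaluating GL(2) characters at the eigenvalues (x, y); the following is a minimal SymPy sketch (ours).

```python
# Character check of Sym(2) (x) Sym(2) = Sym(4) + det.Sym(2) + det^2 for GL(2) (our sketch):
# the character of Sym(b) at eigenvalues (x, y) is x**b + x**(b-1)*y + ... + y**b.
import sympy as sp

x, y = sp.symbols('x y')

def sym_char(b):
    return sum(x**i * y**(b - i) for i in range(b + 1))

det_char = x * y

lhs = sp.expand(sym_char(2)**2)
rhs = sp.expand(sym_char(4) + det_char * sym_char(2) + det_char**2)
assert lhs == rhs
print("Sym(2) x Sym(2) = Sym(4) + det.Sym(2) + det^2")
```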

3.2 Explicit generating function in some special cases

Here we put \(d=2k\), where k is the initial weight. When \(r=2\), \(n_1=1\) and \(n_2=n-1\), our generating function (8) becomes extremely simple. We explain this here. First of all, the representation on the polynomials in the block \(X_{12}\), which is a vector of length \(n-1\), corresponds to the Young diagrams of depth 1, given by \((l,0,\ldots ,0)\) for positive integers l. This means that the corresponding representations are the tensor product of the symmetric tensor representation of \(GL(n-1)\) and the power representation of GL(1) of the same degree. So if we take Siegel modular forms of degree n of weight k, then the weight of \(Res_{H_{{\pmb n}}}({\mathbb D}F)\) is \(k+l\) for \(Z_{11}\in H_1\), and \(det^{k}Sym(l)\) for \(Z_{22}\in H_{n-1}\), where Sym(l) is the symmetric tensor representation of degree l of \(GL(n-1,{\mathbb C})\).

Notation being the same as before, we have \(X=\,^{t}X=(x_{ij})\) where \(x_{11}=0\), \(x_{ij}=0\) for \(2\le i,j \le n\). Then we have

$$\begin{aligned} TX=\begin{pmatrix} t_{12}x_{12}+\cdots +t_{1n}x_{1n} &{} t_{11}x_{12} &{} \cdots &{} t_{11}x_{1n} \\ t_{22}x_{12}+t_{23}x_{13}+\cdots + t_{2n}x_{1n} &{} t_{12}x_{12} &{} \cdots &{} t_{12}x_{1n} \\ \vdots &{} \vdots &{} &{} \vdots \\ t_{2n}x_{12}+t_{3n}x_{13}+\cdots + t_{nn}x_{1n} &{} t_{1n}x_{12} &{} \cdots &{} t_{1n}x_{1n} \end{pmatrix}. \end{aligned}$$

We denote by \(e_{ij}\) the \(n \times n\) matrix whose (ij) component is one and the other components are 0 (so \({\pmb e}_{ij}=e_{ij}+e_{ji}\)). Multiplying the matrices \(1_n-(x_{1i}/x_{12})e_{2i}\) from the right and \(1_n+(x_{1i}/x_{12})e_{2i}=(1_n-(x_{1i}/x_{12})e_{2i})^{-1}\) from the left for all \(i=3\), ..., n, we see that TX is conjugate to

$$\begin{aligned} \begin{pmatrix} t_{12}x_{12}+t_{13}x_{13}+\cdots + t_{1n}x_{1n} &{} t_{11}x_{12} &{}\quad 0, &{} \cdots , &{}\quad 0\\ a_2+\frac{x_{13}}{x_{12}}a_3+\cdots +\frac{x_{1n}}{x_{12}}a_{n} &{} t_{12}x_{12}+t_{13}x_{13}+\cdots + t_{1n}x_{1n} &{}\quad 0, &{} \cdots , &{} \quad 0 \\ * &{} * &{}\quad 0, &{} \cdots , &{}\quad 0 \\ * &{} * &{}\quad 0, &{} \cdots , &{}\quad 0 \end{pmatrix} \end{aligned}$$

where \(a_{i}=t_{i2}x_{12}+\cdots +t_{in}x_{1n}\) (noting \(t_{ij}=t_{ji}\)). So the characteristic polynomial of TX is given by

$$\begin{aligned} \lambda ^{n}-\sigma _1 \lambda ^{n-1}+\sigma _2 \lambda ^{n-2} \end{aligned}$$

where

$$\begin{aligned} \sigma _1&= 2(t_{12}x_{12}+t_{13}x_{13}+\cdots + t_{1n}x_{1n}) \\ \sigma _2&= \sum _{i=2}^{n}(t_{1i}^2-t_{11}t_{ii})x_{1i}^{2} +2\sum _{2\le i<j \le n}x_{1i}x_{1j}(t_{1i}t_{1j}-t_{11}t_{ij}). \end{aligned}$$

So taking \(\sigma _i=0\) for \(i\ge 3\), the generating function of the higher spherical polynomials for this partition is given by

$$\begin{aligned} G^{({\pmb n})}(T,X)&= \dfrac{1}{((1-\sigma _1/2)^2-\sigma _2)^{(d-2)/2}} \nonumber \\&= \dfrac{1}{\left( 1-2\sum _{2\le i \le n}t_{1i}x_{1i}+t_{11} (\sum _{2\le i ,j\le n}t_{ij}x_{1i}x_{1j})\right) ^{(d-2)/2}}. \end{aligned}$$
(13)

For example, expanding this as a formal power series in the \(x_{1i}\), the degree two term is given by

$$\begin{aligned} \frac{d-2}{2}\sum _{2\le i\le j\le n}(2-\delta _{ij})(dt_{1i}t_{1j}-t_{11}t_{ij})x_{1i}x_{1j}. \end{aligned}$$

So \(P(T)=\bigl ((2-\delta _{ij})(dt_{1i}t_{1j}-t_{11}t_{ij}))_{2\le i \le j\le n}\) gives a differential operator \({\mathbb D}=P(\frac{{\partial }}{{\partial }Z})\) from the initial weight d/2 to the target weight \((d/2+2)\) for \(H_1\) and \(det^{d/2}Sym(2)\) for \(H_{n-1}\).
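
The degree two computation above can be reproduced mechanically by truncating the binomial expansion of (13); the following SymPy sketch (ours, with \(n=3\) so that the indices i, j run over \(\{2,3\}\)) recovers exactly the polynomials \((2-\delta _{ij})(dt_{1i}t_{1j}-t_{11}t_{ij})\) up to the factor \((d-2)/2\).

```python
# Degree-two part of the generating function (13) for n = 3 (our sketch).
import sympy as sp

d = sp.Symbol('d')
t11, t12, t13, t22, t23, t33 = sp.symbols('t11 t12 t13 t22 t23 t33')
x12, x13 = sp.symbols('x12 x13')
eps = sp.Symbol('eps')                         # bookkeeping parameter marking the degree in x

u = -2*(t12*x12 + t13*x13)*eps + t11*(t22*x12**2 + 2*t23*x12*x13 + t33*x13**2)*eps**2
c = (d - 2) / 2
G_trunc = 1 - c*u + c*(c + 1)/2 * u**2         # binomial expansion of (1 + u)**(-c) up to u**2
deg2 = sp.expand(G_trunc).coeff(eps, 2)        # total degree two in the x_{1i}

assert sp.expand(deg2.coeff(x12, 2) - c*(d*t12**2 - t11*t22)) == 0
assert sp.expand(deg2.coeff(x13, 2) - c*(d*t13**2 - t11*t33)) == 0
assert sp.expand(deg2.coeff(x12, 1).coeff(x13, 1) - 2*c*(d*t12*t13 - t11*t23)) == 0
print("degree-two coefficients of (13) match P(T)")
```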

Actually, this generating function is a part of the results in [14]. A more general case has been treated there: for any partition \({\pmb n}=(n_1,n_2)\) with \(n=n_1+n_2\) and the target weight \(det^k\rho _{l,n_1}\otimes det^k\rho _{l,n_2}\), where \(\rho _{l,n_i}\) are the symmetric tensor representations of \(GL(n_i,{\mathbb C})\), the generating function of our polynomials is given by

$$\begin{aligned} \frac{1}{\bigl (1-2\,uT_{12}\,^{t}v+(uT_{11}\,^{t}u)(vT_{22}\,^{t}v)\bigr )^{(d-2)/2}}. \end{aligned}$$
(14)

Here we are taking the representation space of \(\rho _{l,n_1}\otimes \rho _{l,n_2}\) as the space of polynomials in \(u=(u_1,\ldots ,u_{n_1})\) and in \(v=(v_1,\ldots ,v_{n_2})\) which are homogeneous of degree l in both u and v. (In case \(n_1=1\), \(n_2=n-1\), this is the same as (13) if we put \(x_{1i}=u_1v_i\) for \(2\le i\).) In other words, expanding this generating function as a series in u and v, and replacing each \(t_{ij}\) by \(\frac{1+\delta _{ij}}{2}\frac{{\partial }}{{\partial }z_{ij}}\), the homogeneous part of degree l in both u and v gives the differential operator of the target weight \(det^k\rho _{l,n_1}\otimes det^k\rho _{l,n_2}\). See also [4] for an alternative proof of (14). See also [30] II for (13).

The generating function for the case \(n=4\), \(n_1=n_2=2\) and the target weight \(det^{k+l}\otimes \ det^{k+l}\) has also been given in [14]. We write the \(4 \times 4\) matrix T in \(2\times 2\) blocks \(T_{ij}\) as \(T=\begin{pmatrix} T_{11} & T_{12} \\ ^{t}T_{12} & T_{22} \end{pmatrix}\). The generating function in this case is given by

$$\begin{aligned} \frac{1}{R(T,u)^{(d-5)/2}\sqrt{\Delta _0(T,u)^2-4\det (T)u^2}} \end{aligned}$$

where

$$\begin{aligned} \Delta _0(T,u)&= 1-\det (T_{12})u+\det (T_{11}T_{22})u^2, \\ R(T,u)&=(\Delta _0+\sqrt{\Delta _0^2-4\det (T)u^2})/2. \end{aligned}$$

Here u is a dummy variable, and the coefficient of \(u^l\) corresponds to the target weight \(det^{k+l}\otimes det^{k+l}\). It is not clear if there is any easy way to derive this from (8). We also have several other explicit algebraic expressions for \(G^{({\pmb n})}(T,X)\) in the case when \(r=n\) and all \(n_i=1\), but we omit them here. See [22] for these.
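
Since this expression is somewhat involved, a symbolic expansion in u may be helpful. The sketch below (again assuming SymPy) expands the generating function to first order in u; the coefficient of u should come out as \((d-3)\det (T_{12})/2\), the polynomial attached to the target weight \(det^{k+1}\otimes det^{k+1}\).

```python
# Sketch (assuming SymPy): expand the (2,2) generating function above in the
# dummy variable u.  T is the symmetric 4x4 matrix of variables, written in
# 2x2 blocks as in the text.
import sympy as sp

d, u = sp.symbols('d u')
t = {(i, j): sp.Symbol(f't{min(i, j) + 1}{max(i, j) + 1}') for i in range(4) for j in range(4)}
T = sp.Matrix(4, 4, lambda i, j: t[(i, j)])
T11, T12, T22 = T[:2, :2], T[:2, 2:], T[2:, 2:]

Delta0 = 1 - T12.det()*u + T11.det()*T22.det()*u**2
root = sp.sqrt(Delta0**2 - 4*T.det()*u**2)
R = (Delta0 + root)/2
G = 1/(R**((d - 5)/2) * root)

ser = sp.expand(G.series(u, 0, 2).removeO())
print(sp.factor(ser.coeff(u, 1)))   # expected: (d - 3)*det(T12)/2
```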

Applications to Jacobi forms will be given later.

4 The second algorithm by monomial basis

In this section, we give a second way to describe \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) concretely. Theorem 3.1 in the last section is rather theoretical in nature, while the method in this section is much more practical in most cases.

4.1 Monomial basis for the partition

We give another canonical basis of \({\mathcal P}^{{\pmb n}}(d)\), different from the descending basis. This is a generalization of (but not a part of) the monomial basis defined in [22] for \({\pmb n}=(1,\ldots ,1)\). Our method in this section uses this new basis. First we review the monomial basis of \({\mathcal P}(d)\) defined in [22]. For any \({\pmb \nu }\in {\mathcal N}\), we write

$$\begin{aligned} T^{{\pmb \nu }}=\prod _{1\le i,j\le n}t_{ij}^{\nu _{ij}/2}. \end{aligned}$$

Since \(t_{ij}=t_{ji}\), we have \(t_{ij}^{\nu _{ij}/2}t_{ji}^{\nu _{ji}/2}=t_{ij}^{\nu _{ij}}\) if \(i\ne j\).

Theorem 4.1

[22, Theorem 1] Unless d is an even non-positive integer, for any \({\pmb \nu }=(\nu _{ij}) \in {\mathcal N}_0\) there exists a unique polynomial \(P^{M}_{{\pmb \nu }}(T)\in {\mathcal P}(d)\) such that

$$\begin{aligned} P_{{\pmb \nu }}^{M}(T)=T^{{\pmb \nu }}+Q(T), \end{aligned}$$

where \(T^{{\pmb \nu }}=\prod _{i<j}t_{ij}^{\nu _{ij}}\) and Q(T) is a polynomial in \(t_{ij}\) (\(1\le i,j\le n\)) which vanishes if we put \(t_{11}=t_{22}=\cdots = t_{nn}=0\).

This basis is called the monomial basis in [22].

In order to generalize this to general partitions \({\pmb n}\), we need a natural metric on the space \({\mathcal P}^{{\pmb n}}(d)\). We have already defined in [22] an inner metric \((P,Q)_d\) for P, \(Q \in {\mathbb C}[{T}]\) for \(Re(d)>n-1\) by

$$\begin{aligned} (P,Q)_d=c_n(d)\int _{T>0}e^{-tr(T)/2}P(T)\overline{Q(T)}\det (T)^{(d-n-1)/2}dT \end{aligned}$$

where \(\overline{Q(T)}\) is the complex conjugate of Q(T), \(dT=\prod _{i\le j}dt_{ij}\), and

$$\begin{aligned} c_{n}(d)=\pi ^{-n(n-1)/4}\prod _{i=0}^{n-1}\Gamma \left( \frac{d-i}{2}\right) ^{-1}. \end{aligned}$$

This is a positive definite metric for any real \(d>n-1\). It was shown in [22] that this is a polynomial in d, hence extends holomorphically to all \(d \in {\mathbb C}\). It is non-degenerate unless d is an integer such that \(d\le n-1\), but even when it is non-degenerate, it is not positive definite for general d. Here we give a slightly different approach. For polynomials P(T), \(Q(T)\in {\mathbb C}[{T}]\), we define a sesquilinear form \((*,*)\) over \({\mathbb C}\) by

$$\begin{aligned} (P(T),Q(T))=(P((D_{ij}))\overline{Q})(0). \end{aligned}$$
(15)

Here \(P((D_{ij}))\) means that we replace \(t_{ij}\) by \(D_{ij}\) in the polynomial P(T). This is well defined since the operators \(D_{ij}\) commute with each other. For general polynomials P and Q, we have \((P,Q)\ne (P,Q)_d\), but if we consider only the case P, \(Q \in {\mathcal P}(d)\), these are equal.

To see this, first we assume that d is an integer with \(d\ge n\). Take an \(n \times d\) variable matrix Y. Then for \(P\in {\mathcal P}_n(d)\), the polynomial \(\widetilde{P}\) defined by \(\widetilde{P}(Y)=P(Y^{t}Y)\) is a linear combination of elements of the space \(({\mathcal H}_{a_1}\otimes \cdots \otimes {\mathcal H}_{a_n})^{O(d)}\), where \({\mathcal H}_{a_i}\) is the space of harmonic polynomials of degree \(a_i\) in the i-th row \(y_i\) of Y, and the superscript O(d) means the invariant part under the diagonal action of the orthogonal group O(d). So \(\widetilde{P}(Y)\) is a certain linear combination of polynomials \(f_1(y_1)f_2(y_2)\cdots f_n(y_n)\), where each \(f_i(y_i)\) is a harmonic polynomial in \(y_i\) independent of the other \(y_j\). We write \(y_i=(y_{i1},\ldots ,y_{id})\) and

$$\begin{aligned} \frac{{\partial }}{{\partial }Y}=\left( \frac{{\partial }}{{\partial }y_{i\nu }}\right) ,&\frac{{\partial }}{{\partial }y_i}=\left( \frac{{\partial }}{{\partial }y_{i1}},\ldots ,\frac{{\partial }}{{\partial }y_{id}}\right) ,&\Delta _{ij}(Y)=\sum _{\nu =1}^{d}\frac{{\partial }^2}{{\partial }y_{i\nu }{\partial }y_{j\nu }}. \end{aligned}$$

Then for any harmonic polynomial \(g(y_i)\), by virtue of Kashiwara and Vergne [27] p. 19 (5.5), we have

$$\begin{aligned} \left( f_i\left( \frac{{\partial }}{{\partial }y_{i}}\right) g(y_i)\right) \biggl |_{y_i=0} =(2\pi )^{-d/2}\int _{{\mathbb R}^{d}}e^{-y_i\,^{t}y_i/2} f_i(y_i)g(y_i)dy_i, \end{aligned}$$
(16)

We have \((D_{ij}Q)(Y\,^{t}Y)=\Delta _{ij}(Y)\widetilde{Q}(Y)\) by definition, where \(\widetilde{Q}(Y)=Q(Y\,^{t}Y)\) for \(Q\in {\mathcal P}(d)\). Then (16) means

$$\begin{aligned} P((D_{ij}))\overline{Q(Y^{t}Y)}\biggl |_{Y=0}&= \widetilde{P}\left( \frac{{\partial }}{{\partial }Y}\right) \overline{\widetilde{Q}(Y)}\biggl |_{Y=0} \\&=(2\pi )^{-nd/2}\int _{{\mathbb R}^{nd}}e^{-Tr(Y^{t}Y/2)} \widetilde{P}(Y)\overline{\widetilde{Q}(Y)}dY, \end{aligned}$$

which is nothing but \((P,Q)_{d}\) in [22].

More generally, we have the following proposition.

Proposition 4.2

  1. (i)

    For any complex number \(d \in {\mathbb C}\) and P, \(Q \in {\mathcal P}(d)\), we have

    $$\begin{aligned} (P,Q)=(P,Q)_d. \end{aligned}$$
  2. (ii)

    For any polynomials P(T) and Q(T), and any matrix \(A \in GL(n,{\mathbb C})\), we have

    $$\begin{aligned} (P(AT\,^{t}A),Q(T))=(P(T),Q(^{t}ATA)). \end{aligned}$$

Proof

It is known from [22] that \((P,Q)_d\) is a polynomial in d, and it is obvious by definition that \((P,Q)\) is also a polynomial in d. Since they are equal for integers \(d>n-1\), they are equal for any d. The assertion (ii) follows directly from Lemma 2.5. \(\square \)

For a complex number \(d\not \in {\mathbb Z}_{\le n-1}\), we have an easy alternative proof of the above proposition using duality of the monomial and descending basis. We omit the details.

Now we will give a basis of \({\mathcal P}^{{\pmb n}}(d)\) similar to the monomial basis of \({\mathcal P}(d)\) and different from the descending basis. We write the block decomposition of T for the partition \({\pmb n}\) as \(T=\,^{t}T=(T_{pq})\), where \(T_{pq}\) is an \(n_p\times n_q\) matrix.

Theorem 4.3

We fix a partition \({\pmb n}=(n_1,\ldots ,n_r)\) of n with \(r\ge 2\). Assume that d is not an integer such that \(d<n\) and that the metric \((P,Q)\) is positive definite (e.g. d is real and \(d>n-1\)). Then for each index \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\) there exists a unique polynomial \(P_{{\pmb \nu }}^{M,{\pmb n}}(T)\in {\mathcal P}^{{\pmb n}}(d)\) such that

$$\begin{aligned} P_{{\pmb \nu }}^{M,{\pmb n}}(T)=T^{{\pmb \nu }}+Q(T), \end{aligned}$$

where \(Q(T)|_{T_{11}=T_{22}=\cdots =T_{rr}=0}=0\). Such polynomials \(P_{{\pmb \nu }}^{M,{\pmb n}}(T)\) have multidegree \({\pmb \nu }\cdot {\pmb 1}\) and form a basis of \({\mathcal P}^{{\pmb n}}(d)\). We also have

$$\begin{aligned} (P_{{\pmb \nu }}^{M,{\pmb n}}(T),P_{{\pmb \mu }}^{D}(T))=\delta _{{\pmb \nu }{\pmb \mu }} \end{aligned}$$

for any \({\pmb \nu }\), \({\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}={{\mathcal N}_0^{(n_1,\ldots ,n_r)}}\), where \(\delta _{{\pmb \nu }{\pmb \mu }}\) is the Kronecker symbol.

We call this basis a monomial basis for the partition \({\pmb n}\). This notion depends on \({\pmb n}\). In general, this is not part of the monomial basis defined in [22], since Q(T) in the above theorem might contain a term \(T^{{\pmb \mu }}\) with \({\pmb \mu }\in {\mathcal N}_0\). The monomial basis defined in [22] or in Theorem 4.1 is the monomial basis corresponding to the partition \({\pmb n}=(1,1,\ldots ,1)\).

In order to prove Theorem 4.3, we first prepare a lemma. We fix a multidegree \(\mathbf{a}\). We denote by \(W_2\) the subspace of \({\mathcal P}_{\mathbf{a}}(d)\) spanned by polynomials \(P_{{\pmb \nu }}^{M}(T)\) for all \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\), where \(P_{{\pmb \nu }}^{M}(T)=P_{{\pmb \nu }}^{M,(1,\ldots ,1)}(T)\) are the monomial bases corresponding to the partition \({\pmb n}=(1,\ldots ,1)\). Since \(P_{{\pmb \nu }}^{M}(T)\) are linearly independent, we have

$$\begin{aligned} \dim W_2=N_{0}^{{\pmb n}}(\mathbf{a})=\dim {\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d), \end{aligned}$$

but \(P_{{\pmb \nu }}^{M}(T)\) are not necessarily elements of \({\mathcal P}^{{\pmb n}}(d)\), so \(W_2\) is not equal to \({\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)\) in general.

We denote by \(W_1\) the subspace of \({\mathcal P}_{\mathbf{a}}(d)\) spanned by \(P_{{\pmb \nu }}^{M}(T)\) such that \({\pmb \nu }\in {\mathcal N}_0(\mathbf{a})\) but \({\pmb \nu }\not \in {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\). Since \(\{P_{{\pmb \nu }}^{M}(T);{\pmb \nu }\in {\mathcal N}_0(\mathbf{a})\}\) is a basis of \(P_{\mathbf{a}}(d)\), we have \({\mathcal P}_{\mathbf{a}}(d)=W_1\oplus W_2\) (a direct sum as modules.) Denote by \(W_1^{\perp }\) the orthogonal complement of \(W_1\) in \({\mathcal P}_{\mathbf{a}}(d)\). Since the monomial basis is the dual basis of the descending basis, and \({\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)\) is spanned by \(P_{{\pmb \nu }}^{D}(T)\) with \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\), we have

$$\begin{aligned} {\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)=W_1^{\perp }. \end{aligned}$$
(17)

Through the natural embedding of \({\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\) in \({\mathcal P}_{\mathbf{a}}(d)=W_1\oplus W_2\), we define the natural projection pr from \({\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\) to \(W_2\).

Lemma 4.4

If the metric space \({\mathcal P}(d)\) has no singular vector, in particular if the inner metric \((*,*)\) is positive definite, the projection map pr gives a linear isomorphism \({\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\cong W_2\) (though \({\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\ne W_2\) in general).

Proof

For \(P=P_1+P_2 \in {\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\subset {\mathcal P}_{\mathbf{a}}(d)=W_1+W_2\) with \(P_i \in W_i\), assume that \(P_2=0\). Then we have \((P,P)=(P,P_1)\), and by (17) we have \((P,P_1)=0\). So if there is no singular vector, we have \(P=0\). This means that pr is injective. Since \(\dim {\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)=N_0^{{\pmb n}}(\mathbf{a}) =\dim W_2\), the map pr is surjective. This proves the lemma. \(\square \)

Proof of Theorem 4.3

Take \({\pmb \nu }\in {\mathcal N}^{{\pmb n}}_0(\mathbf{a})\). Then since \(P_{{\pmb \nu }}^{M}(T)\in W_2\), there exists the unique \(P \in {\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d)\) such that \(pr(P)=P_{{\pmb \nu }}^{M}(T)\). This means that \(P(T)=P_{{\pmb \nu }}^{M}(T) +R(T)\) for some \(R(T) \in W_1\). We may write \(R(T)=\sum _{{\pmb \mu }}c_{{\pmb \mu }}P_{{\pmb \mu }}^{M}(T)\) where \({\pmb \mu }\) runs over \({\mathcal N}_0(\mathbf{a})\) not in \({\mathcal N}_0^{{\pmb n}}(\mathbf{a})\). By definition, for each \({\pmb \mu }\), we have \(P_{{\pmb \mu }}^M(T)=T^{{\pmb \mu }}+Q_{{\pmb \mu }}(T)\) where \(Q_{{\pmb \mu }}(T)\) vanishes under the restriction to all \(t_{ii}=0\). Since \({\pmb \mu }\not \in {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\), we have \(\mu _{ij}\ne 0\) for some \((i,j)\in {I({\pmb n})}\). So we have \(T^{{\pmb \mu }}|_{T_{11}=T_{22}=\cdots =T_{rr}=0}=0\), and hence we have \(R(T)|_{T_{11}=T_{22}=\cdots =T_{rr}=0}=0\). Besides, by definition we have \(P_{{\pmb \nu }}^{M}(T)=T^{{\pmb \nu }}+Q_{{\pmb \nu }}(T)\) where \(Q_{{\pmb \nu }}(T)\) vanishes for all \(t_{ii}=0\). By our construction, P(T) is of multidegree \({\pmb \nu }\cdot {\pmb 1}\). So if we define \(P^{M,{\pmb n}}_{{\pmb \nu }}(T)\) by this P, then this satisfies the desired property of Theorem 4.3. Now it is obvious that \(\{P^{M,{\pmb n}}_{{\pmb \nu }};{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\}\) is a basis of \({\mathcal P}^{{\pmb n}}(d)\) since \(T^{{\pmb \nu }}\) are linearly independent and \(\dim {\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)=N_0^{{\pmb n}}(\mathbf{a})\). The uniqueness is also clear since if we write any element in \({\mathcal P}^{{\pmb n}}_{\mathbf{a}}(d)\) as a linear combination of \(P^{M,{\pmb n}}_{{\pmb \nu }}(T)\) (\({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\)), then the coefficients are determined only by the coefficients of \(T^{{\pmb \nu }}\). The duality to the descending basis is obvious by (17), since we have \(P_{\nu }^{M,{\pmb n}}(T) \in P_{\nu }^{M}(T)+W_1\). \(\square \)

Remark 4.5

In Theorem 4.3, we assumed that d is not an integer such that \(d<n-1\), but if we regard d as a variable, then we can always define a monomial basis for a partition \({\pmb n}\) whose coefficients are rational functions of d. This can be seen as follows. We denote by \({\mathcal N}_0(\mathbf{a})\backslash {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\) the set of elements of \({\mathcal N}_0(\mathbf{a})\) that do not belong to \({\mathcal N}_0^{{\pmb n}}(\mathbf{a})\). The condition that

$$\begin{aligned} P_{{\pmb \nu }}^{M}(T)+\sum _{{\pmb \mu }\in {\mathcal N}_0(\mathbf{a})\backslash {\mathcal N}_0^{{\pmb n}}(\mathbf{a})}c_{{\pmb \mu }}P_{{\pmb \mu }}^{M}(T) \in {\mathcal P}_{\mathbf{a}}^{{\pmb n}}(d) \end{aligned}$$

is equivalent to the condition that this is orthogonal to \(P_{{\pmb \kappa }}^{M}(T)\) for all \({\pmb \kappa }\in {\mathcal N}_0(\mathbf{a}) \backslash {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\). So we consider the following system of linear equations in the unknowns \(c_{{\pmb \mu }}\), with one equation for each \({\pmb \kappa }\in {\mathcal N}_0(\mathbf{a})\backslash {\mathcal N}_0^{{\pmb n}}(\mathbf{a})\).

$$\begin{aligned} \sum _{{\pmb \mu }\in {\mathcal N}_0(\mathbf{a})\backslash {\mathcal N}_0^{{\pmb n}}(\mathbf{a})}c_{{\pmb \mu }}(P_{{\pmb \kappa }}^{M}(T),P_{{\pmb \mu }}^{M}(T)) =-(P_{{\pmb \kappa }}^{M}(T),P_{{\pmb \nu }}^{M}(T)). \end{aligned}$$
(18)

When \(d>n-1\), the Gram matrix \(((P_{{\pmb \kappa }}^{M}(T),P_{{\pmb \mu }}^{M}(T)))_{{\pmb \kappa },{\pmb \mu }\in {\mathcal N}_0(\mathbf{a}), \not \in {\mathcal N}_0^{{\pmb n}}(\mathbf{a})}\) is invertible, so the system has a unique solution when d is regarded as a variable, and since the entries \((P_{{\pmb \kappa }}^{M}(T),P_{{\pmb \mu }}^M(T))\) are polynomials in d, the solutions \(c_{{\pmb \mu }}\) are rational functions of d.
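
The following minimal sketch (assuming SymPy) only illustrates the shape of this computation; the Gram matrix and right-hand side below are hypothetical placeholders, not the pairings of any actual indices, but they show how the solutions \(c_{{\pmb \mu }}\) come out as rational functions of d.

```python
# Minimal sketch (assuming SymPy) of solving the linear system (18).
# The Gram matrix entries here are hypothetical placeholders; in practice they
# are the pairings (P_kappa^M, P_mu^M), which are polynomials in d.
import sympy as sp

d = sp.symbols('d')
gram = sp.Matrix([[d*(d + 2), 2*d],
                  [2*d,       d*(d + 2)]])   # hypothetical Gram matrix
rhs = sp.Matrix([-2*d, -d])                  # hypothetical right-hand side of (18)
c = gram.LUsolve(rhs).applyfunc(sp.simplify) # coefficients c_mu
print(c)                                     # entries are rational functions of d
```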

4.2 Practical algorithm to calculate \({\mathcal P}_{\rho }^{{\pmb n}}(d)\)

Now we explain a practical algorithm to write down vectors \(P_{\rho }(T)\in {\mathcal P}_{\rho }^{{\pmb n}}(d)\). First we explain how to calculate the monomial basis for a partition explicitly. We have already seen that

$$\begin{aligned} P_{{\pmb \nu }}^{M,{\pmb n}}(T)=P_{{\pmb \nu }}^{M}(T)+\sum _{{\pmb \mu }\in {\mathcal N}_0(\mathbf{a})\backslash {\mathcal N}_0^{{\pmb n}}(\mathbf{a})}c_{{\pmb \mu }}P_{{\pmb \mu }}^{M}(T) \end{aligned}$$

where \(c_{{\pmb \mu }}\) are the solutions of (18). The equation (18) involves the monomial basis for \({\pmb n}=(1,\ldots ,1)\) and the inner products, so we must explain how to obtain these concretely. The formula for the monomial basis has been given explicitly in [22] as follows. For any vector \(v=(v_1,\ldots ,v_n)\in ({\mathbb Z}_{\ge 0})^n\), we put \(\delta (T)^{v}=\prod _{i=1}^{n}t_{ii}^{v_i}\). For any \(P\in {\mathcal P}_{\mathbf{a}}(d)\) and for each (i, j) with \(1 \le i,j\le n\), we define an operator \(R_{ij}(\mathbf{a})\) by

$$\begin{aligned} R_{ij}(\mathbf{a})P=\delta (T)^{\mathbf{a}+\mathbf{e}_i+\mathbf{e}_j-(2-d)\mathbf{1}/2} D_{ij}\delta (T)^{(2-d)\mathbf{1}/2-\mathbf{a}}P(T). \end{aligned}$$

Here \(\mathbf{e}_k\) is the unit vector whose k-th component is 1 and whose other components are 0. We can define \(R_{ij}\) as an element of an algebra of operators on \({\mathbb C}[T]\) which acts as \(R_{ij}(\mathbf{a})\) on polynomials of multidegree \(\mathbf{a}\). Then \(R_{ij}\) maps \({\mathcal P}_{\mathbf{a}}(d)\) to \({\mathcal P}_{\mathbf{a}+\mathbf{e}_i+\mathbf{e}_j}(d)\). One can show that the actions of the \(R_{ij}\) on \({\mathcal P}(d)\) commute for all (i, j). Hence, for any \({\pmb \nu }=(\nu _{ij}) \in {\mathcal N}_0\), the following operator is well defined.

$$\begin{aligned} \mathbf{R}^{{\pmb \nu }}=\prod _{i<j}R_{ij}^{\nu _{ij}}. \end{aligned}$$

By a result in [22], we have

$$\begin{aligned} P_{{\pmb \nu }}^{M}(T)=\frac{1}{\epsilon _{2\mathbf{a}}(d-2)}{} \mathbf{R}^{{\pmb \nu }}(1), \quad (d \in {\mathbb C}, d\not \in 2{\mathbb Z}_{\le 1}), \end{aligned}$$

where 1 is the constant function taking the value 1 and

$$\begin{aligned} \epsilon _{2\mathbf{a}}(d-2)=\prod _{i=1}^{n}d(d+2)\cdots (d+2a_i-2). \end{aligned}$$

Now the numbers \(c_{{\pmb \mu }}\) in (18) can be calculated once we know \( (P_{{\pmb \kappa }}^M(T),P_{{\pmb \mu }}^M(T))\) for \({\pmb \kappa }\), \({\pmb \mu }\in {\mathcal N}_0(\mathbf{a})\). The latter is easily calculated as follows. If we write \(P_{{\pmb \kappa }}^M(T)=T^{{\pmb \kappa }}+Q(T)\), then by definition any monomial appearing in Q(T) contains some \(t_{ii}\). We also have \(D_{ii}P_{{\pmb \mu }}^{M}(T)=0\) by definition for any i with \(1\le i \le n\). So if we replace T by \((D_{ij})_{1\le i,j \le n}\) in \(P_{{\pmb \kappa }}^M(T)\), then \(Q((D_{ij}))P_{{\pmb \mu }}^{M}(T)=0\). So for \({\pmb \kappa }\), \({\pmb \mu }\in {\mathcal N}_0(\mathbf{a})\), we have

$$\begin{aligned} (P_{{\pmb \kappa }}^M(T),P_{{\pmb \mu }}^M(T))={\mathcal D}^{{\pmb \kappa }}P_{{\pmb \mu }}^{M}(T)= \biggl (\prod _{1\le i< j\le n}D_{ij}^{\kappa _{ij}}\biggr )P_{{\pmb \mu }}^{M}(T). \end{aligned}$$

Comparing multidegrees, it is obvious that the final expression is a constant. So we have given a concrete way to calculate \(P_{{\pmb \nu }}^{M,{\pmb n}}(T)\) explicitly: starting from the constant function 1, we repeatedly multiply by certain given rational functions and differentiate.

Our final aim is to give \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) and we explain this now. We fix an irreducible polynomial representation \((\rho ,V)\) of \(GL({\pmb n},{\mathbb C})\). We fix a basis \(\{e_1,\ldots , e_l\}\) of V where \(l=\dim \rho \) and define the representation matrix R(A) of \(\rho \) by

$$\begin{aligned} \begin{pmatrix} \rho (A)e_1 \\ \vdots \\ \rho (A)e_l \end{pmatrix} =R({\,^{t}A})\begin{pmatrix} e_1 \\ \vdots \\ e_l \end{pmatrix}. \end{aligned}$$

We assume that \({\mathcal P}_{\rho }(d)\ne \{0\}\). Then by Theorem 3.1, such a representation \(\rho \) is realized as a subrepresentation of \(\rho _{U}\) acting on \({\mathbb C}[X]\). We denote by \({\mathcal V}(\rho )\) a \({\mathbb C}\) linear space of all vectors \(f(T)=(f^{(s)}(T))_{1\le s\le l}\) of polynomials \(f^{(s)}(T)\) in \(t_{ij}\) for all \((i,j)\not \in I({\pmb n})\) such that

$$\begin{aligned} f(AT{\,^{t}A})=R(A)f(T). \end{aligned}$$

Assume that the multiplicity of \(\rho \) in \(\rho _U\) is \(m_{\rho }\). Then we have \(\dim {\mathcal V}(\rho )=m_{\rho }\). Here we may write

$$\begin{aligned} f^{(s)}(T)=\sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}c_{{\pmb \nu },s}T^{{\pmb \nu }} \qquad \qquad (1\le s\le l), \end{aligned}$$

where \(c_{{\pmb \nu },s}=0\) except for finitely many \({\pmb \nu }\). Then we put

$$\begin{aligned} P_f^{(s)}(T)=\sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}c_{{\pmb \nu },s}P_{{\pmb \nu }}^{M,{\pmb n}}(T) \end{aligned}$$

and

$$\begin{aligned} P_f(T)=(P_f^{(s)}(T))_{1\le s\le l}. \end{aligned}$$

Theorem 4.6

Assumption and notation being as above, for any \(f \in {\mathcal V}(\rho )\), we have \(P_{f}(T)\in {\mathcal P}_{\rho }^{{\pmb n}}(d)\). Any element of \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) is obtained in this way from some \(f(T)\in {\mathcal V}(\rho )\). In particular, we have \(\dim {\mathcal P}_{\rho }^{{\pmb n}}(d)=\dim {\mathcal V}(\rho )=m_{\rho }\).

By Theorem 4.6 and 2.2 (a result in [14]), when \(d=2k\ge n\), the differential operator \(P_f(\frac{{\partial }}{{\partial }Z})\) satisfies Condition 2.1 for the initial weight k and the target weight \(det^k\rho \), and all such operators are obtained in this way.

Proof

As before, we write the block decomposition of T as \(T=(T_{pq})\) where \(T_{pq}\) are \(n_p\times n_q\) matrices. We have \(P_f^{(s)}(T)\in {\mathcal P}^{{\pmb n}}(d)\) since it is a linear combination of monomial basis elements for the partition \({\pmb n}\), and we also have \(P_f^{(s)}(AT\,^{t}A)\in {\mathcal P}^{{\pmb n}}(d)\) since \({\mathcal P}^{{\pmb n}}(d)\) is stable under the action of \(GL({\pmb n},{\mathbb C})\). Now we prove that \(P_f(AT{\,^{t}A})=R(A)P_f(T)\) for any \(A \in GL({\pmb n},{\mathbb C})\). Any component of \(P_f(AT{\,^{t}A})-R(A)P_f(T)\) is in \({\mathcal P}^{{\pmb n}}(d)\) and is written as a linear combination of the monomial basis elements \(P_{{\pmb \nu }}^{M,{\pmb n}}(T)\) for the partition \({\pmb n}\) with \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\). On the other hand, by definition, \(P_f(T)-f(T)\) becomes 0 if we restrict it to \(T_{11}=T_{22}=\cdots =T_{rr}=0\). Since \(AT\,^{t}A\) for \(A=(A_1,\ldots ,A_r)\) is given by \((A_pT_{pq}\,^{t}A_q)_{1\le p,q\le r}\), the polynomial \(P_f(AT\,^{t}A)-f(AT\,^{t}A)\) also vanishes under the restriction to all \(T_{pp}=0\). We have

$$\begin{aligned} P_f(AT\,^{t}A)-f(AT\,^{t}A)=P_f(AT\,^{t}A)-R(A)P_f(T)+R(A)(P_f(T)-f(T)) \end{aligned}$$

and \(R(A)(P_f(T)-f(T))\) vanishes under the restriction to all \(T_{pp}=0\), so we see that \(P_f(AT\,^{t}A)-R(A)P_f(T)\) also vanishes under the same restriction. So, writing a component of \(P_f(AT{\,^{t}A})-R(A)P_f(T)\) as a linear combination \(\sum _{{\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}}c_{{\pmb \mu }}P_{{\pmb \mu }}^{M,{\pmb n}}(T)\) of the monomial basis for the partition \({\pmb n}\) and restricting this to \(T_{11}=\cdots =T_{rr}=0\), we have

$$\begin{aligned} \sum _{{\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}}c_{{\pmb \mu }}T^{{\pmb \mu }}=0. \end{aligned}$$

Since the \(T^{{\pmb \mu }}\) are linearly independent, we have \(c_{{\pmb \mu }}=0\). So we have \(P_f(AT{\,^{t}A})-R(A)P_f(T)=0\). The rest of the claims are obvious by Theorem 3.1. \(\square \)

In the above theorem, giving the vector \(f(T)\in {\mathcal V}(\rho )\) concretely for a general representation \(\rho \) is a matter of representation theory of GL(n) or \(GL({\pmb n})\), but it seems that no really practical closed formula can be found in the literature. For example, when \(r=2\), we can at least give the highest weight vector corresponding to \(\rho \) concretely, but to write down f(T) itself explicitly in a reasonably simple way from that is a different problem. We give one realization of f(T) in the case when \(r=2\) and \(n_1=n_2\). We prepare \(n\times n\) matrices U, V of independent variables. For any q with \(1\le q \le n\), denote by \(U_q\) and \(V_q\) the \(q\times n\) matrices consisting of the first q rows of U and V respectively. For any subset I of \(\{1,\ldots , n\}\) with cardinality \(\#(I)=q\), we denote by \((U_q)_I\) the \(q\times q\) minor given by the determinant of the matrix consisting of the \(i_{\nu }\)-th columns of \(U_q\) for \(i_{\nu }\in I\). Let \(\lambda =(\lambda _1,\ldots , \lambda _n)\) be a dominant integral weight and put \(\lambda _{n+1}=0\). The corresponding representation \(\rho _{n,\lambda }\) of \(GL_n({\mathbb C})\) can be realized on a space of bideterminants as in [8, 31], that is, the space spanned by all products \(\prod _{q=1}^{n}\prod _{j=1}^{\lambda _{q}-\lambda _{q+1}}(U_q)_{I_{qj}}\) over subsets \(I_{qj}\subset \{1,\ldots , n\}\) with \(\#(I_{qj})=q\), where the action is induced by \(U\rightarrow UA\) for \(A \in GL_n({\mathbb C})\). Using this realization and writing \(T=\left( {\begin{matrix} T_{11} & T_{12} \\ ^{t}T_{12} & T_{22} \end{matrix}}\right) \) in \(n\times n\) blocks \(T_{ij}\), we can give f(T) by

$$\begin{aligned} f(T)=\prod _{q=1}^{n}\det (U_qT_{12}\,^{t}V_q)^{\lambda _q-\lambda _{q+1}}, \end{aligned}$$
(19)

where U, V are dummy variables to describe a basis of \(\rho _{n,\lambda }\otimes \rho _{n,\lambda }\). We give more concrete examples in Sect. 1.

Another one-line formula for the differential operators based on a different idea is given in [21] for the case \(r=2\). This is an alternative practical way to calculate.

Finally we give the simplest concrete example obtained by the method in this section. We put \(n=4\), \({\pmb n}=(2,2)\) and consider the representation of \(GL({2,2})=GL(2)\times GL(2)\) given by \(\rho =det^{1}Sym(1) \otimes det^{1}Sym(1)\). The representation \(Sym(1)\otimes Sym(1)\) of \(GL(2,{\mathbb C})\times GL(2,{\mathbb C})\) is realized on the space of \(2\times 2\) matrices \(T_{12}\) by \(T_{12}\rightarrow A_1 T_{12} \,^{t}A_2\) for \(A_1\), \(A_2 \in GL(2,{\mathbb C})\). Then for f(T), we may put

$$\begin{aligned} f(T)=(t_{13}t_{24}-t_{14}t_{23})\begin{pmatrix} t_{13}\ &{} t_{14} \\ t_{23}\ &{} t_{24} \end{pmatrix}. \end{aligned}$$
(20)

(Using the realization (19), this is \(f(T)=\det (UV)\det (T_{12})\sum _{i,j=1}^{2}u_{1i}v_{1j}t_{i,j+2}\).)
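
This identity can be checked mechanically. The sketch below (assuming SymPy) builds f(T) from the realization (19) with \(\lambda =(2,1)\), which corresponds to \(det^{1}Sym(1)\), and compares it with the closed form just quoted.

```python
# Sketch (assuming SymPy): the bideterminant realization (19) for n1 = n2 = 2
# and lambda = (2,1), compared with the closed form quoted above.
import sympy as sp

U = sp.Matrix(2, 2, sp.symbols('u11 u12 u21 u22'))
V = sp.Matrix(2, 2, sp.symbols('v11 v12 v21 v22'))
T12 = sp.Matrix(2, 2, sp.symbols('t13 t14 t23 t24'))

# (19): f = det(U_1 T12 tV_1)^{lambda_1 - lambda_2} * det(U_2 T12 tV_2)^{lambda_2}
lin = (U[0, :] * T12 * V[0, :].T)[0, 0]      # the 1x1 "determinant" det(U_1 T12 tV_1)
f = lin * (U * T12 * V.T).det()

# closed form: det(U) det(V) det(T12) * sum u_{1i} v_{1j} t_{i,j+2}
expected = U.det() * V.det() * T12.det() * lin
print(sp.simplify(f - expected))             # 0
```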

Instead of writing the index as a matrix \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\), we write it symbolically as \(X^{{\pmb \nu }}=\prod _{i<j}x_{ij}^{\nu _{ij}}\). We can give the following monomial basis elements for \({\pmb n}=(1,1,1,1)\).

$$\begin{aligned} P_{x_{13}^2x_{24}}^M&= t_{13}^2t_{24}-t_{11}t_{24}t_{33}/d, \\ P_{x_{13}x_{14}x_{23}}^M&= t_{13}t_{14}t_{23}-(t_{12}t_{14}t_{33}+t_{11}t_{23}t_{34})/d+t_{11}t_{24}t_{33}/d^2, \\ P_{x_{12}x_{13}x_{34}}^M&= t_{12}t_{13}t_{34}-(t_{12}t_{14}t_{33}+t_{11}t_{23}t_{34})/d+t_{11}t_{24}t_{33}/d^2, \\ P_{x_{14}^2x_{23}}^M&= t_{14}^2t_{23}-t_{11}t_{23}t_{44}/d, \\ P_{x_{13}x_{14}x_{24}}^M&= t_{13}t_{14}t_{24}-(t_{11}t_{24}t_{34}+t_{12}t_{13}t_{44})/d+t_{11}t_{23}t_{44}/d^2, \\ P_{x_{12}x_{14}x_{34}}^M&= t_{12}t_{14}t_{34}-(t_{11}t_{24}t_{34}+t_{12}t_{13}t_{44})/d+t_{11}t_{23}t_{44}/d^2, \\ P_{x_{14}x_{23}^2}^M&= t_{14}t_{23}^2-t_{14}t_{22}t_{33}/d, \\ P_{x_{13}x_{23}x_{24}}^M&= t_{13}t_{23}t_{24}-(t_{12}t_{24}t_{33}+t_{13}t_{22}t_{34})/d+t_{14}t_{22}t_{33}/d^2, \\ P_{x_{12}x_{23}x_{34}}^M&= t_{12}t_{23}t_{34}-(t_{12}t_{24}t_{33}+t_{13}t_{22}t_{34})/d+t_{14}t_{22}t_{33}/d^2, \\ P_{x_{13}x_{24}^2}^M&= t_{13}t_{24}^2-t_{13}t_{22}t_{44}/d, \\ P_{x_{14}x_{23}x_{24}}^M&= t_{14}t_{23}t_{24}-(t_{14}t_{22}t_{34}+t_{12}t_{23}t_{44})/d+t_{13}t_{22}t_{44}/d^2, \\ P_{x_{12}x_{24}x_{34}}^M&= t_{12}t_{24}t_{34}-(t_{14}t_{22}t_{34}+t_{12}t_{23}t_{44})/d+t_{13}t_{22}t_{44}/d^2. \end{aligned}$$

Then the monomial basis elements for the partition \({\pmb n}=(2,2)\) which correspond to the monomials in f(T) of (20) are given as follows.

$$\begin{aligned} P_{x_{13}^2x_{24}}^{M,{\pmb n}}&= P_{x_{13}^2x_{24}}^M-\frac{2d}{(d-1)(d+2)}P_{x_{12}x_{13}x_{34}}^M, \\ P_{x_{13}x_{14}x_{23}}^{M,{\pmb n}}&= P_{x_{13}x_{14}x_{23}}^M-\frac{(d-2)}{(d-1)(d+2)}P_{x_{12}x_{13}x_{34}}^M, \\ P_{x_{14}^2x_{23}}^{M,{\pmb n}}&= P_{x_{14}^2x_{23}}^M-\frac{2d}{(d-1)(d+2)}P_{x_{12}x_{14}x_{34}}^M, \\ P_{x_{13}x_{14}x_{24}}^{M,{\pmb n}}&= P_{x_{13}x_{14}x_{24}}^M-\frac{d-2}{(d-1)(d+2)}P_{x_{12}x_{14}x_{34}}^M, \\ P_{x_{14}x_{23}^2}^{M,{\pmb n}}&= P_{x_{14}x_{23}^2}^M-\frac{2d}{(d-1)(d+2)}P_{x_{12}x_{23}x_{34}}^M, \\ P_{x_{13}x_{23}x_{24}}^{M,{\pmb n}}&= P_{x_{13}x_{23}x_{24}}^M-\frac{d-2}{(d-1)(d+2)}P_{x_{12}x_{23}x_{34}}^M, \\ P_{x_{13}x_{24}^2}^{M,{\pmb n}}&= P_{x_{13}x_{24}^2}^{M}-\frac{2d}{(d-1)(d+2)} P_{x_{12}x_{24}x_{34}}^M, \\ P_{x_{14}x_{23}x_{24}}^{M,{\pmb n}}&= P_{x_{14}x_{23}x_{24}}^{M} -\frac{d-2}{(d-1)(d+2)} P_{x_{12}x_{24}x_{34}}^M. \end{aligned}$$

Now if we put

$$\begin{aligned} P_f(T)= \begin{pmatrix} P_{x_{13}^2x_{24}}^{M,{\pmb n}}-P_{x_{13}x_{14}x_{23}}^{M,{\pmb n}}, \quad &{} \ P_{x_{13}x_{24}x_{14}}^{M,{\pmb n}}-P_{x_{14}^2x_{23}}^{M,{\pmb n}} \\ P_{x_{13}x_{23}x_{24}}^{M,{\pmb n}}-P_{x_{14}x_{23}^2}^{M,{\pmb n}},\quad &{} \ P_{x_{13}x_{24}^2}^{M,{\pmb n}}-P_{x_{14}x_{23}x_{24}}^{M,{\pmb n}} \end{pmatrix}, \end{aligned}$$
(21)

then we have \(D_{ij}(P_f(T))=0\) for \((i,j)=(1,1)\), (1, 2), (2, 2), (3, 3), (3, 4), (4, 4) and

$$\begin{aligned} P_f(AT^{t}A)=\det (A_1A_2)A_1P_f(T)\,^{t}A_2, \qquad A=\begin{pmatrix} A_1 &{} 0 \\ 0 &{} A_2 \end{pmatrix} \in GL(4,{\mathbb C}). \end{aligned}$$

Then \(P_f(T)\) is a basis of the one-dimensional space \({\mathcal P}^{{\pmb n}}_{\rho }(d)\) for \(d\ge 4\), and the differential operator \({\mathbb D}_{P_f}=P_f(\frac{{\partial }}{{\partial }Z})\) satisfies Condition 2.1 for the initial weight d/2 and the target weight \(det^{d/2+1}Sym(1)\otimes det^{d/2+1}Sym(1)\).
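
As a small sanity check of the covariance type (on the vector f(T) of (20) only, not on the full \(P_f(T)\), which would require the monomial basis data above), one can verify symbolically that \(f(A_1T_{12}\,^{t}A_2)=\det (A_1)\det (A_2)A_1f(T_{12})\,^{t}A_2\); a sketch assuming SymPy:

```python
# Sketch (assuming SymPy): check that f of (20) transforms like
# det^1 Sym(1) x det^1 Sym(1), i.e. f(A1 T12 tA2) = det(A1)det(A2) A1 f(T12) tA2.
import sympy as sp

T12 = sp.Matrix(2, 2, sp.symbols('t13 t14 t23 t24'))
A1 = sp.Matrix(2, 2, sp.symbols('a1:5'))
A2 = sp.Matrix(2, 2, sp.symbols('b1:5'))

def f(X):                        # f from (20): det(X) * X
    return X.det() * X

lhs = f(A1 * T12 * A2.T)         # the off-diagonal block of A T tA is A1 T12 tA2
rhs = (A1 * A2).det() * (A1 * f(T12) * A2.T)
print((lhs - rhs).applyfunc(sp.simplify))   # zero matrix
```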

5 Taylor expansion for variables in off-diagonal blocks

Now we apply Theorem 4.3 to the Taylor expansion of any scalar valued holomorphic function F(Z) on \(H_n\). We fix a partition \({\pmb n}=(n_1,\ldots ,n_r)\) of n with \(r\ge 2\) as before. For a fixed block decomposition of \(Z=(Z_{pq})\in H_n\) with \(n_p\times n_q\) matrices \(Z_{pq}\), we consider the Taylor expansion at \(z_{ij}=0\) for all \((i,j)\not \in {I({\pmb n})}\). We write this expansion by

$$\begin{aligned} F(Z)=\sum _{{\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}}F_{{\pmb \nu }}(Z_{11},Z_{22},\ldots ,Z_{rr})Z^{{\pmb \nu }}, \end{aligned}$$

where we write \(Z^{{\pmb \nu }}=\prod _{i<j,(i,j)\not \in {I({\pmb n})}}z_{ij}^{\nu _{ij}}\) for \({\pmb \nu }=(\nu _{ij})\in {\mathcal N}_0^{{\pmb n}}\). We denote by \(Res_{H_{{\pmb n}}}\) the restriction of functions on \(H_n\) to the diagonal blocks \(H_{{\pmb n}}=H_{n_1}\times \cdots \times H_{n_r}\). We assume that d is an even positive integer. If we apply one of our differential operators \({\mathbb D}\) satisfying Condition 2.1 to this expansion, it is clear that the components of the vector \(Res_{H_{{\pmb n}}}({\mathbb D}F)\) are linear combinations of higher derivatives of the \(F_{{\pmb \nu }}(Z_{11},Z_{22},\ldots ,Z_{rr})\). So when F is a Siegel modular form of weight \(k=d/2\), our operator gives a Siegel modular form of weight \(det^k\otimes \rho \) on \(H_{{\pmb n}}\) for some \(\rho \), depending only on a set of Taylor coefficients \(F_{{\pmb \nu }}\) of F. It is clear that a similar thing happens also in the case when F is a Jacobi form. This is a generalization of Eichler–Zagier [6] on the relations between higher derivatives of the Taylor coefficients of Jacobi forms on \(H\times {\mathbb C}\) and elliptic modular forms. We also have the following injectivity of these maps.

Theorem 5.1

We fix a positive integer k with \(2k\ge n\). Let F be any holomorphic function on \(H_n\). Then we have \(F=0\) if and only if \(Res_{H_{{\pmb n}}}({\mathbb D}F)=0\) for all \({\mathbb D}\) which satisfy Condition 2.1 for the fixed initial weight k and all target weights \(det^k\rho \), where \(\rho \) runs over all the irreducible representations of \(GL({\pmb n},{\mathbb C})\).

This is a subtle non-trivial theorem, and in order to prove it we need the existence of certain concrete differential operators. We note that even if we fix \(\rho \), the space \({\mathcal P}_{\rho }^{{\pmb n}}(d)\) is not one-dimensional in general, and we need all the corresponding differential operators \({\mathbb D}\) in this theorem. A similar theorem for Jacobi forms will be given in Theorem 6.1. These theorems mean that there exists an injection from scalar valued Siegel modular forms and Jacobi forms into a direct sum of vector valued Siegel modular forms of various weights on a lower dimensional domain, so at least theoretically, forms of higher degree are determined by forms of lower degree. In some cases, this is described more explicitly as in [15]. The above theorem is an easy corollary of the next theorem.

Theorem 5.2

We fix a natural number k with \(2k\ge n\). Then the Taylor coefficients \(F_{{\pmb \nu }}(Z_{11},Z_{22},\ldots ,Z_{rr})\) are linear combinations of certain higher derivatives of the images \(Res_{H_{{\pmb n}}}({\mathbb D}F)\) for all \({\mathbb D}=P(\frac{{\partial }}{{\partial }Z})\) with \(P \in {\mathcal P}^{{\pmb n}}(2k)\). In particular, we have \(F=0\) if and only if \(Res_{H_{{\pmb n}}}({\mathbb D}F)=0\) for all the scalar valued operators \({\mathbb D}=P(\frac{{\partial }}{{\partial }Z})\) with \(P \in {\mathcal P}^{{\pmb n}}(2k)\).

Proof

We prove this by induction. For multidegrees \(\mathbf{a}=(a_1,\ldots ,a_n)\) and \({\pmb b}=(b_1,\ldots ,b_n)\), we write \(\mathbf{a}\le {\pmb b}\) if \(a_i\le b_i\) for all i. We write \(\mathbf{a}<{\pmb b}\) if \(\mathbf{a}\le {\pmb b}\) and \(a_i<b_i\) for some i. We put \({\mathbb D}^{M,{\pmb n}}_{{\pmb \nu }}=P_{{\pmb \nu }}^{M,{\pmb n}}(\frac{{\partial }}{{\partial }Z})\) for a monomial basis for the partition \({\pmb n}\) and index \({\pmb \nu }\in {\mathcal N}_0^{{\pmb n}}\). We see the action of \({\mathbb D}^{M,{\pmb n}}_{{\pmb \nu }}\) on F more concretely. We define

$$\begin{aligned} {\mathcal N}_{{\pmb n}}=\{{\pmb c}=(c_{ij})\in {\mathcal N}; c_{ij}=0 \text { unless } (i,j)\in I({\pmb n})\}. \end{aligned}$$

Then any monomial in \({\mathbb C}[T]\) is written as \(T^{{\pmb c}}T^{{\pmb \mu }}\) for some \({\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}\) and \({\pmb c}\in {\mathcal N}_{{\pmb n}}\). By definition, we have \(P^{M,{\pmb n}}_{{\pmb \nu }}(T)=T^{{\pmb \nu }}+Q(T)\) with \(Q(T)|_{T_{11}=\cdots =T_{rr}=0}=0\). This means that Q(T) is a linear combination of monomials \(T^{{\pmb c}}T^{{\pmb \mu }}\) with \({\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}\) and \({\pmb c}\in {\mathcal N}_{{\pmb n}}\) such that \({\pmb c}=(c_{ij})\ne {\pmb 0}\), that is, \(c_{ij}\ne 0\) for some \((i,j)\in I({\pmb n})\). Since \(P^{M,{\pmb n}}_{{\pmb \nu }}(T)\) is homogeneous of multidegree \({\pmb \nu }\cdot {\pmb 1}\), we have \({\pmb \nu }\cdot {\pmb 1}={\pmb c}\cdot {\pmb 1} +{\pmb \mu }\cdot {\pmb 1}\) for such a monomial, and so we have \({\pmb \mu }\cdot {\pmb 1}<{\pmb \nu }\cdot {\pmb 1}\). We put \(|{\pmb \nu }|=\sum _{i<j}\nu _{ij}\) and write \({\pmb \nu }!=\prod _{i<j}\nu _{ij}!\) where we put \(0!=1\). Then for \({\pmb \mu }= (\mu _{ij})\in {\mathcal N}_0^{{\pmb n}}\), the action of the operator

$$\begin{aligned} \left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb \mu }}=\prod _{1\le i<j\le n}2^{-\mu _{ij}} \left( \dfrac{{\partial }}{{\partial }z_{ij}}\right) ^{\mu _{ij}} \end{aligned}$$

on \(Z^{{\pmb \kappa }}\) for \({\pmb \kappa }\in {\mathcal N}_0^{{\pmb n}}\) is obviously given by

$$\begin{aligned} Res_{H_{{\pmb n}}} \left( \left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb \mu }}Z^{{\pmb \kappa }}\right) = \left\{ \begin{array}{ll} 2^{-|{\pmb \mu }|}{\pmb \mu }! &{} \text { if } {\pmb \mu }={\pmb \kappa }, \\ 0 &{} \text { if }{\pmb \mu }\ne {\pmb \kappa }. \end{array}\right. \end{aligned}$$

On the other hand, we have \(\left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb c}}Z^{{\pmb \kappa }}=0\) for any pair \(({\pmb c},{\pmb \kappa }) \in {\mathcal N}_{{\pmb n}}\times {\mathcal N}_0^{{\pmb n}}\) with \({\pmb c}\ne {\pmb 0}\), and we have

$$\begin{aligned} Res_{H_{{\pmb n}}}\left( \left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb c}}\left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb \mu }} F(Z)\right) = 2^{-|{\pmb \mu }|}{\pmb \mu }! \left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb c}}F_{{\pmb \mu }}(Z_{11},\ldots ,Z_{rr}). \end{aligned}$$

So we have

$$\begin{aligned}&Res_{H_{{\pmb n}}}({\mathbb D}^{M,{\pmb n}}_{{\pmb \nu }}(F)) = 2^{-|{\pmb \nu }|}{\pmb \nu }! F_{{\pmb \nu }}(Z_{11},\ldots ,Z_{rr}) + \text { a linear combination of } \nonumber \\&\quad \left( \frac{{\partial }}{{\partial }Z}\right) ^{{\pmb c}}F_{{\pmb \mu }}(Z_{11},\ldots ,Z_{rr}) \text { for }{\pmb \mu }\in {\mathcal N}_0^{{\pmb n}}\text { with }{\pmb \mu }\cdot {\pmb 1}<{\pmb \nu }\cdot {\pmb 1} \text { and }{\pmb 0}\ne {\pmb c}\in {\mathcal N}_{{\pmb n}}.\nonumber \\ \end{aligned}$$
(22)

Now for a fixed multidegree \({\pmb b}\), we consider the condition that all \(F_{{\pmb \mu }}\) with \({\pmb \mu }\cdot {\pmb 1}\le {\pmb b}\) are linear combinations of higher derivatives of \(Res_{H_{{\pmb n}}}({\mathbb D}^{M,{\pmb n}}_{{\pmb \kappa }}F)\) for \({\pmb \kappa }\cdot {\pmb 1}\le {\pmb \mu }\cdot {\pmb 1}\). We have \(F_{{\pmb 0}}=Res_{H_{{\pmb n}}}(F)=Res_{H_{{\pmb n}}}({\mathbb D}_{{\pmb 0}}^{M,{\pmb n}} F)\), so the condition is satisfied for \({\pmb b}=(0,\ldots ,0)\). We fix any \({\pmb \nu }\) such that \(\mathbf{a}={\pmb \nu }\cdot {\pmb 1}\) and assume that the induction assumption is satisfied for any \({\pmb b}<{\pmb \nu }\cdot {\pmb 1}\). Then by (22), we may write \(F_{{\pmb \nu }}\) as a linear combination of \(Res_{H_{{\pmb n}}}({\mathbb D}^{M,{\pmb n}}_{{\pmb \nu }}F)\) and higher derivatives of \(F_{{\pmb \mu }}\) with \({\pmb \mu }\cdot {\pmb 1}<{\pmb \nu }\cdot {\pmb 1}\), which are linear combinations of higher derivatives of \(Res_{H_{{\pmb n}}}({\mathbb D}^{M,{\pmb n}}_{{\pmb \kappa }}F)\) with \({\pmb \kappa }\cdot {\pmb 1}\le {\pmb \mu }\cdot {\pmb 1}<{\pmb \nu }\cdot {\pmb 1}\) by induction assumption. So the assertion is also valid for \(\mathbf{a}\). So the first assertion follows. From this, we also see that if all \(Res_{H_{{\pmb n}}}({\mathbb D}_{{\pmb \nu }}^{M,{\pmb n}}(F))=0\), then \(F_{{\pmb \nu }}=0\) for all \({\pmb \nu }\in {\mathcal N}_{0}^{{\pmb n}}\) and we have \(F=0\). Only if part of the last assertion is trivial. \(\square \)
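
The normalization entering (22) (the factors \(2^{-|{\pmb \mu }|}{\pmb \mu }!\)) is easy to confirm symbolically; a minimal sketch assuming SymPy, with two off-diagonal variables playing the role of a block \(Z_{12}\):

```python
# Sketch (assuming SymPy): (d/dZ)^mu Z^kappa restricted to z = 0 equals
# 2^{-|mu|} mu!  when mu = kappa, and 0 otherwise, for off-diagonal variables.
import sympy as sp

z12, z13 = sp.symbols('z12 z13')

def apply_op(mu, kappa):
    Zk = z12**kappa[0] * z13**kappa[1]
    expr = sp.Rational(1, 2)**sum(mu) * sp.diff(Zk, z12, mu[0], z13, mu[1])
    return expr.subs({z12: 0, z13: 0})

print(apply_op((2, 1), (2, 1)))   # 2**(-3) * 2! * 1! = 1/4
print(apply_op((2, 1), (1, 2)))   # 0
```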

6 Taylor expansion of Jacobi forms

In Eichler and Zagier [6], it is proved that we can define elliptic modular forms of various weights by taking linear combinations of the restrictions to \(z=0\) of the derivatives of Jacobi forms of degree one. By applying our differential operators, we can do the same thing in the case of general degree. This means that Jacobi forms are at least theoretically (and sometimes more concretely) recovered from finitely many of their Taylor coefficients, associated with vector valued Siegel modular forms in fewer variables (for example, see [15]). Although the relation between these Taylor coefficients and Siegel modular forms is obtained by an easy application of the results of the previous section, there are some apparently different features, so for the readers’ convenience, we explain how our differential operators can be applied to (scalar valued) Jacobi forms of any matrix index. We also give explicit operators in several cases. For a further generalization to vector valued Jacobi forms, see [18].

6.1 Definition of Jacobi forms of matrix index

We review the definition of scalar valued Jacobi forms of matrix index (See for example Ziegler [36].) We fix a positive integer n and fix a partition \(n=n_1+n_2\) with \(n_1\), \(n_2\ge 1\). We define the Heisenberg group \(H^{(n_1,n_2)}({\mathbb R})\) as a subgroup of \(Sp(n,{\mathbb R})\) consisting of elements

$$\begin{aligned} \begin{pmatrix} 1_{n_1} &{} 0 &{} 0 &{} ^{t}\mu \\ \lambda &{} 1_{n_2} &{} \mu &{} \kappa \\ 0 &{} 0 &{} 1_{n_1} &{} -\,^{t}\lambda \\ 0 &{} 0 &{} 0 &{} 1_{n_2} \end{pmatrix}\in Sp(n,{\mathbb R}), \quad \begin{array}{l} (\lambda ,\mu \in M_{n_2n_1}({\mathbb R}),\ \kappa \in M_{n_2}({\mathbb R}),\\ \text { such that } \mu \,^{t}\lambda +\kappa \text { symmetric}). \end{array} \end{aligned}$$

We denote this element also by \([(\lambda ,\mu ),\kappa ]\). We define an embedding \(\iota \) of \(Sp(n_1,{\mathbb R})\) to \(Sp(n,{\mathbb R})\) by the mapping

$$\begin{aligned} g=\begin{pmatrix} a &{} b \\ c &{} d \end{pmatrix} \in Sp(n_1,{\mathbb R}) \longrightarrow \iota (g)=\begin{pmatrix} a &{} 0 &{} b &{} 0 \\ 0 &{} 1_{n_2} &{} 0 &{} 0 \\ c &{} 0 &{} d &{} 0 \\ 0 &{} 0 &{} 0 &{} 1_{n_2} \end{pmatrix} \in Sp(n,{\mathbb R}) \end{aligned}$$

and regard \(Sp(n_1,{\mathbb R})\) as a subgroup of \(Sp(n,{\mathbb R})\). We denote by \(J^{(n_1,n_2)}({\mathbb R})\) the subgroup of \(Sp(n,{\mathbb R})\) generated by \(H^{(n_1,n_2)}({\mathbb R})\) and \(Sp(n_1,{\mathbb R})\) and call this the real Jacobi group.

We fix a natural number k, and the action \(F|_{k}[g]\) of \(g\in Sp(n,{\mathbb R})\) on holomorphic functions F(Z) of \(Z\in H_{n}\) is defined as before. We fix an \(n_2 \times n_2\) half-integral symmetric matrix M. For simplicity we assume that M is positive definite. (The positive semi-definite case can be reduced to the positive definite case; see [36].) We consider a holomorphic function \(F(\tau ,z)\) on \(H_{n_1} \times M_{n_2n_1}({\mathbb C})\), where \(\tau \in H_{n_1}\), \(z \in M_{n_2n_1}({\mathbb C})\). We denote by \(\omega \) the variable in \(H_{n_2}\) and we write \(e(x)=\exp (2\pi ix)\) for any \(x \in {\mathbb C}\). Then by direct calculation, we see that for \(\widetilde{g} \in J^{(n_1,n_2)}({\mathbb R})\subset Sp(n,{\mathbb R})\), we have

$$\begin{aligned}{}[F(\tau ,z)e(tr(M\omega ))]|_{k}[\widetilde{g}]= \widetilde{F}_{\widetilde{g}}(\tau ,z)e(tr(M\omega )) \end{aligned}$$

for some holomorphic function \(\widetilde{F}_{\widetilde{g}}\) on \(H_{n_1}\times M_{n_2n_1}({\mathbb C})\) independent of \(\omega \). This \(\widetilde{F}_{\widetilde{g}}\) depends on the choice of k, M and \(\tilde{g}\). So we write \(\widetilde{F}_{\tilde{g}}=F|_{k,M}[\tilde{g}]\). More explicitly, for \(g=\begin{pmatrix} a &{} b \\ c &{} d \end{pmatrix} \in Sp(n_1,{\mathbb R})\) and \([(\lambda ,\mu ),\kappa ]\in H^{(n_1,n_2)}({\mathbb R})\), we have

$$\begin{aligned} F|_{k,M}[g]&= \det (c\tau +d)^{-k} e(-Tr(Mz(c\tau +d)^{-1}c\,^{t}z))F(g\tau ,z(c\tau +d)^{-1}), \\ F|_{M}[(\lambda ,\mu ),\kappa ]&= e(Tr(M(\lambda \tau \,^{t}\lambda +2\lambda \, ^{t}z+\mu \,^{t}\lambda +\kappa ))) F(\tau ,z+\lambda \tau +\mu ). \end{aligned}$$
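
For orientation, here is a minimal numerical sketch of these two actions in the classical case \(n_1=n_2=1\), so that \(\tau \), z, \(\lambda \), \(\mu \), \(\kappa \) are scalars and the index is a single number m; all function names are ad hoc and only transcribe the formulas just displayed.

```python
# Minimal sketch of the slash actions above for n1 = n2 = 1 (classical Jacobi forms).
import cmath

def e(x):
    """e(x) = exp(2*pi*i*x)."""
    return cmath.exp(2j * cmath.pi * x)

def slash_sl2(F, k, m, a, b, c, d):
    """F|_{k,m}[g] for g = [[a, b], [c, d]] in SL(2, R)."""
    def G(tau, z):
        den = c * tau + d
        return den**(-k) * e(-m * c * z * z / den) * F((a * tau + b) / den, z / den)
    return G

def slash_heis(F, m, lam, mu, kappa=0):
    """F|_m[(lambda, mu), kappa] for an element of the Heisenberg group."""
    def G(tau, z):
        return e(m * (lam**2 * tau + 2 * lam * z + mu * lam + kappa)) * F(tau, z + lam * tau + mu)
    return G
```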

We put

$$\begin{aligned} H^{(n_1,n_2)}({\mathbb Z}) = \{[(\lambda ,\mu ),\kappa ]\in H^{(n_1,n_2)}({\mathbb R}); \lambda , \mu \in M_{n_2n_1}({\mathbb Z}), \kappa \in M_{n_2}({\mathbb Z})\}. \end{aligned}$$

For any subgroup \(\Gamma \) of \(Sp(n_1,{\mathbb Z})\) of finite index, we embed \(\Gamma \) into \(J^{(n_1,n_2)}({\mathbb R})\) by \(\iota \) as before and denote by \(\Gamma ^{(n_1,n_2)}\) the subgroup of \(J^{(n_1,n_2)}({\mathbb R})\) generated by \(\iota (\Gamma )\) and \(H^{(n_1,n_2)}({\mathbb Z})\). A holomorphic function \(F(\tau ,z)\) on \(H_{n_1} \times M_{n_2n_1}({\mathbb C})\) is said to be a holomorphic Jacobi form of weight k of index M of \(\Gamma ^{(n_1,n_2)}\) if

$$\begin{aligned} F|_{k,M}[\gamma ]=F \end{aligned}$$
(23)

for all \(\gamma \in \Gamma ^{(n_1,n_2)}\) and if, in addition, it satisfies the condition on Fourier expansions explained below. We denote by \(L_{n_1}^{*}\) the set of all \(n_1\times n_1\) half-integral symmetric matrices. For any \(g\in Sp(n_1,{\mathbb Z})\), by (23), we have the Fourier expansion

$$\begin{aligned} F|_{k,M}[g]= \sum _{N,r}a^{(g)}(N,r)e(Tr(N\tau +r\,^{t}z)) \end{aligned}$$

where \(N\in \lambda _{g}^{-1}L_{n_{1}}^{*}\) for some rational number \(\lambda _g\) depending on g, and \(r \in M_{n_2n_1}({\mathbb Z})\). The condition on the Fourier expansion in the definition of Jacobi forms is that for \(g\in Sp(n_1,{\mathbb Z})\), we should have \(a^{(g)}(N,r)=0\) unless \(\begin{pmatrix} N & ^{t}r/2 \\ r/2 & M \end{pmatrix}\) is positive semi-definite. This condition on the Fourier expansion is automatically satisfied if \(n_1\ge 2\) by the Koecher principle (see [36]). The property (23) is written more precisely as follows.

$$\begin{aligned} F(\gamma \tau ,z(c\tau +d)^{-1})&=\det (c\tau +d)^{k}e(Tr(Mz(c\tau +d)^{-1}c\,^{t}z))F(\tau ,z) \nonumber \\&\quad \text { for any } \gamma =\begin{pmatrix} a &{} b \\ c &{} d \end{pmatrix}\in \Gamma , \end{aligned}$$
(24)
$$\begin{aligned} F(\tau ,z+\lambda \tau +\mu )&= e(-Tr(M(\lambda \tau \,^{t}\lambda +2z\,^{t}\lambda )))F(\tau ,z) \nonumber \\&\quad \text { for any } \lambda , \mu \in M_{n_2n_1}({\mathbb Z}). \end{aligned}$$
(25)

We denote by \(J_{k,M}(\Gamma ^{(n_1,n_2)})\) the space of holomorphic Jacobi forms. We consider the Taylor expansion at \(z=0\) for \(z \in M_{n_2n_1}({\mathbb C})\) and will describe the Taylor coefficients by (vector valued) Siegel modular forms of various weights.

For any dominant integral weight \(\lambda =(\lambda _1,\lambda _2,\ldots ,)\), we write \(|\lambda |=\sum _{l}\lambda _{l}\). We denote by \(V(\rho _{\lambda ,n_1})\) a representation space of the irreducible representation \(\rho _{\lambda ,n_1}\) of \(GL(n_1,{\mathbb C})\) corresponding to \(\lambda \) with \(\mathrm{depth}(\lambda )\le n_1\). We denote by \(A_{det^k\otimes \,\rho _{\lambda ,n_1}}(\Gamma )\) the space of Siegel modular forms of degree \(n_1\) of weight \(det^k\otimes \rho _{\lambda ,n_1} \) belonging to \(\Gamma \), that is, the space of \(V(\rho _{\lambda ,n_1})\)-valued holomorphic functions \(F(\tau )\) of \(\tau \in H_{n_1}\) which satisfy the condition

$$\begin{aligned} F(\gamma \tau )=\det (c\tau +d)^{k}\rho (c\tau +d)F(\tau ) \qquad \text { for any }\gamma =\begin{pmatrix} a &{} b \\ c &{} d \end{pmatrix}\in \Gamma \end{aligned}$$

which are holomorphic at each cusp of \(\Gamma \). For any natural number m, we denote the m-fold direct sum of this space by \((A_{det^k\otimes \,\rho _{\lambda , n_1}}(\Gamma ))^{m}\).

Theorem 6.1

For any non-negative integer h, we can define linear mappings \(\xi _h\)

$$\begin{aligned} \xi _h : J_{k,M}(\Gamma ^{(n_1,n_2)}) \longrightarrow \bigoplus _{|\lambda |=h}\bigl (A_{det^k\otimes \,\rho _{\lambda ,n_1}}(\Gamma )\bigr )^{\dim \rho _{\lambda ,n_2}} \end{aligned}$$

(\(\lambda \) runs over dominant integral weights), such that the following two conditions are satisfied.

  1. (a)

    The map \(\xi _{h}\) depends only on the coefficients of the Taylor expansion of \(F(\tau ,z)\) at \(z=0\) of total degree up to h.

  2. (b)

    For any h, the Taylor coefficients of F up to total degree h are determined by the set of the images \(\xi _{i}(F)\) for \(0\le i\le h\).

    In particular, the map \(\xi =(\xi _h)_{h=0}^{\infty }\) on \(J_{k,M}(\Gamma ^{(n_1,n_2)})\) is injective.

Proof

The mapping \(\xi _h\) is defined by means of \(\dim \rho _{\lambda ,n_2}\) \(V(\rho _{\lambda ,n_1})\)-valued holomorphic linear differential operators with constant coefficients. Indeed, by Theorem 3.4, for \({\pmb n}=(n_1,n_2)\), \(D_1=H_n\), \(D_2=H_{{\pmb n}}=H_{n_1}\times H_{n_2}\), and for any \(\lambda \) with \(\mathrm{depth}(\lambda )\le \min (n_1,n_2)\), there exists a \(V(\rho _{\lambda ,n_1})\otimes V(\rho _{\lambda ,n_2})\) valued differential operator \({\mathbb D}\) on holomorphic functions of \(H_{n}\) which satisfies Condition 2.1 for the initial weight k and the target weight \(det^k \rho _{\lambda ,n_1}\otimes det^k\rho _{\lambda ,n_2}\). If we apply this differential operator \({\mathbb D}\) to \(F(\tau ,z)e(Tr(M\omega ))\) for \(F(\tau ,z) \in J_{k,M}(\Gamma ^{(n_1,n_2)})\), then differentiation with respect to the components of \(\omega \) just produces multiplication by polynomials in the components of M. So we may define a differential operator \({\mathbb D}_M\) associated with \({\mathbb D}\), acting on holomorphic functions on \(H_{n_1} \times M_{n_2n_1}({\mathbb C})\), by

$$\begin{aligned} {\mathbb D}(F(\tau ,z)e(Tr(M\omega )))={\mathbb D}_{M}(F(\tau ,z))e(Tr(M\omega )). \end{aligned}$$

If we take a basis \(\{e_i\}\) of \(V(\rho _{\lambda ,n_2})\), then we have

$$\begin{aligned} Res_{D_2}({\mathbb D}(F(\tau ,z)e(Tr(M\omega )))) = \sum _{i=1}^{\dim \rho _{\lambda ,n_2}}(F_{\lambda ,i}(\tau )\otimes e_i) e(Tr(M\omega )), \end{aligned}$$

for some \(V(\rho _{\lambda ,n_1})\)-valued holomorphic functions \(F_{\lambda ,i}(\tau )\). By Condition 2.1, for any \(g\in Sp(n_1,{\mathbb R})\subset Sp(n,{\mathbb R})\), we have

$$\begin{aligned} Res({\mathbb D}_{M}(F(\tau ,z)|_{k,M}[g])) = \sum _{i=1}^{\dim \rho _{\lambda ,n_2}} (F_{\lambda ,i}|_{\det \,^{k}\otimes \rho _{\lambda ,n_1}}[g])\otimes e_i. \end{aligned}$$
(26)

where Res is the restriction of functions on \(H_{n_1}\times M_{n_2n_1}({\mathbb C})\) to \(H_{n_1}\). Since \(F|_{k,M}[\gamma _1]=F\) for any \(\gamma _1 \in \Gamma \), we have \(F_{\lambda ,i}|_{\det \,^{k}\rho _{\lambda ,n_1}}[\gamma _1]=F_{\lambda ,i}\). Now we must prove the holomorphy of \(F_{\lambda ,i}\) at the cusps. If \(n_1\ge 2\), this is obvious by the usual Koecher principle, but in general, by the definition of Jacobi forms, for any \(g\in Sp(n_1,{\mathbb Z})\), we have

$$\begin{aligned} F(\tau ,z)|_{k,M}[g] =\sum _{N,r}a^{(g)}(N,r)e(Tr(N\tau ))e(r\,^{t}z) \end{aligned}$$

where \(a^{(g)}(N,r)=0\) unless \(\begin{pmatrix} N & ^{t}r/2 \\ r/2 & M \end{pmatrix}\) is positive semi-definite. In particular, N is also positive semi-definite. The function \({\mathbb D}_{M}(F(\tau ,z)|_{k,M}[g])\) has the same property. So we have the expansion

$$\begin{aligned} F_{\lambda ,i}|_{\det \,^{k}\rho _{\lambda ,n_1}}[g]=\sum _{N}b^{(g)}(N)e(Tr(N\tau )) \end{aligned}$$

and \(b^{(g)}(N)=0\) unless N is positive semi-definite. So the condition on the Fourier expansions at the cusps for \(F_{\lambda ,i}\) is satisfied and we have \(F_{\lambda ,i}\in A_{det^k \rho _{\lambda ,n_1}}(\Gamma )\). The second assertion is the injectivity from the Taylor coefficients up to degree h to Siegel modular forms of degree \(n_1\). This has already been proved in Theorem 5.2. \(\square \)

In order to apply the above maps \(\xi _{h}\) and to determine the space \(J_{k,M}(\Gamma ^{(n_1,n_2)})\) from finitely many \(A_{det^k\otimes \, \rho _{\lambda ,n_1}}(\Gamma )\) explicitly, we have to solve the following problem.

Problem 6.2

We consider the vector of mappings in Theorem 6.1 obtained by taking

$$\begin{aligned} (\xi _0,\xi _1,\ldots ,\xi _h). \end{aligned}$$

What is the minimum h such that this mapping is injective?

Some upper bounds are known. For example, when \(n=2\) and \(n_1=n_2=1\), where the index M is a number \(m>0\), we know that \(h=2m\) if k is even and \(h=2m-3\) if k is odd by [6]. But for \(n_1\ge 2\), exact answers are known only in a few cases. It is known that

  • when \((n_1,n_2)=(2,1)\) and \(M=1\), then \(h=2\),

  • when \((n_1,n_2)=(2,1)\) and \(M=2\), then \(h=6\),

  • when \((n_1,n_2)=(3,1)\) and \(M=1\), then \(h=4\).

For the first two, see [15]. The last one is an unpublished joint work with S. Grushevsky.

If we denote the graded ring of Siegel modular forms of even weight by \(A_{even}(\Gamma ) =\oplus _{k\ge 0}A_{2k}(\Gamma )\), where \(A_{2k}(\Gamma )\) is the space of scalar valued Siegel modular forms of even weight 2k, then \(J_{M}(\Gamma )=\oplus _{k\ge 1}J_{k,M}(\Gamma ^{(n_1,n_2)})\) is an \(A_{even}(\Gamma )\)-module. Such module structure theorems have been given in [15] in some cases. (See also [18].)

6.2 Examples of differential operators acting on Jacobi forms

Since Theorem 6.1 may look abstract, we give here several concrete examples of \(\xi _h\). For the differential operators on Siegel modular forms of degree n, the partitions \((n_1,n_2)\) and \((n_2,n_1)\) are essentially the same. But for Jacobi forms, the former acts on functions on \(H_{n_1}\times M_{n_2n_1}({\mathbb C})\) and the latter on those on \(H_{n_2}\times M_{n_1n_2}({\mathbb C})\), and they are different. First assume that \(n_1=n-1\) and \(n_2=1\). So the Jacobi form in question is a function on \(H_{n-1}\times {\mathbb C}^{n-1}\) and the index M is just a number \(m>0\) since \(n_2=1\). Since the depth of \(\lambda \) is \(1=\min (1,n-1)\), we have \(\lambda =(l,0,\ldots )\) and the representation \(\rho _{\lambda ,n_1}\) of \(GL(n-1,{\mathbb C})\) is the symmetric tensor representation Sym(l) of some fixed degree l. On the other hand, \(\rho _{\lambda ,n_2}=\rho _{\lambda ,1}\) is a one-dimensional representation, i.e. \(\dim \rho _{\lambda ,1}=1\), which is just the l-th power of elements of \(GL(1,{\mathbb C})={\mathbb C}^{\times }\). So the mapping \(\xi _h\) gives a mapping from \(J_{k,m}(\Gamma ^{(n-1,1)})\) to \(A_{det^k Sym(l)}(\Gamma )\) for \(\Gamma \subset Sp(n-1,{\mathbb Z})\). Since the generating function of such differential operators (or of the corresponding polynomials) has already been given in (13), the concrete mapping is given as follows. For \(Z \in H_{n}\), we write

$$\begin{aligned} Z=\begin{pmatrix} \tau &{} ^{t}z \\ z &{} \omega \end{pmatrix} \end{aligned}$$

where \(\tau =(\tau _{ij}) \in H_{n-1}\), \(z=(z_{1},\ldots ,z_{n-1}) \in {\mathbb C}^{n-1}\), \(\omega \in H_{1}\). For any \(n-1\) dimensional vector z and \(\alpha =(\alpha _i)\in ({\mathbb Z}_{\ge 0})^{n-1}\), we write \(z^{\alpha }=\prod _{i=1}^{n-1}z_i^{\alpha _i}\) and \(|\alpha |=\sum _{i=1}^{n-1}\alpha _i\). For variables \(u=(u_1,\ldots ,u_{n-1})\), we define \({\mathcal D}_2(u)\) by

$$\begin{aligned} {\mathcal D}_2(u)=\sum _{1\le i\le j\le n-1}u_iu_j\frac{{\partial }}{{\partial }\tau _{ij}}. \end{aligned}$$

For \(f(\tau ,z)\in J_{k,m}(\Gamma ^{(n-1,1)})\) we write the Taylor expansion by

$$\begin{aligned} f(\tau ,z)=\sum _{\alpha \in ({\mathbb Z}_{\ge 0})^{n-1}}f_{\alpha }(\tau )z^{\alpha }. \end{aligned}$$

and for any integer \(l\ge 0\), we write

$$\begin{aligned} f_{l}(\tau ,u)=\sum _{|\alpha |=l}f_{\alpha }(\tau )u^{\alpha }. \end{aligned}$$

Then the mapping \(\xi _{h}\) is given by

$$\begin{aligned} \xi _{h}f=\sum _{0\le q \le [h/2]}(-2\pi im)^{q} {\mathcal D}_2(u)^{q}f_{h-2q}(\tau ,u), \end{aligned}$$

up to a constant (see [15]).

Next, we assume that \(n_1=1\) and \(n_2=n-1\). This time, we consider functions \(f(\tau ,z)\) where \(\tau \in H_1\), \(z=(z_1,\ldots ,z_{n-1})\in {\mathbb C}^{n-1}\). The dimension of the symmetric tensor representation \(\rho _{\lambda ,n-1}\) of \(GL(n-1,{\mathbb C})\) is in general greater than one, and for \(\lambda =(l,0,\cdots ,0)\) and \(n\ge 2\), it is given by

$$\begin{aligned} \dim \rho _{\lambda ,n-1}=\left( {\begin{array}{c}l+n-2\\ l\end{array}}\right) =\left( {\begin{array}{c}l+n-2\\ n-2\end{array}}\right) . \end{aligned}$$

This means that we have \(\left( {\begin{array}{c}l+n-2\\ n-2\end{array}}\right) \) different differential operators for the target weight \(k+l\). We use the notation \(\omega =(\omega _{ij}) \in H_{n-1}\). Here an index of Jacobi forms is a \((n-1)\times (n-1)\) positive definite half-integral matrix \(M=(\frac{1+\delta _{ij}}{2}m_{ij})\) with \(m_{ij}\in {\mathbb Z}\). In order to give differential operators associated with a polynomial P(T) in \(t_{ij}\) with \(n \times n\) matrix \(T=(t_{ij})\), we put \(t_{11}=\frac{{\partial }}{{\partial }\tau }\) and \(2t_{1i}=2t_{i1}=\dfrac{{\partial }}{{\partial }z_{i-1}}\) for \(i\ge 2\). For the other variables, we put \(t_{ij}=\frac{1+\delta _{ij}}{2}m_{ij}\) \((2\le i \le j \le n)\), since we have

$$\begin{aligned} \frac{1+\delta _{ij}}{2}\frac{{\partial }}{{\partial }\omega _{ij}}Tr(M\omega )= \frac{1+\delta _{ij}}{2}m_{ij}. \end{aligned}$$

So by virtue of (13), the generating function of the differential operator is given by

$$\begin{aligned} \dfrac{1}{\left( 1-\sum _{i=1}^{n-1}x_{1,i+1}\dfrac{{\partial }}{{\partial }z_{i}}+\dfrac{{\partial }}{{\partial }\tau } \sum _{1\le i \le j\le n-1} m_{ij}x_{1,i+1}x_{1,j+1}\right) ^{k-1}}. \end{aligned}$$
(27)

Here \(x_{1,i}\) are dummy variables and when we expand this as a formal power series in \(x_{1,i}\) for all i, then the coefficient of each monomial in \(x_{1,i}\) gives a different differential operator mapping Jacobi forms on \(H_1\times {\mathbb C}^{n-1}\) of weight k to elliptic modular forms of weight \(k+l\), where l is the total degree of the corresponding monomial in \(x_{1,i+1}\). Each monomial in \(x_{1,i}\) of degree l indicates a different component in \(A_{k+l}(\Gamma )^{\dim \rho _{l,n-1}}\). Hence expanding (27), the differential operators \(\xi _h\) of rank h for small h are given by

$$\begin{aligned} \xi _0 f&= f|_{z=0}, \\ \xi _1 f&= (k-1)\sum _{i=1}^{n-1}x_{1,i+1} \frac{{\partial }f}{{\partial }z_{i}}\biggl |_{z=0}, \\ \xi _2 f&= \biggl (-(k-1) \sum _{1\le i \le j\le n-1}m_{ij}x_{1,i+1}x_{1,j+1} \frac{{\partial }f}{{\partial }\tau } +\frac{k(k-1)}{2} \sum _{1\le i, j\le n-1}x_{1,i+1}x_{1,j+1}\frac{{\partial }^{2} f}{{\partial }z_{i}{\partial }z_{j}} \biggr )\biggl |_{z=0}. \end{aligned}$$
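
These operators can be reproduced mechanically from (27). The following sketch (assuming SymPy, with \(n-1=2\) and the symbols Dz1, Dz2, Dtau standing for the commuting operators \(\partial /\partial z_1\), \(\partial /\partial z_2\), \(\partial /\partial \tau \)) expands the generating series and prints the parts of degree one and two, which agree with \(\xi _1\) and \(\xi _2\) above.

```python
# Sketch (assuming SymPy): expand the generating series (27) for n - 1 = 2.
# Dz1, Dz2, Dtau are placeholders for the commuting operators d/dz1, d/dz2, d/dtau.
import sympy as sp

k, eps = sp.symbols('k epsilon')
x2, x3 = sp.symbols('x2 x3')                 # the dummy variables x_{1,2}, x_{1,3}
m11, m12, m22 = sp.symbols('m11 m12 m22')
Dz1, Dz2, Dtau = sp.symbols('Dz1 Dz2 Dtau')

den = 1 - (x2*Dz1 + x3*Dz2) + Dtau*(m11*x2**2 + m12*x2*x3 + m22*x3**2)
G = den.subs({x2: eps*x2, x3: eps*x3})**(-(k - 1))

ser = sp.expand(G.series(eps, 0, 3).removeO())
print(sp.expand(ser.coeff(eps, 1)))   # (k-1)*(x2*Dz1 + x3*Dz2)                        -> xi_1
print(sp.expand(ser.coeff(eps, 2)))   # -(k-1)*Dtau*sum m_ij x x + k(k-1)/2*(sum x Dz)^2 -> xi_2
```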

From this, we see the action on the Taylor coefficients more concretely. For the Taylor expansion

$$\begin{aligned} f(\tau ,z)=f_0(\tau )+\sum _{i=1}^{n-1}f_i(\tau )z_i +\sum _{1\le i\le j\le n-1}f_{ij}(\tau )z_iz_j+\cdots \end{aligned}$$

we have \(n-1\) differential operators

$$\begin{aligned} (k-1)\frac{{\partial }f}{{\partial }z_{i}}\biggl |_{z=0}=(k-1)f_{i}(\tau ) \end{aligned}$$

for \(i=1\), ..., \(n-1\), which map a Jacobi form of weight k to an elliptic modular form of weight \(k+1\). Corresponding to the pairs (i, j) with \(1\le i\le j\le n-1\), there are \(n(n-1)/2\) differential operators

$$\begin{aligned}&\left( (2-\delta _{ij})\frac{k(k-1)}{2}\frac{{\partial }^2 f}{{\partial }z_i{\partial }z_j} -(k-1)m_{ij}\frac{{\partial }f}{{\partial }\tau }\right) \biggl |_{z=0} \\&\quad = k(k-1)f_{ij}(\tau )-m_{ij}(k-1)\frac{{\partial }f_0(\tau )}{{\partial }\tau }, \end{aligned}$$

which map a Jacobi form on \(H_1\times {\mathbb C}^{n-1}\) of weight k of index M to elliptic modular forms of weight \(k+2\). It is clear that the \(f_{ij}(\tau )\) are recovered from the images of \(\xi _h\) with \(h\le 2\). In general, it is obvious that each Taylor coefficient of \(z^{\alpha }=\prod _{i=1}^{n-1}z_{i}^{\alpha _i}\) is recovered from the images of the differential operators which are obtained as the coefficients of \(\prod _{i=1}^{n-1}x_{1,i+1}^{\beta _i}\) with \(\sum _{i=1}^{n-1}\beta _i\le \sum _{i=1}^{n-1}\alpha _i\).
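
The last displayed identity can be verified directly on a truncated Taylor expansion; a sketch assuming SymPy (with \(n-1=2\) and \((i,j)=(1,1)\)):

```python
# Sketch (assuming SymPy): apply the degree-two operator above to a truncated
# Taylor expansion f = f0 + sum f_i z_i + sum_{i<=j} f_ij z_i z_j (here n - 1 = 2).
import sympy as sp

tau, z1, z2, k, m11 = sp.symbols('tau z1 z2 k m11')
f0, f1, f2, f11, f12, f22 = [sp.Function(name)(tau) for name in
                             ('f0', 'f1', 'f2', 'f11', 'f12', 'f22')]
f = f0 + f1*z1 + f2*z2 + f11*z1**2 + f12*z1*z2 + f22*z2**2

# operator for (i, j) = (1, 1): (2 - delta_ij) k(k-1)/2 d^2/dz1^2 - (k-1) m11 d/dtau, at z = 0
op = sp.Rational(1, 2)*k*(k - 1)*sp.diff(f, z1, 2) - (k - 1)*m11*sp.diff(f, tau)
target = k*(k - 1)*f11 - m11*(k - 1)*sp.diff(f0, tau)
print(sp.simplify(op.subs({z1: 0, z2: 0}) - target))   # 0
```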