1 Introduction

In this paper, we study correlation functions of characteristic polynomials in a sub-class of determinantal random point processes. They are called polynomial ensembles [39] and belong to the biorthogonal ensembles in the sense of Borodin [10]. Polynomial ensembles are characterised by the fact that one of the two determinants in the joint density of points is given by a Vandermonde determinant, while the other one is kept general. Thus they generalise the classical ensembles of Gaussian random matrices [41]. Polynomial ensembles appear in various contexts as the joint distribution of eigenvalues (or singular values) of random matrices, see [3, 14, 19, 20, 27]. They enjoy many invariance properties on the level of the joint density, kernel and bi-orthogonal functions [35, 38], and provide realisations of multiple orthogonal polynomials, see e.g. [8, 20, 39], and of Muttalib–Borodin ensembles [10, 42].

Random matrices enjoy many different applications in physics and beyond, see [2] and references therein. Polynomial ensembles in particular are relevant in the following contexts: Ensembles with an external field have been introduced as a tool to count intersection numbers of moduli spaces on Riemann surfaces [16]. In the application to the quantum field theory of the strong interactions, quantum chromodynamics (QCD), they have been used as a schematic model to study the influence of temperature on the chiral phase transition [29]. Detailed computations of Dirac operator eigenvalues [28, 45] within this class of models have so far been restricted to supersymmetric techniques; these can now be addressed in the framework of biorthogonal ensembles.

Recently, sums and products of random matrices have been shown to lead to polynomial ensembles [3, 18, 36]—see [4] for a review. This has important consequences for the spectrum of Lyapunov exponents, relating this multiplicative process to the additive process of Dyson’s Brownian motion [6]. Last but not least, polynomial ensembles of Pólya type have led to a deeper understanding of the relation between singular values and complex eigenvalues [34, 35], where a bijection between the respective point processes was constructed.

In this paper, we consider expectation values of products and ratios of characteristic polynomials within the class of polynomial ensembles. While these can be used to generate multi-point resolvents and thus arbitrary k-point density correlation functions, as well as the kernel of bi-orthogonal polynomials, they are of interest in their own right as well. An example of an application is the partition function of QCD with an arbitrary number of fermionic flavours [46]. In mathematics, the Montgomery conjecture in conjunction with moments of the Riemann zeta-function has led to important insights [32], where moments and correlations of characteristic polynomials relevant for more general L-functions were computed.

Mathematical properties of ratios of characteristic polynomials have equally received attention, and we will not be able to do full justice to the existing literature. Based on earlier works such as [7, 22], the determinantal structure of the expectation value of ratios of characteristic polynomials in orthogonal polynomial ensembles was expressed in several equivalent forms, given in terms of orthogonal polynomials, their Cauchy transforms or their respective kernels. This structure was generalised to products of characteristic polynomials in [1], as well as to all symmetry classes [33]. The universality of such ratios has been studied in several works [13, 47], in particular their relation to the sine- and Airy-kernel [11]. New critical behaviours have been found from such ensembles as well [15], and their universality was discussed in [9].

Moving to polynomial ensembles, expectation values of products are easy to evaluate by including them into the Vandermonde determinant, just as for orthogonal polynomial ensembles. Determinantal formulas for expectation values of characteristic polynomials and their inverse have been derived, see e.g. [8, 19, 21]. A duality in the number of products and matrix dimension, which is well known for the classical ensembles, holds also in this external field model [17]. The kernel for general polynomial ensembles has been expressed in terms of the residue of a single ratio of characteristic polynomials in [19], see also [11, 26]. Most recently the study of eigenvector statistics of random matrices has seen a revival, and also in this context expectation values of ratios of characteristic polynomials in polynomial ensembles arise [23, 24]. This has been one of the starting points of the present work.

The outline of the paper is as follows. In Sect. 2, we introduce polynomial ensembles, provide several examples, and state the main results of the present paper. In particular, Theorem 2.2 says that any polynomial ensemble is a Giambelli compatible point process in the sense of Borodin, Olshanski, and Strahov [12]. This leads to Theorem 2.3, expressing the expectation value of the ratio of an equal number of characteristic polynomials as a determinant of a single ratio, generalising [7, Theorem 3.3] to polynomial ensembles. In Sect. 2.3, we introduce a more restricted class of polynomial ensembles which we call invertible. Here, we give a nested multiple complex contour integral representation for general ratios of characteristic polynomials in Theorem 2.9. The number of integrals only depends on the number of characteristic polynomials, but not on the number of points N of the point process. This generalises the results of [24, Theorem 5.1] to rectangular random matrices, in the presence of an arbitrary number of characteristic polynomials. Several examples are given that belong to the class of invertible polynomial ensembles, including the external field models. Sections 3 and 4 are devoted to the proofs of the results stated in Sect. 2. Section 5 contains some special cases and comparison with the work by Fyodorov, Grela, and Strahov [24]. Finally, Appendix A collects properties of the Vandermonde determinant, when adding or removing factors.

2 Definitions and Statement of Results

2.1 Polynomial Ensembles

We introduce polynomial ensembles following [39]. They are defined by the following probability density function on \(I^N\), where \(I \subseteq {\mathbb {R}}\) is an interval:

$$\begin{aligned} {\mathcal {P}}(x_1,\ldots ,x_N) = \frac{1}{{\mathcal {Z}}_N} \Delta _N(x_1,\ldots ,x_N) \det [\varphi _l(x_k)]_{k,l=1}^{N}\ , \end{aligned}$$
(2.1)

where \(\Delta _N(x_1,\ldots ,x_N) = \prod _{1\le i < j \le N} (x_i-x_j)=\det \left[ x_i^{N-j}\right] _{i,j=1}^N\) is the Vandermonde determinant of N variables. The \(\varphi _{1},\ldots ,\varphi _{N}\) are certain integrable real-valued functions on I, such that the normalisation constant \({\mathcal {Z}}_N\)

$$\begin{aligned} {\mathcal {Z}}_N= & {} \left( \prod _{n=1}^{N} \int _I \mathrm{d}x_n \right) \Delta _N(x_1,\ldots ,x_N) \det [\varphi _l(x_k)]_{k,l=1}^{N}\nonumber \\= & {} N! (-1)^{N(N-1)/2} \det [G] \ , \end{aligned}$$
(2.2)

exists and is nonzero. The constant \({\mathcal {Z}}_N\) is also called the partition function in the physics literature. Polynomial ensembles are formed by the eigenvalues (or singular values) of certain \(N\times N\) random matrices H, see the examples below. Here, the matrix \(G=(g_{k,l})_{k,l=1}^{N}\) is the invertible generalised moment matrix with entries

$$\begin{aligned} g_{k,l} = \int _I \mathrm{d}x\, x^{k-1} \varphi _{l}(x)\ . \end{aligned}$$
(2.3)

The second equality in (2.2) follows using (A.1) and the Andréief integral formula,

$$\begin{aligned}&\left( \prod _{n=1}^{N} \int _I \mathrm{d}x_n \right) \det [\psi _l(x_k)]_{k,l=1}^{N} \det [\phi _l(x_k)]_{k,l=1}^{N}\nonumber \\&\quad =N! \det \left[ \int _I\mathrm{d}x \psi _k(x)\phi _l(x)\right] _{k,l=1}^{N} \ , \end{aligned}$$
(2.4)

valid for any two sets of integrable functions \(\psi _k\) and \(\phi _l\). We will now give some explicit realisations of polynomial ensembles in terms of random matrices. The simplest example of a polynomial ensemble is given by the eigenvalues of \(N\times N\) complex Hermitian random matrices H from the Gaussian Unitary Ensemble (GUE), defined by the probability measure

$$\begin{aligned} {P}_{\mathrm{GUE}}(H)\mathrm{d}H= c_N \exp [-{\mathrm{Tr\,}}[H^2]]\mathrm{d}H\ ,\quad c_N= {2^{\frac{N(N-1)}{2}}}{\pi ^{-\frac{N^2}{2}}}\ . \end{aligned}$$
(2.5)

The probability density function of the real eigenvalues \(x_1,\ldots ,x_N\) of H reads [41]

$$\begin{aligned} {\mathcal {P}}_{\mathrm{GUE}}(x_1,\ldots ,x_N) = \frac{1}{{\mathcal {Z}}_N^{\mathrm{GUE}}} \Delta _N(x_1,\ldots ,x_N)^2 \exp \left[ -\sum _{j=1}^Nx_j^2\right] . \end{aligned}$$
(2.6)

This is a polynomial ensemble where the resulting \(\varphi \)-functions, \(\varphi _k(x)=x^{N-k}e^{-x^2}\), are obtained after multiplying the exponential factors into one of the Vandermonde determinants. Note that the GUE is an orthogonal polynomial ensemble.
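Since (2.5) fully specifies the ensemble, it is easy to probe numerically. The following minimal sketch is our own illustration (the entry variances matching the weight \(\exp [-{\mathrm{Tr\,}}H^2]\) and the large-N semicircle comparison are standard facts not spelled out above):

```python
import numpy as np

# Sample Hermitian matrices with density ~ exp(-Tr H^2), cf. Eq. (2.5):
# diagonal entries ~ N(0, 1/2), off-diagonal real/imag parts ~ N(0, 1/4).
rng = np.random.default_rng(0)
N, samples = 50, 400

def sample_gue(n, rng):
    X = rng.normal(scale=0.5, size=(n, n)) + 1j * rng.normal(scale=0.5, size=(n, n))
    H = np.triu(X, 1)
    return H + H.conj().T + np.diag(rng.normal(scale=np.sqrt(0.5), size=n))

eigs = np.concatenate([np.linalg.eigvalsh(sample_gue(N, rng)) for _ in range(samples)])

# For this normalisation the eigenvalue density converges, for large N,
# to the semicircle sqrt(2N - x^2)/(pi N), supported on |x| <= sqrt(2N).
hist, edges = np.histogram(eigs, bins=60, density=True)
x = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(np.clip(2 * N - x**2, 0.0, None)) / (np.pi * N)
print(np.max(np.abs(hist - semicircle)))      # small deviation for large N
```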

The GUE with an external source or field [14, 27] contains an additional constant, deterministic Hermitian matrix A of size \(N\times N\) that we choose to be diagonal here, \(A=\text{ diag }(a_1,\ldots ,a_N)\) with \(a_j\in {\mathbb {R}}\) for \(j=1,\ldots ,N\), without loss of generality. It will constitute our first main example and is defined by the probability measure

$$\begin{aligned} {P}_{\mathrm{ext1}}(H)\mathrm{d}H= {c}_N \exp [-{\mathrm{Tr\,}}[ (H-A)^2]]\mathrm{d}H\ , \end{aligned}$$
(2.7)

with the probability density function

$$\begin{aligned} {\mathcal {P}}_{\mathrm{ext1}}(x_1,\ldots ,x_N) = \frac{1}{{\mathcal {Z}}_N^\mathrm{ext1}} \Delta _N(x_1,\ldots ,x_N) \det \left[ \exp [-(x_j-a_k)^2]\right] _{j,k=1}^N. \end{aligned}$$
(2.8)

The resulting \(\varphi _k(x)=e^{-(x-a_k)^2}\) follows from the Harish-Chandra–Itzykson–Zuber integral [30, 31] and from multiplying the Gaussian term inside the determinant. We refer to [14] for the derivation. Notice that the second determinant in (2.8) cannot be reduced to a Vandermonde determinant in general.

Our second main example is the chiral GUE with an external source, cf. [19]. It is defined in terms of a complex non-Hermitian \(N\times (N+\nu )\)-dimensional random matrix X and a deterministic matrix A of equal size, with \(\nu \ge 0\). Again without loss of generality, we can choose \(AA^\dag =\text{ diag }(a_1,\ldots ,a_N)\), with elements \(a_j\in {\mathbb {R}}_+\) for \(j=1,\ldots ,N\). The ensemble is defined by

$$\begin{aligned} P_{\mathrm{ext2}}(X)\mathrm{d}X = {\hat{c}}_N \exp \left[ -{\mathrm{Tr\,}}[ (X-A)(X^\dag -A^\dag )]\right] \mathrm{d}X ,\quad {\hat{c}}_N=c_N \pi ^{-N(N+\nu )}. \end{aligned}$$
(2.9)

At vanishing A, it reduces to the chiral GUE, also called the complex Wishart or Laguerre unitary ensemble. The probability density function of the real positive eigenvalues \(x_1,\ldots ,x_N\) of \(XX^\dag \) reads,

$$\begin{aligned}&{\mathcal {P}}_{\mathrm{ext2}}(x_1,\ldots ,x_N) \nonumber \\&\quad = \frac{1}{{\mathcal {Z}}_N^\mathrm{ext2}} \Delta _N(x_1,\ldots ,x_N) \det \left[ x_j^{\nu /2}e^{-(x_j+a_k)}I_\nu \left( 2\sqrt{a_kx_j}\right) \right] _{j,k=1}^N.\nonumber \\ \end{aligned}$$
(2.10)

The modified Bessel function of the first kind enters here: \(\varphi _k(x)=x^{\nu /2}e^{-(x+a_k)}I_\nu (2\sqrt{a_kx})\) follows from the Berezin–Karpelevich integral formula, cf. [44]. In principle, we may also allow the parameter \(\nu >-1\) to take real values.

In the application to QCD at finite temperature, the density (2.9) is typically endowed with \(N_f\) extra factors, \(P(X) \rightarrow P(X)\prod _{f=1}^{N_f}\det [XX^\dag +m_f^2\mathbb {1}_N]\), with \(m_{f=1,\ldots ,N_f}\in {\mathbb {R}}\), that correspond to \(N_f\) fermion flavours with masses \(m_f\), see e.g. [45], which also motivates the present study. We would like to mention that the expectation value of the ratio of two characteristic polynomials studied in [24] follows from the above ensemble when setting \(\nu =0\) and letting \(m_f\rightarrow 0\) for all \(f=1,\ldots ,N_f\). This leads to the polynomial ensemble with \(\varphi _k(x)=x^{{\mathcal {L}}}e^{-(x+a_k)}I_0(2\sqrt{a_kx})\) of [24], with \({\mathcal {L}}= N_f\).

Further examples have been given already in the introduction, including the singular values of products of independent random matrices, see [4] for a review, where \(\varphi _k(x)\) is given by a special function, the Meijer G-function, and more generally Pólya ensembles [34, 35]. Notice that when also the Vandermonde determinant in (2.1) is replaced by a general determinant, as in the Andréief integration formula (2.4), we are back to biorthogonal ensembles [10]—an explicit example can be found in [5]. For this class, our methods below will not apply in general.

2.2 Polynomial Ensembles as Giambelli Compatible Point Processes

In this section, we adopt notation and definitions from Macdonald [40]. Let \(\Lambda \) be the algebra of symmetric functions. The Schur functions \(s_{\lambda }\) indexed by Young diagrams \(\lambda \) form an orthonormal basis in \(\Lambda \). Recall that Young diagrams can be written in the Frobenius notation, namely

$$\begin{aligned} \lambda =\left( p_1,\ldots ,p_d|q_1,\ldots ,q_d\right) , \end{aligned}$$

where d equals the number of boxes on the diagonal of \(\lambda \), \(p_j\) with \(j=1,\ldots ,d\) denotes the number of boxes in the jth row of \(\lambda \) to the right of the diagonal, and \(q_l\) with \(l=1,\ldots ,d\) denotes the number of boxes in the lth column of \(\lambda \) below the diagonal. The Schur functions satisfy the Giambelli formula:

$$\begin{aligned} s_{\left( p_1,\ldots ,p_d|q_1,\ldots ,q_d\right) } =\det \left[ s_{(p_i|q_j)}\right] _{i,j=1}^d. \end{aligned}$$
(2.11)

The Schur polynomial \(s_{\lambda }\left( x_1,\ldots ,x_N\right) \) is the specialisation of \(s_{\lambda }\) to the variables \(x_1\), \(\ldots \), \(x_N\). For a Young diagram \(\lambda \) with \(l(\lambda )\le N\) rows of lengths \(\lambda _1\ge \cdots \ge \lambda _{l(\lambda )}>0\), it can be defined by

$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N)=\frac{1}{\Delta _N(x_1,\ldots ,x_N)} \det \left[ x_i^{\lambda _j + N -j} \right] _{i,j=1}^{N}. \end{aligned}$$
(2.12)

If \(l(\lambda )>N\), then \(s_\lambda (x_1,\ldots ,x_N) \equiv 0\) (by definition).

The Giambelli compatible point processes form a class of point processes for which various probabilistic quantities of interest can be studied using the Schur symmetric functions. This class of point processes was introduced by Borodin, Olshanski, and Strahov [12] to prove determinantal identities for averages of analogues of characteristic polynomials for ensembles originating from Random Matrix Theory, the theory of random partitions, and the representation theory of the infinite symmetric group. In the context of random point processes formed by N-point random configurations on a subset of \({\mathbb {R}}\), the Giambelli compatible point processes can be defined as follows.

Definition 2.1

Assume that a point process is formed by an N-point configuration \(\left( x_1,\ldots ,x_N\right) \) on \(I\subseteq {\mathbb {R}}\). If the Giambelli formula

$$\begin{aligned} s_{(p_1,\ldots ,p_d\vert q_1,\ldots ,q_d)}(x_1,\ldots ,x_N) = \det \left[ s_{(p_i \vert q_j)}(x_1,\ldots ,x_N) \right] _{i,j=1}^{d} \end{aligned}$$
(2.13)

(valid for the Schur polynomial \(s_\lambda (x_1,\ldots ,x_N)\) parameterised by an arbitrary Young diagram \(\lambda =\left( p_1,\ldots ,p_d|q_1,\ldots ,q_d\right) \)) can be extended to averages, i.e.

$$\begin{aligned} {\mathbb {E}}\left[ s_{(p_1,\ldots ,p_d\vert q_1,\ldots ,q_d)}(x_1,\ldots ,x_N) \right] = \det \left[ {\mathbb {E}}\left[ s_{(p_i \vert q_j)}(x_1,\ldots ,x_N) \right] \right] _{i,j=1}^{d}\ , \end{aligned}$$
(2.14)

then the random point process is called a Giambelli compatible point process.

In the present paper, we show that the polynomial ensembles introduced in Sect. 2.1 can be understood as Giambelli compatible point processes. Namely, the following theorem holds true.

Theorem 2.2

Any polynomial ensemble in the sense of Sect. 2.1 is a Giambelli compatible point process.
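Theorem 2.2 lends itself to a direct numerical test. The sketch below is our own check (not part of the proof): it verifies the Giambelli identity (2.14) by Monte Carlo for the GUE (2.6) with \(N=3\) and \(\lambda =(2,2)=(1,0|1,0)\), whose hook diagrams are \((1|1)=(2,1)\), \((1|0)=(2)\), \((0|1)=(1,1)\) and \((0|0)=(1)\); the sampler and sample size are illustrative choices.

```python
import numpy as np

# Monte Carlo check of Eq. (2.14) for the GUE, N = 3, lambda = (2,2).
rng = np.random.default_rng(1)
N, samples = 3, 200000

def schur(x, lam):
    # Eq. (2.12): s_lambda(x) = det[x_i^{lam_j + N - j}] / det[x_i^{N - j}]
    lam = list(lam) + [0] * (N - len(lam))
    V = np.array([[xi**(lam[j] + N - 1 - j) for j in range(N)] for xi in x])
    W = np.array([[xi**(N - 1 - j) for j in range(N)] for xi in x])
    return np.linalg.det(V) / np.linalg.det(W)

diagrams = {'22': (2, 2), '21': (2, 1), '2': (2,), '11': (1, 1), '1': (1,)}
acc = dict.fromkeys(diagrams, 0.0)
for _ in range(samples):
    X = rng.normal(scale=0.5, size=(N, N)) + 1j * rng.normal(scale=0.5, size=(N, N))
    H = np.triu(X, 1)
    H = H + H.conj().T + np.diag(rng.normal(scale=np.sqrt(0.5), size=N))
    x = np.linalg.eigvalsh(H)
    for key, lam in diagrams.items():
        acc[key] += schur(x, lam) / samples

# Giambelli: E[s_{(2,2)}] = det[[E s_{(2,1)}, E s_{(2)}], [E s_{(1,1)}, E s_{(1)}]]
lhs = acc['22']
rhs = acc['21'] * acc['1'] - acc['2'] * acc['11']
print(lhs, rhs)   # both ~ 4.5, equal up to Monte Carlo error
```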

As explained in Borodin, Olshanski, and Strahov [12], the Giambelli compatibility of point processes implies determinantal formulas for averages of ratios of characteristic polynomials. Namely, we obtain

Theorem 2.3

Assume that \(x_1,\ldots ,x_N\) form a polynomial ensemble. Let \(u_1,\ldots , u_M \in {\mathbb {C}} \backslash {\mathbb {R}}\) and \(z_1,\ldots ,z_M \in {\mathbb {C}}\) for any \(M \in {\mathbb {N}}\) be pairwise distinct variables. Then

$$\begin{aligned} {\mathbb {E}}\left[ \prod _{m=1}^{M} \frac{ D_N(z_m)}{ D_N(u_m)} \right] = \left[ \det \left( \frac{1}{u_i-z_j} \right) _{i,j=1}^{M} \right] ^{-1} \det \left[ \frac{1}{u_i - z_j} {\mathbb {E}} \left( \frac{D_N(z_j)}{D_N(u_i)} \right) \right] _{i,j=1}^{M}, \end{aligned}$$
(2.15)

where \(D_N(z)=\prod _{n=1}^N(z-x_n)\) denotes the characteristic polynomial associated with the random variables \(x_1\), \(\ldots \), \(x_N\).
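The identity (2.15) can likewise be checked by simulation. A hedged sketch of ours for the GUE with \(M=2\) (all parameter values are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of Eq. (2.15) for the GUE, M = 2.
rng = np.random.default_rng(2)
N, samples = 6, 100000
z = np.array([0.4, -0.3])                    # arguments in the numerator
u = np.array([0.5 + 0.7j, -0.2 + 0.4j])      # non-real arguments in the denominator

D = np.empty((samples, 4), dtype=complex)    # columns: D(z1), D(z2), D(u1), D(u2)
for k in range(samples):
    X = rng.normal(scale=0.5, size=(N, N)) + 1j * rng.normal(scale=0.5, size=(N, N))
    H = np.triu(X, 1)
    H = H + H.conj().T + np.diag(rng.normal(scale=np.sqrt(0.5), size=N))
    x = np.linalg.eigvalsh(H)
    D[k] = [np.prod(w - x) for w in (*z, *u)]

lhs = np.mean(D[:, 0] * D[:, 1] / (D[:, 2] * D[:, 3]))
E = np.array([[np.mean(D[:, j] / D[:, 2 + i]) for j in range(2)] for i in range(2)])
C = 1.0 / (u[:, None] - z[None, :])          # Cauchy matrix 1/(u_i - z_j)
rhs = np.linalg.det(C * E) / np.linalg.det(C)
print(lhs, rhs)                              # agree up to Monte Carlo error
```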

2.3 Averages of Arbitrary Ratios of Characteristic Polynomials in Invertible Ensembles

In this section, we present our results for arbitrary ratios of characteristic polynomials,

$$\begin{aligned} {\mathbb {E}}\left[ \frac{\prod _{m=1}^{M} D_{N}(z_m)}{\prod _{l=1}^{L} D_{N}(y_l) } \right] , \end{aligned}$$
(2.16)

allowing the number M of characteristic polynomials in the numerator and the number \(L\le N\) in the denominator to differ. As before, we will assume the parameters \(y_1,\ldots ,y_L \in {\mathbb {C}} \backslash {\mathbb {R}}\) and \(z_1,\ldots ,z_M \in {\mathbb {C}}\) to be pairwise distinct. We will not consider the most general polynomial ensembles (2.1) here, but consider functions \(\varphi _j(x)\) that satisfy certain conditions to be specified below.

Definition 2.4

Consider a polynomial ensemble defined by the probability density function (2.1). Assume that \(\varphi _l(x)=\varphi (a_l,x)\) for \(l=1,\ldots ,N\) (where \(a_1,\ldots ,a_N\) are real parameters) is analytic in both arguments, and that there exists a family \(\left\{ \pi _k\right\} _{k=0}^{\infty }\) of monic polynomials such that each polynomial \(\pi _k\) of degree k can be represented as

$$\begin{aligned} \pi _{k}(a)=\int _I \mathrm{d}x x^{k}\varphi (a,x), \quad k=0,1,\ldots . \end{aligned}$$
(2.17)

In addition, assume that Eq. (2.17) is invertible, i.e. there exists a function \(F: I'\times {\mathbb {C}}\rightarrow {\mathbb {C}}\) such that

$$\begin{aligned} z^{k}=\int _{I^\prime }\mathrm{d}s F(s,z)\pi _{k}(s),\quad k=0,1,\ldots , \end{aligned}$$
(2.18)

where \(I^\prime \) is a certain contour in the complex plane. Then, we will refer to such a polynomial ensemble as an invertible ensemble.

Remark 2.5

Condition (2.17) together with (2.2) immediately implies that for invertible polynomial ensembles the normalising partition function simplifies as follows:

$$\begin{aligned} {\mathcal {Z}}_N=N!\Delta _N(a_1,\ldots ,a_N). \end{aligned}$$
(2.19)

Here, we use that in (A.1) the determinant of monomials equals that of arbitrary monic polynomials.
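A quick numerical illustration of (2.19) (our own sketch, anticipating the Gaussian \(\varphi (a,x)\) of Example 2.6 below): build the generalised moment matrix (2.3) by quadrature and compare \(N!(-1)^{N(N-1)/2}\det G\) from (2.2) with \(N!\Delta _N(a_1,\ldots ,a_N)\).

```python
import math
import numpy as np
from scipy.integrate import quad

# Check Z_N = N! (-1)^{N(N-1)/2} det G = N! Delta_N(a)
# for the invertible choice phi(a, x) = exp(-(x-a)^2)/sqrt(pi).
a = np.array([-0.6, 0.2, 0.9])               # arbitrary parameter values
N = len(a)
G = np.array([[quad(lambda x, al=al, k=k: x**k * np.exp(-(x - al)**2) / np.sqrt(np.pi),
                    -np.inf, np.inf)[0]
               for al in a] for k in range(N)])   # g_{k+1,l} = int x^k phi(a_l, x) dx
vandermonde = np.prod([a[i] - a[j] for i in range(N) for j in range(i + 1, N)])
lhs = math.factorial(N) * (-1)**(N * (N - 1) // 2) * np.linalg.det(G)
print(lhs, math.factorial(N) * vandermonde)  # both equal Z_N = N! Delta_N(a)
```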

We will now present two examples for polynomial ensembles of invertible type according to Definition 2.4 and comment on the general class of such ensembles.

Example 2.6

Our first example is given by the GUE with external field (2.8). Here, the eigenvalues take real values, \(I={\mathbb {R}}\), and the functions \(\varphi _l(x)\) can be chosen as

$$\begin{aligned} \varphi _l(x)=\varphi (a_l,x)=\frac{e^{-(x-a_l)^2}}{\sqrt{\pi }}, \end{aligned}$$
(2.20)

which are analytic. From [25, 8.951], we know the following representation of the standard Hermite polynomials \(H_n(t)\) of degree n,

$$\begin{aligned} H_n(t) = \frac{(2i)^n}{\sqrt{\pi }} \int _{-\infty }^{\infty } \mathrm{d}x e^{-(x+it)^2} x^n\ , \end{aligned}$$
(2.21)

that can be made monic as follows, \(2^{-n} H_n(x)=x^n+O(x^{n-2})\). This leads to the integral

$$\begin{aligned} (2i)^{-n}H_n(ia)=\frac{1}{\sqrt{\pi }}\int _{-\infty }^\infty \mathrm{d}s s^n e^{-(s-a)^2}\ , \end{aligned}$$
(2.22)

from which we can read off

$$\begin{aligned} \pi _{k}(a)=\int _{-\infty }^{\infty } \mathrm{d}x x^{k}\frac{e^{-(x-a)^2}}{\sqrt{\pi }}\ , \end{aligned}$$
(2.23)

with \(\pi _{k}(a)=(2i)^{-k}H_{k}(ia)\), for \(k=0,1,\ldots \), which is again monic. Thus condition (2.17) is satisfied.

For the second condition (2.18), we use the integral [25, 7.374.6]

$$\begin{aligned} y^{n} = \frac{1}{\sqrt{\pi }} \int _{-\infty }^{\infty } \mathrm{d}x \ 2^{-n} H_n(x) e^{-(x-y)^2} \ . \end{aligned}$$
(2.24)

Renaming \(y=iz\) and \(x=is\) we obtain

$$\begin{aligned} z^{k}=\int _{I^\prime } \mathrm{d}s F(s,z)\pi _{k}(s)\ , \quad \text{ for }\ k=0,1,\ldots \end{aligned}$$
(2.25)

with \(I^\prime =i{\mathbb {R}}\) and \(F(s,z)= \frac{i}{\sqrt{\pi }}e^{(s-z)^2}\).
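Both conditions can be checked numerically. In the sketch below (our own illustration, using simple grid quadrature), the contour \(I^\prime =i{\mathbb {R}}\) is parametrised as \(s=it\) with the orientation fixed so that (2.25) reproduces \(z^k\); this absorbs the factor i of F into the parametrisation.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def pi_k(k, a):
    # monic polynomial pi_k(a) = (2i)^{-k} H_k(ia), cf. below Eq. (2.23)
    c = np.zeros(k + 1); c[k] = 1.0          # coefficient vector selecting H_k
    return (2j)**(-k) * hermval(1j * np.asarray(a, dtype=complex), c)

k, a = 4, 0.7
x = np.linspace(-12.0, 12.0, 40001)
moment = np.trapz(x**k * np.exp(-(x - a)**2) / np.sqrt(np.pi), x)   # Eq. (2.23)
print(moment, pi_k(k, a).real)               # both ~ 2.4601

# Eq. (2.25) with s = it and our orientation convention:
# z^k = (1/sqrt(pi)) * int dt exp((it - z)^2) pi_k(it)
z, t = 0.3 + 0.2j, np.linspace(-12.0, 12.0, 40001)
rhs = np.trapz(np.exp((1j * t - z)**2) * pi_k(k, 1j * t), t) / np.sqrt(np.pi)
print(rhs, z**k)                             # agree to quadrature accuracy
```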

Remark 2.7

Example 2.6 is the simplest case of a much wider class of polynomial ensembles of Pólya type convolved with fixed matrices, as introduced in [37, Theorem II.3]. Such polynomial ensembles generalise the form (2.20) to

$$\begin{aligned} \varphi (a_l,x) =f(x-a_l)\ , \end{aligned}$$
(2.26)

such that f is \((N-1)\)-times differentiable on \({\mathbb {R}}\), analytic on \({\mathbb {C}}\), and the moments of its derivatives exist,

$$\begin{aligned} \left| \int _{-\infty }^\infty \mathrm{d}x x^{k}\frac{\partial ^j f(x)}{\partial x^j}\right| <\infty \ ,\quad \forall k,j=0,1,\ldots , N-1. \end{aligned}$$
(2.27)

It immediately follows, upon shifting the integration variable, that its generalised moment matrix leads to polynomials, and thus (2.17) is satisfied. Using the Fourier transform of f, it is not too difficult to show that condition (2.18) of Definition 2.4 is also satisfied, and thus these ensembles are invertible.

Example 2.8

Our second example is the chiral GUE with external field (2.10), for which \(I = {\mathbb {R}}_+\) and the functions \(\varphi _l(x)\) can be chosen as

$$\begin{aligned} \varphi _l(x)=\varphi (a_l, x) = \left( \frac{x}{a_l}\right) ^{\nu /2} e^{-\left( x+a_l\right) }I_\nu (2\sqrt{a_l x})\ , \end{aligned}$$
(2.28)

which is analytic, with the \(a_l\) positive real numbers. The following integral is known, see e.g. [25, 6.631.10] after analytic continuation,

$$\begin{aligned} \int _0^{\infty } x^{n+\frac{\nu }{2}} e^{-x} I_\nu ( 2 \sqrt{ax}) \mathrm{d}x = n! a^{\nu /2} e^{a} L_n^{\nu } \left( -a \right) \ . \end{aligned}$$
(2.29)

Here, \(L_n^\nu (y)\) is the standard generalised Laguerre polynomial of degree n, which is made monic as follows, \(n!L_n^\nu (-x)=x^n+O(x^{n-1})\). Then, the first condition (2.17) is satisfied,

$$\begin{aligned} \pi _{k}(a)=\int _0^{\infty } \mathrm{d}x x^{k} \left( \frac{x}{a}\right) ^{\frac{\nu }{2}} e^{-(x+a)} I_\nu (2\sqrt{ax}), \end{aligned}$$
(2.30)

with \(\pi _{k}(a)=k!L_{k}^\nu (-a)\) for \(k=0,1,\ldots .\)

For the second condition (2.18), we consider the following integral, see [25, 7.421.6], which is also called Hankel transform,

$$\begin{aligned} \int _0^\infty \mathrm{d}t t^{\nu /2}e^{-t} n!L_n^\nu (t) J_\nu \left( 2\sqrt{zt}\right) =z^nz^{\nu /2}e^{-z}. \end{aligned}$$
(2.31)

Bringing the prefactors to the other side and making the substitution \(t=-s\), so that the same monic polynomials \(n!L_n^\nu (-s)\) as above appear in the integrand, we obtain, after using \(I_\nu (x)=i^{-\nu }J_\nu (ix)\),

$$\begin{aligned} z^{k}=\int _{I^\prime }\mathrm{d}s F(s,z) \pi _{k}(s)\ , \quad \text{ for }\ k=0,1,\ldots \end{aligned}$$
(2.32)

with \(F(s,z)= (-1)^\nu \left( \frac{s}{z}\right) ^{\nu /2}e^{s+z} I_\nu \left( 2\sqrt{zs} \right) \) and \(I^\prime = {\mathbb {R}}_{-}\).
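Again both conditions are easy to test numerically; the following sketch is ours, with illustrative parameter values, integer \(\nu \), and real \(z>0\) in the second check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, jv, eval_genlaguerre, factorial

nu, n, a = 2, 3, 1.5                          # integer order nu for simplicity

# Eq. (2.29): the moment integral gives the monic polynomial n! L_n^nu(-a)
lhs, _ = quad(lambda x: x**(n + nu / 2) * np.exp(-x) * iv(nu, 2 * np.sqrt(a * x)),
              0.0, 80.0)
print(lhs, factorial(n) * a**(nu / 2) * np.exp(a) * eval_genlaguerre(n, nu, -a))

# Eq. (2.31): the Hankel-type transform, checked here for real z > 0
z = 0.8
lhs2, _ = quad(lambda t: t**(nu / 2) * np.exp(-t) * factorial(n)
               * eval_genlaguerre(n, nu, t) * jv(nu, 2 * np.sqrt(z * t)),
               0.0, 80.0)
print(lhs2, z**(n + nu / 2) * np.exp(-z))
```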

Now we state the second main result of the present paper which gives a formula for averages of products and ratios of characteristic polynomials in the case of invertible ensembles.

Theorem 2.9

Consider a polynomial ensemble (2.1) formed by \(x_1\), \(\ldots \), \(x_N\), and assume that this ensemble is invertible in the sense of Definition 2.4. Then we have for \(L\le N\)

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ \frac{\prod _{m=1}^{M} D_{N}(z_m)}{\prod _{l=1}^{L} D_{N}(y_l) } \right] \\&\quad =\frac{ (-1)^{ \frac{L(L-1)}{2}} }{L!\Delta _M(z_1,\ldots ,z_M)} \left[ \prod _{j=1}^M \int _{I^\prime }\mathrm{d}s_j F(s_j,z_j) \prod _{n=1}^N(s_j-a_{n})\!\right] \!\Delta _M(s_1,\ldots ,s_M)\\&\qquad \times \left[ \prod _{l=1}^{L} \int _I dv_l \left( \frac{v_{l}}{y_l}\right) ^{N-L} \frac{\prod _{m=1}^M(z_m-v_l)}{\prod _{j=1}^{L} (y_j - v_{l})}\right] \Delta _L(v_1,\ldots ,v_L)\\&\qquad \times \left[ \prod _{l=1}^{L} \oint _{C_l} \frac{du_l}{2\pi i} \frac{1}{\prod _{n=1}^{N} (u_{l}-a_n)} \frac{\varphi (u_{l},v_l)}{\prod _{j=1}^M(s_j-u_{l})}\right] \Delta _L(u_{1},\ldots ,u_{L})\ , \end{aligned} \end{aligned}$$
(2.33)

where \(D_N(z)=\prod _{n=1}^N(z-x_n)\) denotes the characteristic polynomial associated with the random variables \(x_1\), \(\ldots \), \(x_N\), the parameters \(y_1,\ldots ,y_L \in {\mathbb {C}} \backslash {\mathbb {R}}\) and \(z_1,\ldots ,z_M \in {\mathbb {C}}\) are pairwise distinct, and all contours \(C_l\) with \(l=1,\ldots ,L\) encircle the points \(a_1,\ldots ,a_N\) counter-clockwise.

We note that Theorem 2.9 generalises Theorem 5.1 in [24] for the ratio of two characteristic polynomials, derived for the polynomial ensemble with \(\varphi (a,x)=x^{{\mathcal {L}}}e^{-x}I_0(2\sqrt{ax})\), to general ratios in invertible polynomial ensembles. Clearly, it is well suited for the asymptotic analysis when \(N\rightarrow \infty \) as the number of integrations does not depend on N.

2.4 A Formula for the Correlation Kernel for Invertible Ensembles

It is well known that each polynomial ensemble is a determinantal process. For invertible polynomial ensembles (see Definition 2.4), Theorem 2.9 enables us to deduce a double contour integration formula for the correlation kernel.

Proposition 2.10

Consider an invertible polynomial ensemble, i.e. a polynomial ensemble defined by (2.1), where the functions \(\varphi _l(x)=\varphi (a_l,x)\) satisfy the conditions specified in Definition  2.4. The correlation kernel \(K_N(x,y)\) of this ensemble can be written as

$$\begin{aligned} K_N(x,y)=\frac{1}{2\pi i}\int \limits _{I^\prime }\mathrm{d}sF(s,x)\prod _{n=1}^N\left( s-a_n\right) \oint \limits _{C}du\frac{\varphi (u,y)}{(s-u)\prod _{n=1}^N\left( u-a_n\right) }, \end{aligned}$$
(2.34)

where C encircles the points \(a_1\), \(\ldots \), \(a_N\) counter-clockwise, and where \(\varphi (u,y)\) and \(F(s,x)\) are defined by Eqs. (2.17) and (2.18), respectively.

Proof

We use the following fact valid for any polynomial ensemble formed by \(x_1\), \(\ldots \), \(x_N\) on \(I\subseteq {\mathbb {R}}\), see Ref. [19]. Assume that

$$\begin{aligned} {\mathbb {E}}\left( \prod \limits _{k=1}^N\frac{x-x_k}{z-x_k}\right) =\int \limits _Idv\frac{x-v}{z-v}\Phi _N(x,v), \end{aligned}$$
(2.35)

where the function \(v\rightarrow \Phi _N(x,v)\) is analytic for all \(v\in I\). Then the correlation kernel of the determinantal process formed by \(x_1,\ldots ,x_N\) is given by

$$\begin{aligned} K_N(x,y)=\Phi _N(x,y). \end{aligned}$$

In our case, Theorem 2.9 gives

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left( \prod \limits _{k=1}^N\frac{x-x_k}{z-x_k}\right) = \frac{1}{2\pi i}\int \limits _{I}dv\left( \frac{v}{z}\right) ^{N-1}\frac{x-v}{z-v}\\&\quad \times \left[ \int \limits _{I^\prime }\mathrm{d}sF(s,x)\prod _{n=1}^N \left( s-a_n\right) \oint \limits _{C}du\frac{\varphi (u,v)}{(s-u) \prod _{n=1}^N\left( u-a_n\right) }\right] , \end{aligned} \end{aligned}$$
(2.36)

which leads to the formula for the correlation kernel in the statement of the Proposition. \(\square \)
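To illustrate Proposition 2.10 concretely, the following sketch of ours evaluates (2.34) for the GUE with external field of Example 2.6 and arbitrary \(a_n\), doing the u-integral by residues at \(u=a_n\) and the s-integral along \(I^\prime =i{\mathbb {R}}\) parametrised as in Example 2.6, and verifies the standard normalisation \(\int _I K_N(x,x)\,\mathrm{d}x=N\) of a correlation kernel:

```python
import numpy as np

# Kernel (2.34) for N = 2 in the invertible GUE external-field ensemble:
# residues at u = a_n for the contour integral, grid quadrature for s = it.
a = np.array([-0.4, 0.9])
N = len(a)
t = np.linspace(-10.0, 10.0, 20001)
s = 1j * t

def phi(al, x):                               # phi(a, x) of Eq. (2.20)
    return np.exp(-(x - al)**2) / np.sqrt(np.pi)

def kernel(x, y):
    K = 0.0
    for l, al in enumerate(a):
        others = np.delete(a, l)
        res = phi(al, y) / np.prod(al - others)          # residue at u = a_l
        sint = np.trapz(np.exp((s - x)**2)
                        * np.prod(s[:, None] - others, axis=1), t).real / np.sqrt(np.pi)
        K += res * sint
    return K

# |x| is kept moderate: for large |x| the oscillatory s-integral suffers
# cancellation in double precision; the Gaussian tails are negligible here.
xs = np.linspace(-5.0, 5.0, 501)
print(np.trapz([kernel(xi, xi) for xi in xs], xs))       # ~ N = 2
```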

3 Proof of Theorem 2.2

Let \(x_1\), \(\ldots \), \(x_N\) form a polynomial ensemble on \(I^N\), where \(I\subseteq {\mathbb {R}}\). The probability density function of this ensemble is defined by Eq. (2.1). Denote by \({\widetilde{s}}_{\lambda }\) the expectation of the Schur polynomial \(s_{\lambda }\left( x_1,\ldots ,x_N\right) \) with respect to this ensemble,

$$\begin{aligned} {\widetilde{s}}_{\lambda }={\mathbb {E}}\left( s_{\lambda }\left( x_1,\ldots ,x_N\right) \right) . \end{aligned}$$
(3.1)

Our aim is to show that \({\widetilde{s}}_{\lambda }\) satisfies the Giambelli formula, i.e.

$$\begin{aligned} {\widetilde{s}}_{\lambda }=\det \left[ {\widetilde{s}}_{\left( p_i|q_j \right) }\right] _{i,j=1}^d, \end{aligned}$$
(3.2)

where \(\lambda \) is an arbitrary Young diagram, \(\lambda =\left( p_1,\ldots ,p_d\vert q_1,\ldots ,q_d\right) \) in the Frobenius coordinates. According to Definition 2.1, this will mean that the polynomial ensemble under consideration is a Giambelli compatible point process.

The proof of Eq. (3.2) below is based on the following general fact due to Macdonald, see [40, Example I.3.21].

Proposition 3.1

Let \(\{ h_{r,s} \}\) with integer \(r \in {\mathbb {Z}}\) and non-negative integer \(s\in {\mathbb {N}}\) be a collection of commuting indeterminates such that we have

$$\begin{aligned} \forall s \in {\mathbb {N}}: h_{0,s} = 1 \ \text{ and } \ \forall r<0\ h_{r,s} = 0 \ , \end{aligned}$$
(3.3)

and set

$$\begin{aligned} {\widetilde{s}}_{\lambda }=\det \left[ h_{\lambda _i-i+j,j-1} \right] _{i,j=1}^{k}, \end{aligned}$$
(3.4)

where k is any number such that \(k\ge l(\lambda )\). Then we have

$$\begin{aligned} {\widetilde{s}}_\lambda = \det \left[ {\widetilde{s}}_{(p_i \vert q_j )} \right] _{i,j=1}^{d}, \end{aligned}$$
(3.5)

where \(\lambda \) is an arbitrary Young diagram, \(\lambda =\left( p_1,\ldots ,p_d\vert q_1,\ldots ,q_d\right) \) in the Frobenius coordinates.

Clearly, in order to apply Proposition 3.1 to \({\widetilde{s}}_{\lambda }\) defined by Eq. (3.1) we need to construct a collection of indeterminates \(\{ h_{r,s} \}\) such that

$$\begin{aligned} {\mathbb {E}}\left( s_{\lambda }\left( x_1,\ldots ,x_N\right) \right) =\det \left[ h_{\lambda _i-i+j,j-1} \right] _{i,j=1}^{k} \end{aligned}$$
(3.6)

will hold true for an arbitrary Young diagram \(\lambda \) and for an arbitrary \(k\ge l(\lambda )\), and such that condition (3.3) will be satisfied.

By Andréief’s integration formula (2.4) and the expression for the normalisation constant \({\mathcal {Z}}_N\) (2.2), we can write

$$\begin{aligned} \begin{aligned} {\mathbb {E}}\left[ s_\lambda (x_1,\ldots ,x_N) \right] = \frac{\det \left[ \int _I \mathrm{d}x x^{\lambda _i +N-i}\varphi _j(x) \right] _{i,j=1}^{N}}{\det \left[ \int _I \mathrm{d}x x^{N-i}\varphi _j(x) \right] _{i,j=1}^{N}}\ , \end{aligned} \end{aligned}$$
(3.7)

where we used (A.1) and Eq. (2.12). Notice that at this point it matters that we consider polynomial ensembles and not more general bi-orthogonal ensembles. In the latter case, the Vandermonde determinant in the denominator of the Schur function (2.12) would not cancel, the Andréief formula would not apply, and we would not know how to compute such expectation values. Set

$$\begin{aligned} A_{n,m}=\int _I \mathrm{d}x x^{n}\varphi _m(x);\;\; n=0,1,\ldots ;\;\;m=1,\ldots , N, \end{aligned}$$
(3.8)

and denote by \(Q=\left( Q_{i,j}\right) _{i,j=1}^N\) the inverse of \({\tilde{G}}=\left( {\tilde{g}}_{i,j}\right) _{i,j=1}^N\), where \({\tilde{g}}_{i,j}=\int _I\mathrm{d}xx^{N-i}\varphi _j(x)\). With this notation we can rewrite Eq. (3.7) as

$$\begin{aligned} {\mathbb {E}}\left( s_\lambda (x_1,\ldots ,x_N) \right) =\det \left[ \sum \limits _{\nu =1}^NA_{\lambda _i+N-i,\nu }Q_{\nu ,j}\right] _{i,j=1}^N. \end{aligned}$$
(3.9)

Since Q is the inverse of \({\tilde{G}}\), we have

$$\begin{aligned} \sum \limits _{j=1}^N{\tilde{g}}_{i,j}Q_{j,k}=\delta _{i,k},\;\; 1\le i,k\le N, \end{aligned}$$
(3.10)

or

$$\begin{aligned} \sum \limits _{j=1}^NA_{N-i,j}Q_{j,k}=\delta _{i,k},\;\; 1\le i,k\le N. \end{aligned}$$
(3.11)

The following Proposition will imply Theorem 2.2.

Proposition 3.2

Let \(\{ h_{r,s} \}\), with integer \(r \in {\mathbb {Z}}\) and non-negative integer \(s\in {\mathbb {Z}}_{\ge 0}\), be a collection of indeterminates defined by

$$\begin{aligned} h_{r,s} \equiv {\left\{ \begin{array}{ll} \sum \nolimits _{\nu =1}^{N} A_{N+r-s-1,\nu }Q_{\nu ,s+1}, &{} s \in \{0,1,\ldots ,N-1\} , \quad r \ge 0 , \\ \delta _{r,0} , &{} s \ge N , \quad r \ge 0 , \\ 0 , &{} s\ge 0 , \quad r < 0 \text {.} \end{array}\right. } \end{aligned}$$
(3.12)

The collection of indeterminates \(\{ h_{r,s} \}\) satisfies condition (3.3). Moreover, with this collection of indeterminates \(\{ h_{r,s} \}\), formula (3.6) holds true for an arbitrary Young diagram \(\lambda \) and for an arbitrary \(k\ge l(\lambda )\).

Proof

We divide the proof into several steps. First, the collection of indeterminates \(\{ h_{r,s} \}\) defined by (3.12) is shown to satisfy condition (3.3). Next, we prove that Eq. (3.6) holds true for an arbitrary Young diagram \(\lambda \) and for an arbitrary \(k\ge l(\lambda )\).

Step 1. First, we want to show that

$$\begin{aligned} \det \left[ h_{\lambda _i -i+j,j-1} \right] _{i,j=1}^{k}= \det \left[ h_{\lambda _i-i+j,j-1} \right] _{i,j=1}^{l(\lambda )}\ , \end{aligned}$$
(3.13)

for any \(k\ge l(\lambda )\).

Let \(\lambda \) be an arbitrary Young diagram and assume that \(k>l(\lambda )\). Consider the diagonal entries of the \(k\times k\) matrix

$$\begin{aligned} \left( h_{\lambda _i - i +j,j-1} \right) _{i,j=1}^{k} \end{aligned}$$

for \(i=j \in \{ l(\lambda )+1,\ldots ,k \}\). By the definition of the \(h_{r,s}\), these entries are all equal to 1, since \(\lambda _i = 0\) for \(i \in \{ l(\lambda )+1,\ldots ,k \}\), so that \(h_{0,s}=1\) by condition (3.3). For \(r<0\), we have \(h_{r,s}=0\) (see Eq. (3.12)) and the matrix \(\left( h_{\lambda _i-i+j,j-1}\right) _{i,j=1}^k\) has the form

$$\begin{aligned} \left( \begin{matrix} \star &{} \ldots &{} \star &{} \vert &{} \star &{} \ldots &{} \ldots &{} \ldots &{} \star \\ \vdots &{} \ddots &{} \vdots &{} \vert &{} \vdots &{} \ddots &{} \ddots &{} \ddots &{} \vdots \\ \star &{} \ldots &{} \star &{} \vert &{} \star &{} \ldots &{} \ldots &{} \ldots &{} \star \\ -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- \\ 0 &{} \ldots &{} 0 &{} \vert &{} 1 &{} \star &{} \ldots &{} \ldots &{} \star \\ \vdots &{} &{} \vdots &{} \vert &{} 0 &{} 1 &{} \star &{} \ldots &{} \star \\ \vdots &{} \ddots &{} \vdots &{} \vert &{} \vdots &{} \ddots &{} \ddots &{} \ddots &{} \vdots \\ \vdots &{} &{} \vdots &{} \vert &{} \vdots &{} &{} \ddots &{} \ddots &{} \star \\ 0 &{} \ldots &{} 0 &{} \vert &{} 0 &{} \ldots &{} \ldots &{} 0 &{} 1 \\ \end{matrix}\right) , \end{aligned}$$

where the first row from the top with zeros has the label \(l(\lambda )+1\), and the first column from the left with ones has the label \(l(\lambda )+1\). The determinant of such a block matrix reduces to the product of the determinants of the blocks, which gives relation (3.13).

Step 2. Assume now that \(l(\lambda ) >N\). Then it trivially holds that

$$\begin{aligned} {\mathbb {E}}\left( s_\lambda (x_1,\ldots ,x_N) \right) = 0\ , \end{aligned}$$

by the very definition of the Schur polynomials. Here, we would like to show that it equally holds that

$$\begin{aligned} \det \left[ h_{\lambda _i-i+j,j-1} \right] _{i,j=1}^{l(\lambda )} = 0\, , \end{aligned}$$

if \(l(\lambda ) > N\).

We have \(h_{r,s} = \delta _{r,0}\) for \(s\ge N\) and \(r\ge 0\). This implies that the matrix \(\left( h_{\lambda _i-i+j,j-1}\right) _{i,j=1}^{l(\lambda )}\), which we can write out as

$$\begin{aligned} \left( \begin{matrix} h_{\lambda _1,0} &{} \star &{} \ldots &{} \star &{} \vert &{} h_{\lambda _1+N,N} &{} \ldots &{} \ldots &{} h_{\lambda _1-1+l(\lambda ),l(\lambda )-1} \\ \star &{} h_{\lambda _2,1} &{} \ddots &{} \vdots &{} \vert &{}\vdots &{} &{} &{} \vdots \\ \vdots &{} \ddots &{}\ddots &{} \star &{} \vert &{} \vdots &{} &{} &{} \vdots \\ \star &{} \ldots &{} \star &{} h_{\lambda _N,N-1} &{}\vert &{} h_{\lambda _N+1,N} &{} \ldots &{} \ldots &{} h_{\lambda _N-N+l(\lambda ),l(\lambda )-1} \\ -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- \\ \star &{} \ldots &{} \ldots &{} \star &{} \vert &{} h_{\lambda _{N+1},N} &{} \ldots &{} \ldots &{} h_{\lambda _{N+1}-N-1+l(\lambda ),l(\lambda )-1} \\ \vdots &{} &{} &{} \vdots &{} \vert &{} \star &{} \ddots &{} &{} \vdots \\ \vdots &{} &{} &{} \vdots &{} \vert &{} \vdots &{} \ddots &{} \ddots &{} \vdots \\ \star &{} \ldots &{} \ldots &{} \star &{} \vert &{} \star &{} \ldots &{} \star &{} h_{\lambda _{l(\lambda )},l(\lambda )-1} \\ \end{matrix} \right) \end{aligned}$$

has the form

$$\begin{aligned} \left( \begin{matrix} h_{\lambda _1,0} &{} \star &{} \ldots &{} \star &{} \vert &{}0 &{} \ldots &{} \ldots &{} 0 \\ \star &{} h_{\lambda _2,1} &{} \ddots &{} \vdots &{} \vert &{}\vdots &{} &{} &{} \vdots \\ \vdots &{} \ddots &{}\ddots &{} \star &{} \vert &{} \vdots &{} &{} &{} \vdots \\ \star &{} \ldots &{} \star &{} h_{\lambda _N,N-1} &{}\vert &{} 0 &{} \ldots &{} \ldots &{} 0 \\ -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- &{} -- \\ \star &{} \ldots &{} \ldots &{} \star &{} \vert &{} 0 &{} \ldots &{} \ldots &{} 0 \\ \vdots &{} &{} &{} \vdots &{} \vert &{} \star &{} \ddots &{} &{} \vdots \\ \vdots &{} &{} &{} \vdots &{} \vert &{} \vdots &{} \ddots &{} \ddots &{} \vdots \\ \star &{} \ldots &{} \ldots &{} \star &{} \vert &{} \star &{} \ldots &{} \star &{}0 \\ \end{matrix} \right) . \end{aligned}$$

Thus, we can again apply the formula for determinants of block matrices to obtain

$$\begin{aligned} \det \left[ h_{\lambda _i-i+j,j-1} \right] _{i,j=1}^{l(\lambda )} = \det \left[ h_{\lambda _i-i+j,j-1} \right] _{i,j=1}^{N} \cdot 0 = 0\ , \end{aligned}$$

which is true for any \(l(\lambda ) >N\) and therefore condition (3.6) is satisfied in this case.

Step 3. Now we wish to prove that

$$\begin{aligned} \sum \limits _{\nu =1}^NA_{N-i+\lambda _i,\nu }Q_{\nu ,j}=h_{\lambda _i-i+j,j-1} \end{aligned}$$
(3.14)

is valid for any Young diagram with \(l(\lambda )\le N\), and for \(1\le i,j\le N\). Assume that \(\lambda _i-i+j\ge 0\). Then (3.14) turns into the first equation in (3.12) with \(r=\lambda _i-i+j\), \(s=j-1\). Assume that \(\lambda _i-i+j<0\), then \(i-\lambda _i>j\). Clearly, \(i-\lambda _i\in \{1,\ldots ,N\}\) in this case, and we have

$$\begin{aligned} \sum \limits _{\nu =1}^NA_{N-i+\lambda _i,\nu }Q_{\nu ,j}=\delta _{i-\lambda _i,j}=0, \end{aligned}$$

where we have used Eq. (3.11). Also, if \(\lambda _i-i+j<0\), and \(1\le i,j\le N\), then \(h_{\lambda _i-i+j,j-1}=0\) as it follows from Eq. (3.12). We conclude that (3.14) holds true for \(\lambda _i-i+j<0\) as well.

Finally, the results obtained in Steps 1–3 together with formula (3.9) give the desired formula (3.6). \(\square \)

4 Proof of Theorem 2.9

Denoting by \(S_K\) the symmetric group of a set of K elements, i.e. the group of permutations of these elements, we will utilise the following Lemma, which was proven in [22].

Lemma 4.1

Let L be an integer with \(1\le L\le N\), and let \(x_1,\ldots ,x_N\) and \(y_1,\ldots ,y_L\) denote two sets of parameters that are pairwise distinct. Then the following identity holds

$$\begin{aligned} \begin{aligned}&\prod _{l=1}^{L} \frac{y_l^{N-L}}{\prod _{n=1}^{N} (y_l-x_n)} = \!\sum _{\sigma \in S_N/(S_{N-L}\times S_L)}\Delta _L(x_{\sigma (1)},\ldots ,x_{\sigma (L)})\!\!\!\!\!\\&\quad \times \frac{ \Delta _{N-L}(x_{\sigma (L+1)},\ldots ,x_{\sigma (N)}) \prod _{n=1}^{L}x_{\sigma (n)}^{N-L}}{\Delta _N(x_{\sigma (1)},\ldots , x_{\sigma (N)})\prod _{n,l=1}^{L} (y_l - x_{\sigma (n)})} \end{aligned} \end{aligned}$$
(4.1)

on the coset of the permutation group.

As shown in [22], this follows from the Cauchy–Littlewood formula and the determinantal formula for the Schur polynomials (2.12). We can use this identity to reduce the number of variables in the inverse characteristic polynomials from N to L. Applied to the averages of products and ratios of characteristic polynomials, we obtain

$$\begin{aligned}&{\mathbb {E}} \left[ \frac{\prod _{m=1}^{M} D_{N}(z_m)}{\prod _{l=1}^{L}D_{N}(y_l) } \right] \nonumber \\&\quad = \frac{N!}{(N-L)!L!{\mathcal {Z}}_N} \left[ \prod _{n=1}^{N} \int _I \mathrm{d}x_n \prod _{m=1}^M(z_m-x_n)\right] \det [\varphi _l(x_k)]_{k,l=1}^{N}\nonumber \\&\qquad \times \frac{\prod _{k=1}^{L}\left( \frac{x_{k}}{y_k}\right) ^{N-L}}{\prod _{n,l=1}^{L} (y_l - x_{n})}\ \Delta _L(x_{1},\ldots ,x_{L})\Delta _{N-L}(x_{L+1},\ldots ,x_{N})\ , \end{aligned}$$
(4.2)

where we used the fact that each term in the sum over permutations gives the same contribution to the expectation. Hence, we can undo the permutations under the sum by a change of variables and replace the sum over \(S_N/(S_{N-L} \times S_L)\) by the cardinality \(N!/((N-L)!L!)\) of the coset space. Next, we expand the determinant \(\det \left[ \varphi _l(x_k) \right] _{k,l=1}^{N} \) and then separate the integration over the first L variables \(x_{l=1,\ldots ,L}\) and the remaining \(N-L\) variables \(x_{n=L+1,\ldots ,N}\), by also splitting the characteristic polynomials accordingly. This gives

$$\begin{aligned} \begin{aligned}&{\mathbb {E}} \left[ \frac{\prod _{m=1}^{M} D_{N}(z_m)}{\prod _{l=1}^{L} D_{N}(y_l) } \right] \\&\quad =\frac{N!}{(N-L)!L!{\mathcal {Z}}_N} \sum _{\sigma \in S_N}{\mathrm{sgn\,}}(\sigma )\\&\qquad \quad \times \left[ \prod _{l=1}^{L} \int _I \mathrm{d}x_l \ \varphi _{\sigma (l)}(x_l) \frac{x_{l}^{N-L}}{y_l^{N-L}} \frac{\prod _{m=1}^M(z_m-x_l)}{\prod _{j=1}^{L} (y_j - x_{l})}\right] \Delta _L(x_{1},\ldots ,x_{L})\\&\quad \qquad \times \left[ \prod _{k=L+1}^{N} \int _I \mathrm{d}x_k \ \varphi _{\sigma (k)}(x_k) \prod _{m=1}^M(z_m-x_k) \right] \Delta _{N-L}(x_{L+1},\ldots ,x_{N})\ . \end{aligned} \end{aligned}$$
(4.3)

Because we are aiming at an expression that will be amenable to taking the large-N limit, we now focus on the integrals over \(N-L\) variables in the second line, which we denote by J. Here, we make use of one of the properties of the Vandermonde determinant, namely the absorption of the M characteristic polynomials in J into a larger Vandermonde determinant, see (A.2), to write

$$\begin{aligned} J=\left[ \prod _{k=L+1}^{N} \int _I \mathrm{d}x_k \ \varphi _{\sigma (k)}(x_k)\right] \frac{\Delta _{N-L+M}(z_1,\ldots ,z_M,x_{L+1},\ldots ,x_{N})}{\Delta _{M}(z_{1},\ldots ,z_{M})}. \end{aligned}$$

We use the representation (A.1), pull the integrations \(\int _I \mathrm{d}x_k \ \varphi _{\sigma (k)}(x_k)\) into the corresponding columns, and use definition (2.3) of the generalised moment matrix to obtain

$$\begin{aligned} J= & {} \frac{1}{\Delta _M(z_1,\ldots ,z_M)} \\&\times \left| \begin{matrix} z_1^{N+M-L-1} &{} \ldots &{} z_M^{N+M-L-1} &{} g_{N+M-L,\sigma (L+1)} &{} \ldots &{} g_{N+M-L,\sigma (N)} \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ z_1 &{} \ldots &{} z_M &{} g_{2,\sigma (L+1)} &{} \ldots &{} g_{2,\sigma (N)} \\ 1 &{} \ldots &{} 1 &{} g_{1,\sigma (L+1)} &{} \ldots &{} g_{1,\sigma (N)} \\ \end{matrix} \right| . \end{aligned}$$

Property (2.17) of invertible polynomial ensembles enables us to rewrite J as

$$\begin{aligned} \begin{aligned} J&=\frac{1}{\Delta _M(z_1,\ldots ,z_M)}\\&\quad \times \left| \begin{matrix} z_1^{N+M-L-1} &{} \ldots &{} z_M^{N+M-L-1} &{} \pi _{N+M-L-1}(a_{\sigma (L+1)}) &{} \ldots &{} \pi _{N+M-L-1}(a_{\sigma (N)})\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots &{} \vdots \\ z_1 &{} \ldots &{} z_M &{} \pi _{1}(a_{\sigma (L+1)}) &{} \ldots &{} \pi _{1}(a_{\sigma (N)}) \\ 1 &{} \ldots &{} 1 &{} \pi _{0}(a_{\sigma (L+1)}) &{} \ldots &{} \pi _{0}(a_{\sigma (N)}) \\ \end{matrix} \right| . \end{aligned} \end{aligned}$$

Property (2.18) allows us to replace again the determinant of monic polynomials by a Vandermonde determinant of size \(N-L+M\) to obtain

$$\begin{aligned} J= & {} \frac{\Delta _{N-L}(a_{\sigma (L+1)},\ldots ,a_{\sigma (N)}) }{\Delta _M(z_1,\ldots ,z_M)} \\&\times \left[ \prod _{j=1}^M\int _{I^\prime }\mathrm{d}t_j F(t_j,z_j) \prod _{n=L+1}^N(t_j-a_{\sigma (n)}) \right] \Delta _M(t_1,\ldots ,t_M) . \end{aligned}$$

Let us come back to the expectation value of characteristic polynomials in the form (4.3) and insert what we have derived for J above. This gives

$$\begin{aligned}&{\mathbb {E}}\left[ \frac{\prod _{m=1}^{M} D_{N}(z_m)}{\prod _{l=1}^{L} D_{N}(y_l) } \right] \nonumber \\&\quad =\frac{N!}{(N-L)!L!{\mathcal {Z}}_N\Delta _M(z_1,\ldots ,z_M)}\nonumber \\&\qquad \times \left[ \prod _{j=1}^M \int _{I^\prime }\mathrm{d}t_j F(t_j,z_j) \prod _{n=1}^N(t_j-a_{n})\right] \Delta _M(t_1,\ldots ,t_M)\nonumber \\&\qquad \times \left[ \prod _{l=1}^{L} \int _I \mathrm{d}x_l \left( \frac{x_{l}}{y_l}\right) ^{N-L} \frac{\prod _{m=1}^M(z_m-x_l)}{\prod _{j=1}^{L} (y_j - x_{l})}\right] \Delta _L(x_{1},\ldots ,x_{L})\nonumber \\&\quad \quad \times \sum _{\sigma \in S_N}{\mathrm{sgn\,}}(\sigma )\Delta _{N-L}(a_{\sigma (L+1)},\ldots ,a_{\sigma (N)}) \prod _{l=1}^L \frac{\varphi (a_{\sigma (l)},x_l)}{\prod _{j=1}^M(t_j-a_{\sigma (l)})}. \end{aligned}$$
(4.4)

The integrals are now put into a form to apply the following Lemma that will allow us to simplify (and eventually get rid of) the sum over permutations.

Lemma 4.2

Let \(S_N\) denote the permutation group of \(\left\{ 1,\ldots ,N\right\} \), and let \(S_L\) be the subgroup of \(S_N\) realised as the permutation group of the first L elements \(\{1,\ldots ,L\}\). Also, let \(S_{N-L}\) be the subgroup of \(S_N\) realised as the permutation group of the remaining \(N-L\) elements \(\{L+1,\ldots ,N\}\). Assume that F is a complex valued function on \(S_N\) which satisfies the condition \(F(\sigma h)=F(\sigma )\) for each \(\sigma \in S_N\), and each \(h\in S_L\times S_{N-L}\). Then we have

$$\begin{aligned} \sum \limits _{\sigma \in S_N}F(\sigma )=(N-L)!L!\sum \limits _{1\le l_1<\cdots <l_L\le N}F\left( \left( l_1,\ldots ,l_{L},1,\ldots ,{\check{l}}_1, \ldots ,{\check{l}}_L\ldots ,N\right) \right) , \end{aligned}$$
(4.5)

where \(\left( i_1,\ldots ,i_N\right) \) is a one-line notation for the permutation \(\left( \begin{array}{cccc} 1 &{} 2 &{} \ldots &{} N \\ i_1 &{} i_2 &{} \ldots &{} i_N \end{array} \right) \), and notation \({\check{l}}_p\) means that \(l_p\) is removed from the list.

Proof

Recall that if G is a finite group, and H is its subgroup, then there are transversal elements \(t_1,\ldots ,t_k\in G\) for the left cosets of H such that \(G=t_1H\uplus \cdots \uplus t_kH\), where \(\uplus \) denotes disjoint union. It follows that if F is a function on G with the property \(F(gh)=F(g)\) for any \(g\in G\), and any \(h\in H\), then

$$\begin{aligned} \sum \limits _{g\in G}F(g)=|H|\sum \limits _{i=1}^kF\left( t_i\right) , \end{aligned}$$
(4.6)

where |H| denotes the number of elements in H. In our situation \(G=S_N\), \(H=S_L\times S_{N-L}\), and each transversal element can be represented as a permutation

$$\begin{aligned} \left( l_1,\ldots ,l_{L},1,\ldots ,{\check{l}}_1,\ldots ,{\check{l}}_L\ldots ,N\right) , \end{aligned}$$

written in one-line notation, where \(1\le l_1<\cdots <l_L\le N\). Moreover, each collection of numbers \(l_1\), \(\ldots \), \(l_L\) satisfying the condition \(1\le l_1<\cdots <l_L\le N\) gives a transversal element for the left cosets of \(H=S_L\times S_{N-L}\) in \(G=S_N\). We conclude that Eq. (4.6) is reduced to Eq. (4.5). \(\square \)
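Before applying the lemma, here is a quick brute-force illustration of (4.5) (our own sketch; the choice of F, which depends only on the two unordered blocks of indices, is arbitrary):

```python
import itertools, math

# Check Eq. (4.5) for N = 4, L = 2 with an F invariant under S_L x S_{N-L}.
N, L = 4, 2
a = [0.3, -1.2, 0.5, 2.0]

def F(perm):                                  # perm = (sigma(1),...,sigma(N)), values 1..N
    head, tail = perm[:L], perm[L:]
    return math.prod(a[i - 1] for i in head) + sum(a[i - 1] for i in tail)

total = sum(F(p) for p in itertools.permutations(range(1, N + 1)))
coset = sum(F(tuple(l) + tuple(m for m in range(1, N + 1) if m not in l))
            for l in itertools.combinations(range(1, N + 1), L))
print(total, math.factorial(L) * math.factorial(N - L) * coset)   # equal
```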

Assume that \(\Phi (x_1,\ldots ,x_L)\) is antisymmetric under permutations \(\sigma \) of its L variables, i.e.

$$\begin{aligned} \Phi (x_{\sigma (1)},\ldots ,x_{\sigma (L)})={\mathrm{sgn\,}}(\sigma )\Phi (x_1,\ldots ,x_L), \end{aligned}$$

and that \(L\le N\). Let F be the function on \(S_N\) defined by

$$\begin{aligned} \begin{aligned} F(\sigma )&={\mathrm{sgn\,}}(\sigma ) \Delta _{N-L}(a_{\sigma (L+1)},\ldots ,a_{\sigma (N)}) \\&\quad \times \left[ \prod _{k=1}^L\int _I\mathrm{d}x_k f(a_{\sigma (k)},x_k)\right] \Phi (x_1,\ldots ,x_L)\ ,\\ \end{aligned} \end{aligned}$$

where f is a function of two variables. Clearly, F satisfies the condition \(F(\sigma h)=F(\sigma )\) for each \(\sigma \in S_N\), and each \(h\in S_L\times S_{N-L}\). Application of Lemma 4.2 to this function gives

$$\begin{aligned} \begin{aligned} \sum \limits _{\sigma \in S_N}F(\sigma )&=(N-L)!L! \sum \limits _{1\le l_1<\cdots <l_L\le N}{\mathrm{sgn\,}}\left( \left( l_1,\ldots ,l_{L},1,\ldots ,{\check{l}}_1, \ldots ,{\check{l}}_L\ldots ,N\right) \right) \\&\quad \times \Delta _{N-L}^{(l_1,\ldots ,l_L)}\left( a_1,\ldots ,a_N\right) \left[ \prod _{k=1}^L\int _I\mathrm{d}x_k f(a_{l_k},x_k)\right] \Phi (x_1,\ldots ,x_L)\ , \end{aligned} \end{aligned}$$

where the reduced Vandermonde determinant is defined in (A.6). Taking into account that

$$\begin{aligned} {\mathrm{sgn\,}}\left( \left( l_1,\ldots ,l_{L},1,\ldots ,{\check{l}}_1,\ldots , {\check{l}}_L\ldots ,N\right) \right) =(-1)^{l_1+\cdots +l_L-\frac{L(L+1)}{2}}, \end{aligned}$$
(4.7)

we obtain the formula

$$\begin{aligned} \begin{aligned}&\sum \limits _{\sigma \in S_N}{\mathrm{sgn\,}}(\sigma )\Delta _{N-L}(a_{\sigma (L+1)},\ldots ,a_{\sigma (N)})\left[ \prod _{k=1}^L\int _I\mathrm{d}x_k f(a_{\sigma (k)},x_k)\right] \Phi (x_1,\ldots ,x_L)\\&\quad =(N-L)!L! \sum \limits _{1\le l_1<\cdots <l_L\le N}(-1)^{l_1+\cdots +l_L-\frac{L(L+1)}{2}} \Delta _{N-L}^{(l_1,\ldots ,l_L)}\left( a_1,\ldots ,a_N\right) \\&\qquad \times \left[ \prod _{k=1}^L\int _I\mathrm{d}x_k f(a_{l_k},x_k)\right] \Phi (x_1,\ldots ,x_L)\ , \end{aligned} \end{aligned}$$
(4.8)

valid for any antisymmetric function \(\Phi (x_1,\ldots ,x_L)\), and for any function f(x,y) such that the integrals in the equation above exist.

Formula (4.8) enables us to rewrite Eq. (4.4) as

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ \frac{\prod _{m=1}^{M} D_{N}(z_m)}{\prod _{l=1}^{L} D_{N}(y_l) } \right] \\&\quad =\frac{N!}{{\mathcal {Z}}_N\Delta _M(z_1,\ldots ,z_M)} \left[ \prod _{j=1}^M \int _{I^\prime }\mathrm{d}t_j F(t_j,z_j) \prod _{n=1}^N(t_j-a_{n})\right] \Delta _M(t_1,\ldots ,t_M)\\&\qquad \times \left[ \prod _{l=1}^{L} \int _I \mathrm{d}x_l \left( \frac{x_{l}}{y_l}\right) ^{N-L} \frac{\prod _{m=1}^M(z_m-x_l)}{\prod _{j=1}^{L} (y_j - x_{l})}\right] \Delta _L(x_{1},\ldots ,x_{L})\\&\qquad \times \sum \limits _{1\le l_1<\cdots <l_L\le N}(-1)^{l_1+\cdots +l_L-\frac{L(L+1)}{2}} \Delta _{N-L}^{(l_1,\ldots ,l_L)}\left( a_1,\ldots ,a_N\right) \\&\quad \qquad \times \prod _{i=1}^L \frac{\varphi (a_{l_i},x_i)}{\prod _{j=1}^M(t_j-a_{l_i})}. \end{aligned} \end{aligned}$$
(4.9)

We note that due to (A.7) it holds

$$\begin{aligned} \frac{\Delta _{N-L}^{(l_1,\ldots ,l_L)}\left( a_1,\ldots ,a_N\right) }{\Delta _{N} \left( a_1,\ldots ,a_N\right) } =\frac{(-1)^{l_1+\cdots +l_L-L} \Delta _{L}\left( a_{l_1},\ldots ,a_{l_L}\right) }{\underset{n\ne l_1}{\prod \nolimits _{n=1}^N}\left( a_{l_1}-a_n\right) \ldots \underset{n\ne l_L}{\prod \nolimits _{n=1}^N}\left( a_{l_L}-a_n\right) }. \end{aligned}$$
(4.10)

In addition, we apply (2.19) to eliminate \({\mathcal {Z}}_N\), cancel signs, and see that the strict ordering of the indices \(l_1<l_2<\cdots <l_L\) can be relaxed,

$$\begin{aligned} L! \sum _{1\le l_1<\cdots <l_L \le N} \rightarrow \sum _{l_1=1}^N\cdots \sum _{l_L=1}^N\ . \end{aligned}$$

Finally, we see that the sum in formula (4.9) can be written as contour integrals, because of the formula

$$\begin{aligned} \frac{1}{2\pi i} \oint _{C} du \frac{f(u)}{\prod _{n=1}^{N} (u-a_n)} = \sum _{l=1}^{N} \frac{f(a_l)}{\prod _{\begin{array}{c} n=1 \\ n \ne l \end{array}}^{N} (a_l -a_n)}\ , \end{aligned}$$
(4.11)

where the contour C encircles the points \(a_1,\ldots ,a_N\) counter-clockwise. This leads to the formula in the statement of Theorem 2.9. \(\square \)
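The last step can be made tangible with a small numerical check of the residue identity (4.11) (our own sketch; the poles \(a_n\), the radius of C and the test function f are arbitrary choices):

```python
import numpy as np

# Check (4.11): the contour integral equals the sum over residues at a_n.
a = np.array([0.3, -1.1, 0.7, 2.0])
f = lambda u: np.exp(u) * u**2                # analytic inside the contour

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
u = 4.0 * np.exp(1j * theta)                  # circle of radius 4 encircles all a_n
integrand = f(u) / np.prod(u[:, None] - a, axis=1) * (1j * u)   # du = i u dtheta
lhs = np.trapz(integrand, theta) / (2j * np.pi)
rhs = sum(f(al) / np.prod(al - np.delete(a, l)) for l, al in enumerate(a))
print(lhs.real, rhs)                          # agree to quadrature accuracy
```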

5 Special Cases

In Proposition 2.10, we have used Eq. (2.33) in the case \(M=L=1\). Another case of interest is that corresponding to products of characteristic polynomials. In this case \(L=0\), and we obtain that only the first set of integrals remains in (2.33), i.e.

$$\begin{aligned} {\mathbb {E}}\left[ {\prod _{m=1}^{M} D_{N}(z_m)} \right] = \frac{\det [{\mathcal {B}}_{i}(z_{j})]_{i,j=1}^M}{\Delta _M(z_1,\ldots ,z_M)}\, , \end{aligned}$$
(5.1)

where

$$\begin{aligned} {\mathcal {B}}_{i}(z) = \int _{I^\prime }\mathrm{d}s F(s,z)\,s^{M-i} \prod _{n=1}^N(s-a_{n})\ , \end{aligned}$$
(5.2)

after pulling the M integrations over the \(s_j\)’s into the Vandermonde determinant of size M. This result could also have been computed directly using Lemma A.2.
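For the invertible GUE with external field of Example 2.6, formula (5.1) with \(M=1\) can be compared against a direct Monte Carlo average. The sketch below is our own check, with the contour \(I^\prime =i{\mathbb {R}}\) parametrised and oriented as in the check after Example 2.6, and all parameter values arbitrary:

```python
import numpy as np

# Check E[D_N(z)] = B_1(z) of Eqs. (5.1)-(5.2) for phi(a,x) = e^{-(x-a)^2}/sqrt(pi):
# with s = it one has B_1(z) = (1/sqrt(pi)) int dt e^{(it-z)^2} prod_n (it - a_n).
rng = np.random.default_rng(3)
N, samples, z = 4, 100000, 1.0
a = np.array([-0.8, -0.1, 0.5, 1.3])          # external-field eigenvalues

def sample_shifted_gue(a, rng):               # H = A + W, cf. Eq. (2.7)
    n = len(a)
    X = rng.normal(scale=0.5, size=(n, n)) + 1j * rng.normal(scale=0.5, size=(n, n))
    W = np.triu(X, 1)
    W = W + W.conj().T + np.diag(rng.normal(scale=np.sqrt(0.5), size=n))
    return np.diag(a) + W

mc = np.mean([np.linalg.det(z * np.eye(N) - sample_shifted_gue(a, rng)).real
              for _ in range(samples)])

t = np.linspace(-10.0, 10.0, 20001)
s = 1j * t
contour = (np.trapz(np.exp((s - z)**2) * np.prod(s[:, None] - a, axis=1), t).real
           / np.sqrt(np.pi))
print(mc, contour)                            # agree up to Monte Carlo error
```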

As a final special case of interest, we look at the ratio of \(M+1\) characteristic polynomials over a single one at \(L=1\). This object is needed in the application to finite temperature QCD, cf. [45]. Theorem 2.9 gives

$$\begin{aligned}&{\mathbb {E}}\left[ \frac{\prod _{m=1}^{M+1} D_{N}(z_m)}{D_{N}(y) } \right] \nonumber \\&\quad =\frac{1}{\Delta _{M+1}(z_1,\ldots ,z_{M+1})}\nonumber \\&\qquad \times \left( \prod _{j=1}^{M+1} \int _{I^\prime }\mathrm{d}s_j F(s_j,z_j) \prod _{n=1}^N(s_j-a_{n})\right) \Delta _{M+1}(s_1,\ldots ,s_{M+1}) \nonumber \\&\qquad \times \int _I dv \left( \frac{v}{y}\right) ^{N-1} \frac{\prod _{m=1}^{M+1}(z_m-v)}{(y - v)} \nonumber \\&\qquad \times \oint _{C} \frac{du}{2\pi i} \frac{1}{\prod _{n=1}^{N} (u-a_n)} \frac{\varphi (u,v)}{\prod _{j=1}^{M+1}(s_j-u)}\ . \end{aligned}$$
(5.3)

Following [19], we may use the Lagrange interpolation formula

$$\begin{aligned} \frac{1}{\prod _{j=1}^{M+1}(u-s_j)}=\sum _{m=1}^{M+1} \frac{1}{u-s_m}\prod _{\begin{array}{c} j=1\\ j\ne m \end{array}}^{M+1}\frac{1}{s_m-s_j}\ , \end{aligned}$$
(5.4)

to rewrite

$$\begin{aligned}&\frac{1}{\prod _{j=1}^{M+1}(s_j-u)}\Delta _{M+1}(s_1,\ldots ,s_{M+1}) =\nonumber \\&\quad (-1)^{M+1}\sum _{m=1}^{M+1}\frac{(-1)^{m-1}}{u-s_m}\Delta _M^{(m)}(s_1,\ldots ,s_{M+1})\, . \end{aligned}$$
(5.5)

This leads to the following rewriting of (5.3)

$$\begin{aligned}&{\mathbb {E}} \left[ \frac{\prod _{m=1}^{M+1} D_{N}(z_m)}{D_{N}(y) } \right] \nonumber \\&\quad =\frac{ (-1)^{M} }{\Delta _{M+1}(z_1,\ldots ,z_{M+1})} \int _I dv \left( \frac{v}{y}\right) ^{N-1} \nonumber \\&\quad \qquad \times \frac{\prod _{m=1}^{M+1}(z_m-v)}{(y - v)} \oint _{C} \frac{du}{2\pi i} \frac{\varphi (u,v)}{\prod _{n=1}^{N} (u-a_n)} \nonumber \\&\qquad \times \sum _{m=1}^{M+1}(-1)^{m} \left( \prod _{j=1}^{M+1} \int _{I^\prime }\mathrm{d}s_j F(s_j,z_j) \prod _{n=1}^N(s_j-a_{n})\right) \nonumber \\&\quad \qquad \times \frac{1}{u-s_m}\Delta _M^{(m)}(s_1,\ldots ,s_{M+1}) \nonumber \\&\quad = \frac{ (-1)^{M} }{\Delta _{M+1}(z_1,\ldots ,z_{M+1})} \int _I dv \left( \frac{v}{y}\right) ^{N-1} \nonumber \\&\quad \qquad \times \frac{\prod _{m=1}^{M+1}(z_m-v)}{(y - v)} \oint _{C} \frac{du}{2\pi i} \frac{\varphi (u,v)}{\prod _{n=1}^{N} (u-a_n)} \nonumber \\&\qquad \quad \times \det \left[ \begin{matrix} {\mathcal {A}}(z_1,u) &{} \ldots &{} {\mathcal {A}}(z_{M+1},u) \\ {\mathcal {B}}_{1}(z_1) &{} \ldots &{} {\mathcal {B}}_{1}(z_{M+1}) \\ \vdots &{} \ldots &{} \vdots \\ {\mathcal {B}}_{M}(z_1) &{} \ldots &{} {\mathcal {B}}_{M}(z_{M+1}) \\ \end{matrix} \right] , \end{aligned}$$
(5.6)

where we have defined

$$\begin{aligned} {\mathcal {A}}(z,u) = \int _{I^\prime }\mathrm{d}s F(s,z) \frac{-1}{u-s} \prod _{n=1}^N(s-a_{n})\ . \end{aligned}$$
(5.7)

In the second step in (5.6), we have first pulled all the s-integrals except the one over \(s_m\) into the Vandermonde determinant \(\Delta _M^{(m)}(s_1,\ldots ,s_{M+1})\), leading to a determinant of size M with matrix elements \({\mathcal {B}}_i(z_j)\) (5.2). We then recognise that the sum is a Laplace expansion of a determinant of size \(M+1\) with respect to the first row, containing the matrix elements \({\mathcal {A}}(z_j,u)\) (5.7). This reveals the determinantal form of the corresponding kernel.

Fyodorov, Grela, and Strahov [24] considered the probability density defined by

$$\begin{aligned} P_N^{{\mathcal {L}}}\left( x_1,\ldots ,x_N\right) =\frac{1}{{\mathcal {Z}}_N^{{\mathcal {L}}}}\Delta _N\left( x_1,\ldots ,x_N\right) \det \left[ x_k^{{\mathcal {L}}}e^{-(x_k+a_l)}I_0 \left( 2\sqrt{a_lx_k}\right) \right] _{k,l=1}^N \end{aligned}$$
(5.8)

on \({\mathbb {R}}_+^N\). Note that this polynomial ensemble is invertible only for \({\mathcal {L}}=0\), as follows from Eqs. (2.30) and (2.32). However, computations of different averages with respect to \(P_N^{{\mathcal {L}}}\) can be reduced to those with respect to \(P_N^{{\mathcal {L}}=0}\), i.e. with respect to an invertible ensemble. Indeed, we have

$$\begin{aligned} {\mathbb {E}}_{P_N^{{\mathcal {L}}}}\left( f\left( x_1,\ldots ,x_N\right) \right) =\frac{{\mathbb {E}}_{P_N^{{\mathcal {L}}=0}}\left( f\left( x_1,\ldots ,x_N\right) \prod _{l=1}^{{\mathcal {L}}}D_N\left( z_l\right) \right) }{{\mathbb {E}}_{P_N^{{\mathcal {L}}=0}}\left( \prod _{l=1}^{{\mathcal {L}}}D_N\left( z_l\right) \right) } \biggl |_{z_1,\ldots ,z_{{\mathcal {L}}}=0} \end{aligned}$$
(5.9)

for any function \(f\left( x_1,\ldots ,x_N\right) \) such that the expectations in the formula above exist. In particular, we can reproduce the results of [24] for the expectation value of a single characteristic polynomial, its inverse or a single ratio. Without going much into detail, we need two ingredients for this check. First, in order to perform the limit of vanishing arguments in Eq. (5.9), it is useful to antisymmetrise the product of the first \({\mathcal {L}}\) functions \(F(t_j,z_j)\) using the Vandermonde determinant \(\Delta _{{\mathcal {L}}+1}(t_1,\ldots ,t_{{\mathcal {L}}+1})\) in (2.33). We are then led to consider

$$\begin{aligned} \lim _{z_1,\ldots ,z_{{\mathcal {L}}}\rightarrow 0} \frac{\det \left[ I_0(2\sqrt{z_it_j})\right] _{i,j=1}^{{\mathcal {L}}}}{\Delta _{{\mathcal {L}}}(z_1,\ldots ,z_{{\mathcal {L}}})}= & {} \lim _{z\rightarrow 0} \det \left[ \frac{t_i^{j-1}}{(j-1)!}\frac{I_{j-1}(2\sqrt{zt_i})}{\sqrt{zt_i}^{j-1}}\right] _{i,j=1}^{{\mathcal {L}}} \nonumber \\= & {} \frac{(-1)^{{\mathcal {L}}({\mathcal {L}}-1)/2}}{\prod _{j=1}^{{\mathcal {L}}}(j-1)!^2}\Delta _{{\mathcal {L}}}(t_1,\ldots ,t_{{\mathcal {L}}})\ , \end{aligned}$$
(5.10)

after first taking the limit of degenerate arguments, which is then sent to zero. Obviously, we first separate the remaining non-vanishing argument \(z_{{\mathcal {L}}+1}\) from the Vandermonde determinant by \(\Delta _{{\mathcal {L}}}(z_1,\ldots ,z_{{\mathcal {L}}}) \prod _{l=1}^{{\mathcal {L}}}(z_l-z_{{\mathcal {L}}+1}) =\Delta _{{\mathcal {L}}+1}(z_1,\ldots ,z_{{\mathcal {L}}+1})\).

Second, we need an equivalent formulation of Propositions 3.1 and 3.5 employed in [24], which are due to [19, 21], respectively.

Proposition 5.1

$$\begin{aligned} {\mathbb {E}}_{{\mathcal {P}}} \left[ \frac{1}{D_{N}(y) } \right]= & {} \frac{1}{\det G} \left| \begin{matrix} g_{1,1} &{} \ldots &{} g_{1,N} \\ \vdots &{} \ddots &{} \vdots \\ g_{N-1,1} &{} \ldots &{} g_{N-1,N} \\ \int _0^\infty \frac{du\varphi _1(u)}{y-u}\left( \frac{u}{y}\right) ^{N-1} &{} \ldots &{} \int _0^\infty \frac{du\varphi _N(u)}{y-u}\left( \frac{u}{y}\right) ^{N-1} \\ \end{matrix} \right| \nonumber \\= & {} \int _0^\infty \frac{du}{y-u}\left( \frac{u}{y}\right) ^{N-1} \sum _{j=1}^N c_{N,j} \varphi _j(u)\ , \end{aligned}$$
(5.11)

where C is the inverse of the \(N\times N\) moment matrix G, and \(c_{i,j}\) are the matrix elements of \(C^T\).

Proof

Eqs. (5.11) were stated in [24] following [19, 21], without the factors of \((u/y)^{N-1}\). The equivalence of the two statements can be seen as follows. Expanding the geometric series inside the determinant without these factors, we have

$$\begin{aligned}&\frac{1}{\det G} \left| \begin{matrix} g_{1,1} &{} \ldots &{} g_{1,N} \\ \vdots &{} \ddots &{} \vdots \\ g_{N-1,1} &{} \ldots &{} g_{N-1,N} \\ \int _0^\infty {du\varphi _1(u)}\sum _{j=0}^\infty \frac{u^j}{y^{j+1}} &{} \ldots &{} \int _0^\infty {du\varphi _N(u)}\sum _{j=0}^\infty \frac{u^j}{y^{j+1}}\\ \end{matrix} \right| \\&\quad = \left| \begin{matrix} g_{1,1} &{} \ldots &{} g_{1,N} \\ \vdots &{} \ddots &{} \vdots \\ g_{N-1,1} &{} \ldots &{} g_{N-1,N} \\ \sum _{j=N}^\infty \frac{g_{j,1}}{y^{j}} &{} \ldots &{} \sum _{j=N}^\infty \frac{g_{j,N}}{y^{j}}\\ \end{matrix} \right| \det [c_{i,j}]_{i,j=1}^N\ . \end{aligned}$$

If we perform the integrals in the last row, we obtain infinite series in the generalised moments \(g_{k,l}\), the first \(N-1\) terms of which can be removed by subtracting multiples of the upper \(N-1\) rows. Rewriting the last row as integrals and resumming the series, we arrive at the first line of (5.11).

The second line in (5.11) is obtained as follows. Using that \(\det [c_{i,j}]_{i,j=1}^N=1/\det G\) and then multiplying the matrix inside the determinant by C from the right, we obtain the identity matrix, except for the last row, as C is the inverse of the finite, \(N\times N\)-dimensional matrix G. Laplace expanding with respect to the last column leads to the desired result. \(\square \)

Employing Proposition 5.1 in place of the corresponding statements in [24], it is not difficult to see that from our Theorem 2.9 together with (5.10) we obtain an equivalent form of [24, Theorem 4.1] for a single characteristic polynomial, of [24, Theorem 3.4] for its inverse, and of [24, Theorem 5.1] for a single ratio.